ERIC Educational Resources Information Center
Taylor, Maureen; Kent, Michael L.
1999-01-01
Explores assumptions underlying Malaysia's and the United States' public-relations practice. Finds many assumptions guiding Western theories and practices are not applicable to other countries. Examines the assumption that the practice of public relations targets a variety of key organizational publics. Advances international public-relations…
Key rate for calibration robust entanglement based BB84 quantum key distribution protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gittsovich, O.; Moroder, T.
2014-12-04
We apply the approach of verifying entanglement based on the sole knowledge of the dimension of the underlying physical system to the entanglement-based version of the BB84 quantum key distribution protocol. We show that the familiar one-way key rate formula already holds under the sole assumption that one of the parties measures a qubit; no further assumptions about the measurement are needed.
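For orientation, the one-way key rate formula referred to above is commonly stated as follows (generic textbook notation assumed here, not a transcription of the paper's derivation; e_x and e_z denote the error rates in the two conjugate bases and h the binary entropy):

```latex
r \;\ge\; 1 - h(e_x) - h(e_z),
\qquad
h(p) \;=\; -p\log_2 p \;-\; (1-p)\log_2(1-p).
```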
The Power of Proofs-of-Possession: Securing Multiparty Signatures against Rogue-Key Attacks
NASA Astrophysics Data System (ADS)
Ristenpart, Thomas; Yilek, Scott
Multiparty signature protocols need protection against rogue-key attacks, made possible whenever an adversary can choose its public key(s) arbitrarily. For many schemes, provable security has only been established under the knowledge of secret key (KOSK) assumption where the adversary is required to reveal the secret keys it utilizes. In practice, certifying authorities rarely require the strong proofs of knowledge of secret keys required to substantiate the KOSK assumption. Instead, proofs of possession (POPs) are required and can be as simple as just a signature over the certificate request message. We propose a general registered key model, within which we can model both the KOSK assumption and in-use POP protocols. We show that simple POP protocols yield provable security of Boldyreva's multisignature scheme [11], the LOSSW multisignature scheme [28], and a 2-user ring signature scheme due to Bender, Katz, and Morselli [10]. Our results are the first to provide formal evidence that POPs can stop rogue-key attacks.
Assessing the viability of the independent practice of dental hygiene--a brief communication.
Beach, M Miles; Shulman, Jay D; Johns, Glenna; Paas, Jeffrey C
2007-01-01
This paper deals with the economics of independent dental hygiene practice. Using historical data from dental practices in Cincinnati, Ohio, we developed a business model for an independent hygiene practice. We tested the sensitivity of the model to variations in key assumptions (initial capitalization, interest, employee salary, and owner's draw). We described profitability on the basis of the break-even point. Under the most permissive regulatory and financial environment, the practice would break even after 26 months. However, the owner would not equal the earnings of a salaried hygienist until the initial loan is paid off after 7 years. The model was not sensitive to 20 percent changes in the key assumptions. Under ideal circumstances, an independent hygiene practice could be profitable.
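A minimal break-even sketch in the spirit of the model described above. All figures are placeholder assumptions for illustration; the study's actual Cincinnati inputs are not given in the abstract.

```python
# Break-even sketch for an independent hygiene practice (illustrative only).
initial_loan = 75_000.0       # start-up capitalization (assumed)
annual_rate = 0.08            # loan interest rate (assumed)
monthly_revenue = 9_000.0     # collections per month (assumed)
monthly_expenses = 6_500.0    # rent, supplies, salaries, owner's draw (assumed)

balance = initial_loan
month = 0
while balance > 0 and month < 240:
    month += 1
    balance *= 1 + annual_rate / 12                # accrue interest on the loan
    balance -= monthly_revenue - monthly_expenses  # apply net monthly cash flow
print(f"Loan retired after {month} months" if balance <= 0
      else "Loan not retired within 20 years")
```

Varying any single input (for example, a 20 percent higher employee salary) and re-running gives the kind of sensitivity check the authors report.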
NASA Astrophysics Data System (ADS)
Sisk-Hilton, Stephanie Lee
This study examines the two-way relationship between an inquiry-based professional development model and teacher enactors. The two-year study follows a group of teachers enacting the emergent Supporting Knowledge Integration for Inquiry Practice (SKIIP) professional development model. This study seeks to: (a) identify activity structures in the model that interact with teachers' underlying assumptions regarding professional development and inquiry learning; (b) explain key decision points during implementation in terms of these underlying assumptions; and (c) examine the impact of key activity structures on individual teachers' stated belief structures regarding inquiry learning. Linn's knowledge integration framework facilitates description and analysis of teacher development. Three sets of tensions emerge as themes that describe and constrain participants' interaction with and learning through the model. These are: learning from the group vs. learning on one's own; choosing and evaluating evidence based on impressions vs. specific criteria; and acquiring new knowledge vs. maintaining feelings of autonomy and efficacy. In each of these tensions, existing group goals and operating assumptions initially fell at one end of the tension, while the professional development goals and forms fell at the other. Changes to the model occurred as participants reacted to and negotiated these points of tension. As the group engaged in and modified the SKIIP model, they had repeated opportunities to articulate goals and to make connections between goals and model activity structures. Over time, decisions to modify the model took into consideration an increasingly complex set of underlying assumptions and goals. Teachers identified and sought to balance these tensions. This led to more complex and nuanced decision making, which reflected growing capacity to consider multiple goals in choosing activity structures to enact. The study identifies key activity structures that scaffolded this process for teachers, and which ultimately promoted knowledge integration at both the group and individual levels. This study is an "extreme case" which examines implementation of the SKIIP model under very favorable conditions. Lessons learned regarding appropriate levels of model responsiveness, likely areas of conflict between model form and teacher underlying assumptions, and activity structures that scaffold knowledge integration provide a starting point for future, larger scale implementation.
Quantum key distribution with finite resources: Secret key rates via Renyi entropies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abruzzo, Silvestre; Kampermann, Hermann; Mertz, Markus
A realistic quantum key distribution (QKD) protocol necessarily deals with finite resources, such as the number of signals exchanged by the two parties. We derive a bound on the secret key rate which is expressed as an optimization problem over Renyi entropies. Under the assumption of collective attacks by an eavesdropper, a computable estimate of our bound for the six-state protocol is provided. This bound leads to improved key rates in comparison to previous results.
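For reference, the Rényi entropies over which such bounds are optimized are defined in the standard way (generic notation, not the paper's):

```latex
S_\alpha(\rho) \;=\; \frac{1}{1-\alpha}\,\log_2 \operatorname{Tr}\,\rho^{\alpha},
\qquad \alpha \in (0,1)\cup(1,\infty),
\qquad
\lim_{\alpha \to 1} S_\alpha(\rho) \;=\; S(\rho) \;=\; -\operatorname{Tr}\,\rho\log_2\rho .
```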
Furrer, F; Franz, T; Berta, M; Leverrier, A; Scholz, V B; Tomamichel, M; Werner, R F
2012-09-07
We provide a security analysis for continuous variable quantum key distribution protocols based on the transmission of two-mode squeezed vacuum states measured via homodyne detection. We employ a version of the entropic uncertainty relation for smooth entropies to give a lower bound on the number of secret bits which can be extracted from a finite number of runs of the protocol. This bound is valid under general coherent attacks, and gives rise to keys which are composably secure. For comparison, we also give a lower bound valid under the assumption of collective attacks. For both scenarios, we find positive key rates using experimental parameters reachable today.
When Nice Won't Suffice: Honest Discourse Is Key to Shifting School Culture
ERIC Educational Resources Information Center
MacDonald, Elisa
2011-01-01
The "culture of nice" is the underlying culture that inhibits a team of teachers from reaching a level of rigorous collaborative discourse where teachers are challenging each other's and their own thinking, beliefs, assumptions, and practice. This article discusses how honest discourse can be the key to shifting school culture. The act of…
A Critical Examination of the DOD’s Business Management Modernization Program
2005-05-01
Program (BMMP) is a key element of the DoD's ongoing efforts to transform itself. This paper argues that the BMMP needs to be fundamentally reoriented...communication role it plays in the defense-transformation effort. Introduction: The core assumption underlying the DoD's Business Management... government activities. That this is a core assumption for the BMMP is borne out by the fact that the program's primary objective is to produce
Gariano, John; Neifeld, Mark; Djordjevic, Ivan
2017-01-20
Here, we present engineering trade studies of a free-space optical communication system operating over a 30 km maritime channel for the months of January and July. The system under study follows the BB84 protocol with the following assumptions: a weak coherent source is used, Eve performs the intercept-resend attack and the photon-number-splitting attack, Eve's location is assumed to be known, and Eve is allowed to know a small percentage of the final key. In this system, we examine the effect of changing several parameters in the following areas: the implementation of the BB84 protocol over the public channel, the technology in the receiver, and our assumptions about Eve. For each parameter, we examine how different values impact the secure key rate for a constant brightness. Additionally, we optimize the brightness of the source for each parameter to study the improvement in the secure key rate.
Password-only authenticated three-party key exchange with provable security in the standard model.
Nam, Junghyun; Choo, Kim-Kwang Raymond; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon; Won, Dongho
2014-01-01
Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks.
Commentary: Using Potential Outcomes to Understand Causal Mediation Analysis
ERIC Educational Resources Information Center
Imai, Kosuke; Jo, Booil; Stuart, Elizabeth A.
2011-01-01
In this commentary, we demonstrate how the potential outcomes framework can help understand the key identification assumptions underlying causal mediation analysis. We show that this framework can lead to the development of alternative research design and statistical analysis strategies applicable to the longitudinal data settings considered by…
Password-Only Authenticated Three-Party Key Exchange with Provable Security in the Standard Model
Nam, Junghyun; Kim, Junghwan; Kang, Hyun-Kyu; Kim, Jinsoo; Paik, Juryon
2014-01-01
Protocols for password-only authenticated key exchange (PAKE) in the three-party setting allow two clients registered with the same authentication server to derive a common secret key from their individual password shared with the server. Existing three-party PAKE protocols were proven secure under the assumption of the existence of random oracles or in a model that does not consider insider attacks. Therefore, these protocols may turn out to be insecure when the random oracle is instantiated with a particular hash function or an insider attack is mounted against the partner client. The contribution of this paper is to present the first three-party PAKE protocol whose security is proven without any idealized assumptions in a model that captures insider attacks. The proof model we use is a variant of the indistinguishability-based model of Bellare, Pointcheval, and Rogaway (2000), which is one of the most widely accepted models for security analysis of password-based key exchange protocols. We demonstrated that our protocol achieves not only the typical indistinguishability-based security of session keys but also the password security against undetectable online dictionary attacks. PMID:24977229
Through a glass, darkly: U.S. marriage discourse and neoliberalism.
Marzullo, Michelle
2011-01-01
This article draws together research insights on marriage in the U.S. to argue that, over the last 40 years, discussions on the subject show an active engagement with neoliberalism. Using discourse analysis, I consider how the underlying assumptions that inform the key concepts of autonomy, individualism, responsibility, and universality have been re-semanticized through neoliberal ideology to change the ways that Americans think of marriage (and themselves). In light of these changed assumptions, this article urges a reexamination of the activism and identity politics around marriage as well as further academic research on the topic.
The Perception of Error in Production Plants of a Chemical Organisation
ERIC Educational Resources Information Center
Seifried, Jurgen; Hopfer, Eva
2013-01-01
There is considerable current interest in error-friendly corporate culture, one particular research question being how and under what conditions errors are learnt from in the workplace. This paper starts from the assumption that errors are inevitable and considers key factors which affect learning from errors in high responsibility organisations,…
Mead, Habermas, and Levinas: Cultivating Subjectivity in Education for Democracy
ERIC Educational Resources Information Center
Zhao, Guoping
2014-01-01
For several decades education has struggled to find a way out of the entanglement of modernity, the premises and assumptions under which modern education has operated. According to Robin Usher and Richard Edwards, modern education, as the "dutiful child of the Enlightenment," has been "allotted a key role in the forming and shaping…
Siphonic Concepts Examined: A Carbon Dioxide Gas Siphon and Siphons in Vacuum
ERIC Educational Resources Information Center
Ramette, Joshua J.; Ramette, Richard W.
2011-01-01
Misconceptions of siphon action include assumptions that intermolecular attractions play a key role and that siphons will operate in a vacuum. These are belied by the siphoning of gaseous carbon dioxide and behaviour of siphons under reduced pressure. These procedures are suitable for classroom demonstrations. The principles of siphon action are…
Measurement Equivalence of the Autism Symptom Phenotype in Children and Youth
ERIC Educational Resources Information Center
Duku, Eric; Szatmari, Peter; Vaillancourt, Tracy; Georgiades, Stelios; Thompson, Ann; Liu, Xiao-Qing; Paterson, Andrew D.; Bennett, Terry
2013-01-01
Background: The Autism Diagnostic Interview-Revised (ADI-R) is a gold standard assessment of Autism Spectrum Disorder (ASD) symptoms and behaviours. A key underlying assumption of studies using the ADI-R is that it measures the same phenotypic constructs across different populations (i.e. males/females, younger/older, verbal/nonverbal). The…
Long-distance quantum key distribution with imperfect devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lo Piparo, Nicoló; Razavi, Mohsen
2014-12-04
Quantum key distribution over probabilistic quantum repeaters is addressed. We compare, under practical assumptions, two such schemes in terms of their secure key generation rate per memory, R_QKD. The two schemes under investigation are the one proposed by Duan et al. in [Nat. 414, 413 (2001)] and that of Sangouard et al. proposed in [Phys. Rev. A 76, 050301 (2007)]. We consider various sources of imperfections in the latter protocol, such as a nonzero double-photon probability for the source, dark count per pulse, channel loss and inefficiencies in photodetectors and memories, to find the rate for different nesting levels. We determine the maximum value of the double-photon probability beyond which it is not possible to share a secret key anymore. We find the crossover distance for up to three nesting levels. We finally compare the two protocols.
Measurement-Device-Independent Quantum Cryptography
NASA Astrophysics Data System (ADS)
Tang, Zhiyuan
Quantum key distribution (QKD) enables two legitimate parties to share a secret key even in the presence of an eavesdropper. The unconditional security of QKD is based on the fundamental laws of quantum physics. Original security proofs of QKD are based on a few assumptions, e.g., perfect single-photon sources and perfect single-photon detectors. However, practical implementations of QKD systems do not fully comply with such assumptions due to technical limitations. The gap between theory and implementations leads to security loopholes in most QKD systems, and several attacks have been launched on sophisticated QKD systems. Particularly, the detectors have been found to be the most vulnerable part of QKD. Much effort has been devoted to building side-channel-free QKD systems. Solutions such as security patches and device-independent QKD have been proposed. However, the former are normally ad hoc, and cannot close unidentified loopholes. The latter, while having the advantage of removing all assumptions on devices, is impractical to implement today. Measurement-device-independent QKD (MDI-QKD) turns out to be a promising solution to the security problem of QKD. In MDI-QKD, all security loopholes, including those yet to be discovered, have been removed from the detectors, the most critical part in QKD. In this thesis, we investigate issues related to the practical implementation and security of MDI-QKD. We first present a demonstration of polarization-encoding MDI-QKD. Taking finite-key effects into account, we achieve a secret key rate of 0.005 bit per second (bps) over 10 km of spooled telecom fiber, and a 1600-bit key is distributed. This work, together with other demonstrations, shows the practicality of MDI-QKD. Next we investigate a critical assumption of MDI-QKD: perfect state preparation. We apply the loss-tolerant QKD protocol and adapt it to MDI-QKD to quantify information leakage due to imperfect state preparation. We then present an experimental demonstration of MDI-QKD over 10 km and 40 km of spooled fiber, which for the first time considers the impact of inaccurate polarization state preparation on the secret key rate. This would not have been possible under previous security proofs, given the same amount of state preparation flaws.
Estimating psychiatric manpower requirements based on patients' needs.
Faulkner, L R; Goldman, C R
1997-05-01
To provide a better understanding of the complexities of estimating psychiatric manpower requirements, the authors describe several approaches to estimation and present a method based on patients' needs. A five-step method for psychiatric manpower estimation is used, with estimates of data pertinent to each step, to calculate the total psychiatric manpower requirements for the United States. The method is also used to estimate the hours of psychiatric service per patient per year that might be available under current psychiatric practice and under a managed care scenario. Depending on assumptions about data at each step in the method, the total psychiatric manpower requirements for the U.S. population range from 2,989 to 358,696 full-time-equivalent psychiatrists. The number of available hours of psychiatric service per patient per year is 14.1 hours under current psychiatric practice and 2.8 hours under the managed care scenario. The key to psychiatric manpower estimation lies in clarifying the assumptions that underlie the specific method used. Even small differences in assumptions mean large differences in estimates. Any credible manpower estimation process must include discussions and negotiations between psychiatrists, other clinicians, administrators, and patients and families to clarify the treatment needs of patients and the roles, responsibilities, and job description of psychiatrists.
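The arithmetic behind such estimates is simple, which is why the assumptions dominate the answer. A back-of-the-envelope sketch with illustrative numbers (assumed for this example, not the paper's inputs):

```python
# Needs-based estimate of psychiatric service hours per patient per year.
# All inputs are illustrative assumptions, not the paper's data.
psychiatrists = 40_000             # practicing FTE psychiatrists (assumed)
clinical_hours_per_year = 1_500    # direct-care hours per psychiatrist (assumed)
patients_in_treatment = 4_200_000  # persons treated per year (assumed)

hours_per_patient = psychiatrists * clinical_hours_per_year / patients_in_treatment
print(f"{hours_per_patient:.1f} hours of psychiatric service per patient per year")
# Halving clinical hours or doubling the treated population halves the estimate,
# which illustrates why small changes in assumptions produce large differences.
```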
Teenage Pregnancy and Sex and Relationship Education: Myths and (Mis)conceptions
ERIC Educational Resources Information Center
Vincent, Kerry
2007-01-01
This paper explores the role of sex and relationship education (SRE) in reducing teenage pregnancy rates. It critically examines some of the assumptions underlying the emphasis placed on SRE within the teenage pregnancy strategy (SEU, 1999)--in particular, the view that ignorance of sexual matters plays a key part in teenage conception. An…
ERIC Educational Resources Information Center
Minne, Elizabeth Portman; Semrud-Clikeman, Margaret
2012-01-01
The key features of Asperger Syndrome (AS) and high functioning autism (HFA) include marked and sustained impairment in social interactions. A multi-session, small group program was developed to increase social perception, based on the assumption that perceptual or interpretive problems underlie these social difficulties. Additionally, the group…
The sensitivity of the ESA DELTA model
NASA Astrophysics Data System (ADS)
Martin, C.; Walker, R.; Klinkrad, H.
Long-term debris environment models play a vital role in furthering our understanding of the future debris environment, and in aiding the determination of a strategy to preserve the Earth orbital environment for future use. By their very nature these models have to make certain assumptions to enable informative future projections to be made. Examples of these assumptions include the projection of future traffic, including launch and explosion rates, and the methodology used to simulate break-up events. To ensure a sound basis for future projections, and consequently for assessing the effectiveness of various mitigation measures, it is essential that the sensitivity of these models to variations in key assumptions is examined. The DELTA (Debris Environment Long Term Analysis) model, developed by QinetiQ for the European Space Agency, allows the future projection of the debris environment throughout Earth orbit. Extensive analyses with this model have been performed under the auspices of the ESA Space Debris Mitigation Handbook and following the recent upgrade of the model to DELTA 3.0. This paper draws on these analyses to present the sensitivity of the DELTA model to changes in key model parameters and assumptions. Specifically the paper will address the variation in future traffic rates, including the deployment of satellite constellations, and the variation in the break-up model and criteria used to simulate future explosion and collision events.
A simple approach to nonlinear estimation of physical systems
Christakos, G.
1988-01-01
Recursive algorithms for estimating the states of nonlinear physical systems are developed. This requires some key hypotheses regarding the structure of the underlying processes. Members of this class of random processes have several desirable properties for the nonlinear estimation of random signals. An assumption is made about the form of the estimator, which may then take account of a wide range of applications. Under the above assumption, the estimation algorithm is mathematically suboptimal but effective and computationally attractive. It may be compared favorably to Taylor series-type filters, nonlinear filters which approximate the probability density by Edgeworth or Gram-Charlier series, as well as to conventional statistical linearization-type estimators. To link theory with practice, some numerical results for a simulated system are presented, in which the responses from the proposed and the extended Kalman algorithms are compared. © 1988.
Sinclair, Brent J; Marshall, Katie E; Sewell, Mary A; Levesque, Danielle L; Willett, Christopher S; Slotsbo, Stine; Dong, Yunwei; Harley, Christopher D G; Marshall, David J; Helmuth, Brian S; Huey, Raymond B
2016-11-01
Thermal performance curves (TPCs), which quantify how an ectotherm's body temperature (Tb) affects its performance or fitness, are often used in an attempt to predict organismal responses to climate change. Here, we examine the key - but often biologically unreasonable - assumptions underlying this approach; for example, that physiology and thermal regimes are invariant over ontogeny, space and time, and also that TPCs are independent of previously experienced Tb. We show how a critical consideration of these assumptions can lead to biologically useful hypotheses and experimental designs. For example, rather than assuming that TPCs are fixed during ontogeny, one can measure TPCs for each major life stage and incorporate these into stage-specific ecological models to reveal the life stage most likely to be vulnerable to climate change. Our overall goal is to explicitly examine the assumptions underlying the integration of TPCs with Tb, to develop a framework within which empiricists can place their work within these limitations, and to facilitate the application of thermal physiology to understanding the biological implications of climate change. © 2016 John Wiley & Sons Ltd/CNRS.
Latent class instrumental variables: A clinical and biostatistical perspective
Baker, Stuart G.; Kramer, Barnett S.; Lindeman, Karen S.
2015-01-01
In some two-arm randomized trials, some participants receive the treatment assigned to the other arm as a result of technical problems, refusal of a treatment invitation, or a choice of treatment in an encouragement design. In some before-and-after studies, the availability of a new treatment changes from one time period to the next. Under assumptions that are often reasonable, the latent class instrumental variable (IV) method estimates the effect of treatment received in the aforementioned scenarios involving all-or-none compliance and all-or-none availability. Key aspects are four initial latent classes (sometimes called principal strata) based on treatment received if in each randomization group or time period, the exclusion restriction assumption (in which randomization group or time period is an instrumental variable), the monotonicity assumption (which drops an implausible latent class from the analysis), and the estimated effect of receiving treatment in one latent class (sometimes called efficacy, the local average treatment effect, or the complier average causal effect). Since its independent formulations in the biostatistics and econometrics literatures, the latent class IV method (which has no well-established name) has gained increasing popularity. We review the latent class IV method from a clinical and biostatistical perspective, focusing on underlying assumptions, methodological extensions, and applications in our fields of obstetrics and cancer research. PMID:26239275
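Under the exclusion restriction and monotonicity assumptions described above, the effect in the complier class is identified by the familiar instrumental-variable (Wald) ratio; the notation here is generic rather than the paper's:

```latex
\widehat{\mathrm{CACE}}
  \;=\;
  \frac{\,\bar{Y}_{Z=1}-\bar{Y}_{Z=0}\,}
       {\,\bar{D}_{Z=1}-\bar{D}_{Z=0}\,},
```

where Z indicates the randomization group or time period (the instrument), D the treatment actually received, and Y the outcome.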
Song, Fujian; Loke, Yoon K; Walsh, Tanya; Glenny, Anne-Marie; Eastwood, Alison J; Altman, Douglas G
2009-04-03
To investigate basic assumptions and other methodological problems in the application of indirect comparison in systematic reviews of competing healthcare interventions. Survey of published systematic reviews. Inclusion criteria: systematic reviews published between 2000 and 2007 in which an indirect approach had been explicitly used. Identified reviews were assessed for comprehensiveness of the literature search, method for indirect comparison, and whether assumptions about similarity and consistency were explicitly mentioned. The survey included 88 review reports. In 13 reviews, indirect comparison was informal. Results from different trials were naively compared without using a common control in six reviews. Adjusted indirect comparison was usually done using classic frequentist methods (n=49) or more complex methods (n=18). The key assumption of trial similarity was explicitly mentioned in only 40 of the 88 reviews. The consistency assumption was not explicit in most cases where direct and indirect evidence were compared or combined (18/30). Evidence from head-to-head comparison trials was not systematically searched for or not included in nine cases. Identified methodological problems were an unclear understanding of underlying assumptions, inappropriate search and selection of relevant trials, use of inappropriate or flawed methods, lack of objective and validated methods to assess or improve trial similarity, and inadequate comparison or inappropriate combination of direct and indirect evidence. Adequate understanding of the basic assumptions underlying indirect and mixed treatment comparison is crucial to resolve these methodological problems.
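The classic frequentist method referred to above is usually the adjusted indirect comparison of Bucher et al., in which treatments A and B are compared through a common comparator C under the trial-similarity assumption (generic notation, shown here only for illustration):

```latex
\hat d_{AB}^{\,\mathrm{ind}} \;=\; \hat d_{AC} - \hat d_{BC},
\qquad
\operatorname{Var}\!\bigl(\hat d_{AB}^{\,\mathrm{ind}}\bigr)
  \;=\; \operatorname{Var}\!\bigl(\hat d_{AC}\bigr) + \operatorname{Var}\!\bigl(\hat d_{BC}\bigr).
```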
NASA Astrophysics Data System (ADS)
Sulman, B. N.; Moore, J.; Averill, C.; Abramoff, R. Z.; Bradford, M.; Classen, A. T.; Hartman, M. D.; Kivlin, S. N.; Luo, Y.; Mayes, M. A.; Morrison, E. W.; Riley, W. J.; Salazar, A.; Schimel, J.; Sridhar, B.; Tang, J.; Wang, G.; Wieder, W. R.
2016-12-01
Soil carbon (C) dynamics are crucial to understanding and predicting C cycle responses to global change and soil C modeling is a key tool for understanding these dynamics. While first order model structures have historically dominated this area, a recent proliferation of alternative model structures representing different assumptions about microbial activity and mineral protection is providing new opportunities to explore process uncertainties related to soil C dynamics. We conducted idealized simulations of soil C responses to warming and litter addition using models from five research groups that incorporated different sets of assumptions about processes governing soil C decomposition and stabilization. We conducted a meta-analysis of published warming and C addition experiments for comparison with simulations. Assumptions related to mineral protection and microbial dynamics drove strong differences among models. In response to C additions, some models predicted long-term C accumulation while others predicted transient increases that were counteracted by accelerating decomposition. In experimental manipulations, doubling litter addition did not change soil C stocks in studies spanning as long as two decades. This result agreed with simulations from models with strong microbial growth responses and limited mineral sorption capacity. In observations, warming initially drove soil C loss via increased CO2 production, but in some studies soil C rebounded and increased over decadal time scales. In contrast, all models predicted sustained C losses under warming. The disagreement with experimental results could be explained by physiological or community-level acclimation, or by warming-related changes in plant growth. In addition to the role of microbial activity, assumptions related to mineral sorption and protected C played a key role in driving long-term model responses. In general, simulations were similar in their initial responses to perturbations but diverged over decadal time scales. This suggests that more long-term soil experiments may be necessary to resolve important process uncertainties related to soil C storage. We also suggest future experiments examine how microbial activity responds to warming under a range of soil clay contents and in concert with changes in litter inputs.
Balancing Authority Cooperation Concepts - Intra-Hour Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunsaker, Matthew; Samaan, Nader; Milligan, Michael
2013-03-29
The overall objective of this study was to understand, on an Interconnection-wide basis, the effects of intra-hour scheduling compared to hourly scheduling. Moreover, the study sought to understand how the benefits of intra-hour scheduling would change by altering the input assumptions in different scenarios. This report describes the results of three separate scenarios with differing key assumptions, comparing production costs between hourly scheduling and 10-minute scheduling. The different scenarios were chosen to provide insight into how the estimated benefits might change by altering input assumptions. Several key assumptions differed among the three scenarios; however, most assumptions were similar and/or unchanged among the scenarios.
Finite-key analysis for measurement-device-independent quantum key distribution.
Curty, Marcos; Xu, Feihu; Cui, Wei; Lim, Charles Ci Wen; Tamaki, Kiyoshi; Lo, Hoi-Kwong
2014-04-29
Quantum key distribution promises unconditionally secure communications. However, as practical devices tend to deviate from their specifications, the security of some practical systems is no longer valid. In particular, an adversary can exploit imperfect detectors to learn a large part of the secret key, even though the security proof claims otherwise. Recently, a practical approach--measurement-device-independent quantum key distribution--has been proposed to solve this problem. However, so far its security has only been fully proven under the assumption that the legitimate users of the system have unlimited resources. Here we fill this gap and provide a rigorous security proof against general attacks in the finite-key regime. This is obtained by applying large deviation theory, specifically the Chernoff bound, to perform parameter estimation. For the first time we demonstrate the feasibility of long-distance implementations of measurement-device-independent quantum key distribution within a reasonable time frame of signal transmission.
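As a rough illustration of the Chernoff-bound parameter estimation mentioned above (the paper's finite-key analysis is considerably more careful), one can attach a deviation term to an observed count so that the unknown expectation is bracketed except with a small failure probability. A hedged Python sketch, using the observed count as a proxy for the mean:

```python
import math

def chernoff_interval(k: int, eps: float = 1e-10) -> tuple[float, float]:
    """Crude two-sided interval for the expected count mu given an observed
    count k, from the multiplicative Chernoff tails
        Pr[X >= (1+d)*mu] <= exp(-d^2 * mu / 3),
        Pr[X <= (1-d)*mu] <= exp(-d^2 * mu / 2),
    solved for d with k used as a proxy for mu (illustration only)."""
    d_hi = math.sqrt(3.0 * math.log(1.0 / eps) / k)
    d_lo = math.sqrt(2.0 * math.log(1.0 / eps) / k)
    return k * (1.0 - d_lo), k * (1.0 + d_hi)

lo, hi = chernoff_interval(1_000_000)  # e.g. 10^6 observed detection events
print(f"expected count within [{lo:.0f}, {hi:.0f}] except with prob ~2e-10")
```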
An eCK-Secure Authenticated Key Exchange Protocol without Random Oracles
NASA Astrophysics Data System (ADS)
Moriyama, Daisuke; Okamoto, Tatsuaki
This paper presents a (PKI-based) two-pass authenticated key exchange (AKE) protocol that is secure in the extended Canetti-Krawczyk (eCK) security model. The security of the proposed protocol is proven without random oracles (under three assumptions) and relies on no implementation techniques such as the trick by LaMacchia, Lauter and Mityagin (the so-called NAXOS trick). Since an AKE protocol that is eCK-secure under a NAXOS-like implementation trick will no longer be eCK-secure if some realistic information leakage occurs through side-channel attacks, it has been an important open problem how to realize an eCK-secure AKE protocol without using the NAXOS trick (and without random oracles).
Fair market value: taking a proactive approach.
Romero, Richard A
2008-04-01
A valuation report assessing the fair market value of a contractual arrangement should include: A description of the company, entity, or circumstance being valued. Analysis of general economic conditions that are expected to affect the enterprise. Evaluation of economic conditions in the medical services industry. Explanation of the various valuation approaches that were considered. Documentation of key underlying assumptions, including revenue and expense projections, projected profit, and ROI.
Alternative Fuels for Military Applications
2011-01-01
...refinery, UOP LLC reports a hydrogen requirement of 2.5 to 3.8 percent by weight and capital costs (inside battery limits) of between $6.00 and $11.00...environmental permitting requirements. The key assumptions underlying this cost estimate are that (1) total capital costs are twice ISBL capital costs
From Global Challenge to Local Efficacy: Rediscovering Human Agency in Learning for Survival
ERIC Educational Resources Information Center
Percy-Smith, Barry
2010-01-01
There is an assumption underlying education for sustainable development that all we need do is learn the skills and knowledge to live sustainably. Yet, many already know the issues and know we "should" act, but we don't. This article argues that a key part of the problem is that we live according to myths and daydreams perpetuated by a…
With Grid Flexibility, California can Slash Emissions while Limiting Curtailment
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-02-01
California can achieve a 50% reduction in CO2 levels by 2030 in the electric sector under a wide variety of scenarios and assumptions, according to the Low Carbon Grid Study: Analysis of a 50% Emission Reduction in California, published in January 2016 by the Department of Energy's National Renewable Energy Laboratory. This document summarizes key findings and analysis from the study.
Identification of differences in health impact modelling of salt reduction
Geleijnse, Johanna M.; van Raaij, Joop M. A.; Cappuccio, Francesco P.; Cobiac, Linda C.; Scarborough, Peter; Nusselder, Wilma J.; Jaccard, Abbygail; Boshuizen, Hendriek C.
2017-01-01
We examined whether specific input data and assumptions explain outcome differences in otherwise comparable health impact assessment models. Seven population health models estimating the impact of salt reduction on morbidity and mortality in western populations were compared on four sets of key features, their underlying assumptions and input data. Next, assumptions and input data were varied one by one in a default approach (the DYNAMO-HIA model) to examine how it influences the estimated health impact. Major differences in outcome were related to the size and shape of the dose-response relation between salt and blood pressure and blood pressure and disease. Modifying the effect sizes in the salt to health association resulted in the largest change in health impact estimates (33% lower), whereas other changes had less influence. Differences in health impact assessment model structure and input data may affect the health impact estimate. Therefore, clearly defined assumptions and transparent reporting for different models is crucial. However, the estimated impact of salt reduction was substantial in all of the models used, emphasizing the need for public health actions. PMID:29182636
Latent class instrumental variables: a clinical and biostatistical perspective.
Baker, Stuart G; Kramer, Barnett S; Lindeman, Karen S
2016-01-15
In some two-arm randomized trials, some participants receive the treatment assigned to the other arm as a result of technical problems, refusal of a treatment invitation, or a choice of treatment in an encouragement design. In some before-and-after studies, the availability of a new treatment changes from one time period to the next. Under assumptions that are often reasonable, the latent class instrumental variable (IV) method estimates the effect of treatment received in the aforementioned scenarios involving all-or-none compliance and all-or-none availability. Key aspects are four initial latent classes (sometimes called principal strata) based on treatment received if in each randomization group or time period, the exclusion restriction assumption (in which randomization group or time period is an instrumental variable), the monotonicity assumption (which drops an implausible latent class from the analysis), and the estimated effect of receiving treatment in one latent class (sometimes called efficacy, the local average treatment effect, or the complier average causal effect). Since its independent formulations in the biostatistics and econometrics literatures, the latent class IV method (which has no well-established name) has gained increasing popularity. We review the latent class IV method from a clinical and biostatistical perspective, focusing on underlying assumptions, methodological extensions, and applications in our fields of obstetrics and cancer research. Copyright © 2015 John Wiley & Sons, Ltd.
Does bad inference drive out good?
Marozzi, Marco
2015-07-01
The (mis)use of statistics in practice is widely debated, and a field where the debate is particularly active is medicine. Many scholars emphasize that a large proportion of published medical research contains statistical errors. It has been noted that top class journals like Nature Medicine and The New England Journal of Medicine publish a considerable proportion of papers that contain statistical errors and poorly document the application of statistical methods. This paper joins the debate on the (mis)use of statistics in the medical literature. Even though the validation process of a statistical result may be quite elusive, a careful assessment of underlying assumptions is central in medicine as well as in other fields where a statistical method is applied. Unfortunately, a careful assessment of underlying assumptions is missing in many papers, including those published in top class journals. In this paper, it is shown that nonparametric methods are good alternatives to parametric methods when the assumptions for the latter ones are not satisfied. A key point to solve the problem of the misuse of statistics in the medical literature is that all journals have their own statisticians to review the statistical method/analysis section in each submitted paper. © 2015 Wiley Publishing Asia Pty Ltd.
Kim, Sangwoo; Choi, Seongdae; Oh, Eunho; Byun, Junghwan; Kim, Hyunjong; Lee, Byeongmoon; Lee, Seunghwan; Hong, Yongtaek
2016-01-01
A percolation theory based on variation of the conductive filler fraction has been widely used to explain the behavior of conductive composite materials under both small and large deformation conditions. However, it typically fails to properly analyze the materials under large deformation, since the underlying assumption may not be valid in such a case. Therefore, we proposed a new three-dimensional percolation theory by considering three key factors: nonlinear elasticity, precisely measured strain-dependent Poisson's ratio, and strain-dependent percolation threshold. The digital image correlation (DIC) method was used to determine actual Poisson's ratios at various strain levels, which were used to accurately estimate the variation of conductive filler volume fraction under deformation. We also adopted a strain-dependent percolation threshold caused by filler re-location with deformation. When the three key factors were considered, the electrical performance change was accurately analyzed for composite materials with both isotropic and anisotropic mechanical properties. PMID:27694856
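A sketch of the volume-fraction bookkeeping that such a three-factor model rests on: under uniaxial stretch the filler volume is conserved while the composite volume scales with strain and the (strain-dependent) Poisson's ratio. The initial fraction and the nu(strain) values below are placeholders, not the paper's DIC measurements.

```python
# Strain-dependent conductive filler volume fraction (illustrative sketch).
def filler_fraction(phi0: float, strain: float, poisson: float) -> float:
    """Composite volume under uniaxial stretch scales as
    (1 + strain) * (1 - poisson * strain)**2, while the filler volume is fixed."""
    volume_ratio = (1 + strain) * (1 - poisson * strain) ** 2
    return phi0 / volume_ratio

phi0 = 0.20  # initial filler volume fraction (assumed)
for strain, nu in [(0.1, 0.48), (0.3, 0.44), (0.5, 0.40)]:  # assumed nu(strain)
    phi = filler_fraction(phi0, strain, nu)
    print(f"strain {strain:.1f}: filler fraction {phi:.3f}")
```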
Perea, Manuel; Marcet, Ana; Lozano, Mario; Gomez, Pablo
2018-05-29
One of the key assumptions of the masked priming lexical decision task (LDT) is that primes are processed without requiring attentional resources. Here, we tested this assumption by presenting a dual-task manipulation to increase memory load and measure the change in masked identity priming on the targets in the LDT. If masked priming does not require attentional resources, increased memory load should have no influence on the magnitude of the observed identity priming effects. We conducted two LDT experiments, using a within-subjects design, to investigate the effect of memory load (via a concurrent matching task in Experiment 1 and a concurrent search task in Experiment 2) on masked identity priming. Results showed that the magnitude of masked identity priming on word targets was remarkably similar under high and low memory load. Thus, these experiments provide empirical evidence for the automaticity assumption of masked identity priming in the LDT.
Time-asymmetric photovoltaics.
Green, Martin A
2012-11-14
Limits upon photovoltaic energy conversion efficiency generally are formulated using the detailed balance approach of Shockley and Queisser. One key underlying assumption is invariance upon time reversal, underpinning detailed balance itself. Recent proposals for compact, layered, time-asymmetrical, magneto-optical devices make their routine implementation likely. It is shown that such time-asymmetry can alter the relationship between solar cell emission and absorption assumed in the Shockley-Queisser approach, allowing generally accepted photovoltaic performance limits to be exceeded.
Probabilistic Integrated Assessment of "Dangerous" Climate Change
NASA Astrophysics Data System (ADS)
Mastrandrea, Michael D.; Schneider, Stephen H.
2004-04-01
Climate policy decisions are being made despite layers of uncertainty. Such decisions directly influence the potential for "dangerous anthropogenic interference with the climate system." We mapped a metric for this concept, based on the Intergovernmental Panel on Climate Change assessment of climate impacts, onto probability distributions of future climate change produced from uncertainty in key parameters of the coupled social-natural system: climate sensitivity, climate damages, and discount rate. Analyses with a simple integrated assessment model found that, under midrange assumptions, endogenously calculated, optimal climate policy controls can reduce the probability of dangerous anthropogenic interference from ~45% under minimal controls to near zero.
On the security of semi-device-independent QKD protocols
NASA Astrophysics Data System (ADS)
Chaturvedi, Anubhav; Ray, Maharshi; Veynar, Ryszard; Pawłowski, Marcin
2018-06-01
While fully device-independent security in (BB84-like) prepare-and-measure quantum key distribution (QKD) is impossible, it can be guaranteed against individual attacks in a semi-device-independent (SDI) scenario, wherein no assumptions are made on the characteristics of the hardware used except for an upper bound on the dimension of the communicated system. Studying security under such minimal assumptions is especially relevant in the context of the recent quantum hacking attacks wherein the eavesdroppers can not only construct the devices used by the communicating parties but are also able to remotely alter their behavior. In this work, we study the security of a SDIQKD protocol based on the prepare-and-measure quantum implementation of a well-known cryptographic primitive, the random access code (RAC). We consider imperfect detectors and establish the critical values of the security parameters (the observed success probability of the RAC and the detection efficiency) required for guaranteeing security against eavesdroppers with and without quantum memory. Furthermore, we suggest a minimal characterization of the preparation device in order to lower the requirements for establishing a secure key.
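For orientation, the benchmark success probabilities of the 2→1 random access code underlying such protocols are standard reference points (the paper's critical thresholds additionally depend on detection efficiency and on whether the eavesdropper holds quantum memory):

```latex
P_{\mathrm{classical}} \;\le\; \tfrac{3}{4},
\qquad
P_{\mathrm{quantum}} \;\le\; \tfrac{1}{2}\!\left(1 + \tfrac{1}{\sqrt{2}}\right) \;\approx\; 0.854 .
```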
Tag-KEM from Set Partial Domain One-Way Permutations
NASA Astrophysics Data System (ADS)
Abe, Masayuki; Cui, Yang; Imai, Hideki; Kurosawa, Kaoru
Recently a framework called Tag-KEM/DEM was introduced to construct efficient hybrid encryption schemes. Although it is known that the generic encode-then-encrypt construction of chosen-ciphertext-secure public-key encryption also applies to secure Tag-KEM construction, and that known encoding methods like OAEP can be used for this purpose, it is worth pursuing more efficient encoding methods dedicated to Tag-KEM construction. This paper proposes an encoding method that yields efficient Tag-KEM schemes when combined with set partial domain one-way permutations such as RSA and Rabin's encryption scheme. To our knowledge, this leads to the most practical hybrid encryption scheme of this type. We also present an efficient Tag-KEM which is CCA-secure under the general factoring assumption rather than the Blum factoring assumption.
Assumptions to the Annual Energy Outlook
2017-01-01
This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook, including general features of the model structure, assumptions concerning energy markets, and the key input data and parameters that are the most significant in formulating the model results.
Charles, Cathy; Gafni, Amiram; Whelan, Tim; O'Brien, Mary Ann
2006-11-01
In this paper we discuss the influence of culture on the process of treatment decision-making, and in particular, shared treatment decision-making in the physician-patient encounter. We explore two key issues: (1) the meaning of culture and the ways that it can affect treatment decision-making; (2) cultural issues and assumptions underlying the development and use of treatment decision aids. This is a conceptual paper. Based on our knowledge and reading of the key literature in the treatment decision-making field, we looked for written examples where cultural influences were taken into account when discussing the physician-patient encounter and when designing instruments (decision aids) to help patients participate in making decisions. Our assessment of the situation is that to date, and with some recent exceptions, research in the above areas has not been culturally sensitive. We suggest that more research attention should be focused on exploring potential cultural variations in the meaning of and preferences for shared decision-making as well as on the applicability across cultural groups of decision aids developed to facilitate patient participation in treatment decision-making with physicians. Both patients and physicians need to be aware of the cultural assumptions underlying the development and use of decision aids and assess their cultural sensitivity to the needs and preferences of patients in diverse cultural groups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Paul M., E-mail: lighthouse@abdn.ac.uk; Hastie, Gordon D., E-mail: gdh10@st-andrews.ac.uk; Nedwell, Jeremy, E-mail: Jeremy.Nedwell@subacoustech.com
2013-11-15
Offshore wind farm developments may impact protected marine mammal populations, requiring appropriate assessment under the EU Habitats Directive. We describe a framework developed to assess population level impacts of disturbance from piling noise on a protected harbour seal population in the vicinity of proposed wind farm developments in NE Scotland. Spatial patterns of seal distribution and received noise levels are integrated with available data on the potential impacts of noise to predict how many individuals are displaced or experience auditory injury. Expert judgement is used to link these impacts to changes in vital rates and applied to population models that compare population changes under baseline and construction scenarios over a 25 year period. We use published data and hypothetical piling scenarios to illustrate how the assessment framework has been used to support environmental assessments, explore the sensitivity of the framework to key assumptions, and discuss its potential application to other populations of marine mammals. Highlights: • We develop a framework to support Appropriate Assessment for harbour seal populations. • We assessed potential impacts of wind farm construction noise. • Data on distribution of seals and noise were used to predict effects on individuals. • Expert judgement linked these impacts to vital rates to model population change. • We explore the sensitivity of the framework to key assumptions and uncertainties.
Merger mania. What will a merger mean to you?
Kennedy, M M
2001-01-01
Almost as worrisome as job tenure to survivors of corporate mergers is whether they will be able to work under a completely new set of assumptions. What effect will the merger of two different corporate cultures have on effectiveness, satisfaction and promotability? Even people who believe they know the partner's culture almost as well as their own are often surprised at what happens after a merger takes place. Find out what's likely to happen in a merger by asking a few key questions.
Can We Falsify the Consciousness-Causes-Collapse Hypothesis in Quantum Mechanics?
NASA Astrophysics Data System (ADS)
de Barros, J. Acacio; Oas, Gary
2017-10-01
In this paper we examine some proposals to disprove the hypothesis that the interaction between mind and matter causes the collapse of the wave function, showing that such proposals are fundamentally flawed. We then describe a general experimental setup retaining the key features of the ones examined, and show that even this more general case is inadequate to disprove the mind-matter collapse hypothesis. Finally, we use our setup to argue that, under some reasonable assumptions about consciousness, such a hypothesis is unfalsifiable.
All quantum observables in a hidden-variable model must commute simultaneously
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malley, James D.
Under a standard set of assumptions for a hidden-variable model for quantum events we show that all observables must commute simultaneously. This seems to be an ultimate statement about the inapplicability of the usual hidden-variable model for quantum events. And, despite Bell's complaint that a key condition of von Neumann's was quite unrealistic, we show that these conditions, under which von Neumann produced the first no-go proof, are entirely equivalent to those introduced by Bell and Kochen and Specker. As these conditions are also equivalent to those under which the Bell-Clauser-Horne inequalities are derived, we see that the experimental violations of the inequalities demonstrate only that quantum observables do not commute.
Efficient Fair Exchange from Identity-Based Signature
NASA Astrophysics Data System (ADS)
Yum, Dae Hyun; Lee, Pil Joong
A fair exchange scheme is a protocol by which two parties Alice and Bob exchange items or services without allowing either party to gain advantages by quitting prematurely or otherwise misbehaving. To this end, modern cryptographic solutions use a semi-trusted arbitrator who becomes involved only in cases where one party attempts to cheat or simply crashes. We call such a fair exchange scheme optimistic. When no registration is required between the signer and the arbitrator, we say that the fair exchange scheme is setup free. To date, a setup-free optimistic fair exchange scheme under the standard RSA assumption was only possible from the generic construction of [12], which uses ring signatures. In this paper, we introduce a new setup-free optimistic fair exchange scheme under the standard RSA assumption. Our scheme uses the GQ identity-based signature and is more efficient than [12]. The construction can also be generalized by using various identity-based signature schemes. Our main technique is to allow each user to choose his (or her) own “random” public key in the identity-based signature scheme.
Neural response to reward anticipation under risk is nonlinear in probabilities.
Hsu, Ming; Krajbich, Ian; Zhao, Chen; Camerer, Colin F
2009-02-18
A widely observed phenomenon in decision making under risk is the apparent overweighting of unlikely events and the underweighting of nearly certain events. This violates standard assumptions in expected utility theory, which requires that expected utility be linear (objective) in probabilities. Models such as prospect theory have relaxed this assumption and introduced the notion of a "probability weighting function," which captures the key properties found in experimental data. This study reports functional magnetic resonance imaging (fMRI) data showing that the neural response to expected reward is nonlinear in probabilities. Specifically, we found that activity in the striatum during valuation of monetary gambles is nonlinear in probabilities in the pattern predicted by prospect theory, suggesting that probability distortion is reflected at the level of the reward encoding process. The degree of nonlinearity reflected in individual subjects' decisions is also correlated with striatal activity across subjects. Our results shed light on the neural mechanisms of reward processing, and have implications for future neuroscientific studies of decision making involving extreme tails of the distribution, where probability weighting provides an explanation for commonly observed behavioral anomalies.
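One commonly used one-parameter probability weighting function (Tversky and Kahneman, 1992) reproduces the overweighting/underweighting pattern described above; it is shown here purely for illustration, since the study does not commit to this exact functional form:

```python
# Tversky-Kahneman (1992) one-parameter probability weighting function.
def weight(p: float, gamma: float = 0.61) -> float:
    """w(p) = p^gamma / (p^gamma + (1-p)^gamma)^(1/gamma); gamma < 1 gives
    overweighting of small p and underweighting of large p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"p = {p:.2f}  ->  w(p) = {weight(p):.3f}")
```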
Performance management in healthcare: a critical analysis.
Hewko, Sarah J; Cummings, Greta G
2016-01-01
Purpose - The purpose of this paper is to explore the underlying theoretical assumptions and implications of current micro-level performance management and evaluation (PME) practices, specifically within health-care organizations. PME encompasses all activities that are designed and conducted to align employee outputs with organizational goals. Design/methodology/approach - PME, in the context of healthcare, is analyzed through the lens of critical theory. Specifically, Habermas' theory of communicative action is used to highlight some of the questions that arise in looking critically at PME. To provide a richer definition of key theoretical concepts, the authors conducted a preliminary, exploratory hermeneutic semantic analysis of the key words "performance" and "management" and of the term "performance management". Findings - Analysis reveals that existing micro-level PME systems in health-care organizations have the potential to create a workforce that is compliant, dependent, technically oriented and passive, and to support health-care systems in which inequalities and power imbalances are perpetually reinforced. Practical implications - At a time when the health-care system is under increasing pressure to provide high-quality, affordable services with fewer resources, it may be wise to investigate new sector-specific ways of evaluating and managing performance. Originality/value - In this paper, written for health-care leaders and health human resource specialists, the theoretical assumptions and implications of current PME practices within health-care organizations are explored. It is hoped that readers will be inspired to support innovative PME practices within their organizations that encourage peak performance among health-care professionals.
Great Expectations: Is there Evidence for Predictive Coding in Auditory Cortex?
Heilbron, Micha; Chait, Maria
2017-08-04
Predictive coding is possibly one of the most influential, comprehensive, and controversial theories of neural function. While proponents praise its explanatory potential, critics object that key tenets of the theory are untested or even untestable. The present article critically examines existing evidence for predictive coding in the auditory modality. Specifically, we identify five key assumptions of the theory and evaluate each in the light of animal, human and modeling studies of auditory pattern processing. For the first two assumptions - that neural responses are shaped by expectations and that these expectations are hierarchically organized - animal and human studies provide compelling evidence. The anticipatory, predictive nature of these expectations also enjoys empirical support, especially from studies on unexpected stimulus omission. However, for the existence of separate error and prediction neurons, a key assumption of the theory, evidence is lacking. More work exists on the proposed oscillatory signatures of predictive coding, and on the relation between attention and precision. However, results on these latter two assumptions are mixed or contradictory. Looking to the future, more collaboration between human and animal studies, aided by model-based analyses, will be needed to test specific assumptions and implementations of predictive coding - and, as such, help determine whether this popular grand theory can fulfill its expectations. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Scarani, Valerio; Renner, Renato
2008-05-23
We derive a bound for the security of quantum key distribution with finite resources under one-way postprocessing, based on a definition of security that is composable and has an operational meaning. While our proof relies on the assumption of collective attacks, unconditional security follows immediately for standard protocols such as the Bennett-Brassard 1984 and six-state protocols. For single-qubit implementations of such protocols, we find that the secret key rate becomes positive when at least N ≈ 10^5 signals are exchanged and processed. For any other discrete-variable protocol, unconditional security can be obtained using the exponential de Finetti theorem, but the additional overhead leads to very pessimistic estimates.
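For orientation, the asymptotic (infinite-key) one-way rate for BB84 is the textbook expression 1 - 2h(Q), where h is the binary entropy and Q the quantum bit error rate; the finite-resource bound derived in the paper subtracts corrections that vanish only for large N. The sketch below computes just the asymptotic part and is a generic illustration, not the paper's finite-key formula:

```python
import numpy as np

def h2(x):
    """Binary entropy in bits; values are clipped so h2(0) = h2(1) = 0."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def bb84_asymptotic_rate(qber):
    """Asymptotic one-way BB84 key rate, 1 - 2*h2(Q) (Shor-Preskill style).

    Finite-size corrections, the subject of the abstract above, further reduce
    this rate and are not reproduced here.
    """
    return max(0.0, 1.0 - 2.0 * h2(qber))

if __name__ == "__main__":
    for q in [0.01, 0.03, 0.05, 0.08, 0.11]:
        print(f"QBER = {q:.2f}  ->  asymptotic rate = {bb84_asymptotic_rate(q):.3f}")
```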
Overarching framework for data-based modelling
NASA Astrophysics Data System (ADS)
Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco
2014-02-01
One of the main modelling paradigms for complex physical systems is networks. When estimating the network structure from measured signals, typically several assumptions such as stationarity are made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. We here propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes. To this end, we propose a rigorous mathematical theory that underlies this framework. Based on this theory, we present a highly efficient algorithm and the corresponding statistics that are immediately sensibly applicable to measured signals. We demonstrate its performance in a simulation study. In experiments of transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, the key to understanding and diagnosing numerous diseases such as dementia. We argue that the suggested framework combines features that other approaches followed so far lack.
Zipf's word frequency law in natural language: a critical review and future directions.
Piantadosi, Steven T
2014-10-01
The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data.
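A minimal illustration of the classic law the article starts from: estimate the Zipf exponent by a naive log-log regression of frequency on rank, a first look that the article argues is insufficient on its own. The toy corpus and fitting choice below are purely illustrative:

```python
import collections
import numpy as np

def zipf_exponent(text):
    """Crude estimate of the Zipf exponent alpha in frequency ~ rank**(-alpha),
    obtained by ordinary least squares on log(frequency) versus log(rank)."""
    counts = collections.Counter(text.lower().split())
    freqs = np.array(sorted(counts.values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)
    slope, _intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope

if __name__ == "__main__":
    sample = "the cat sat on the mat and the dog sat on the log " * 50
    print(f"estimated alpha = {zipf_exponent(sample):.2f}")
```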
Effects of distributed database modeling on evaluation of transaction rollbacks
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. The effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system, is studied here. Six probabilistic models are developed, along with expressions for the number of rollbacks under each model. Essentially, the models differ in terms of the available system information. The analytical results so obtained are compared to results from simulation. It is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly undermined when such models are employed.
Effects of distributed database modeling on evaluation of transaction rollbacks
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. Here, researchers investigate the effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system. The researchers developed six probabilistic models and expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results obtained are compared to results from simulation. It was concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly undermined when such models are employed.
NASA Astrophysics Data System (ADS)
Lekmine, Greg; Sookhak Lari, Kaveh; Johnston, Colin D.; Bastow, Trevor P.; Rayner, John L.; Davis, Greg B.
2017-01-01
Understanding dissolution dynamics of hazardous compounds from complex gasoline mixtures is a key to long-term predictions of groundwater risks. The aim of this study was to investigate if the local equilibrium assumption for BTEX and TMBs (trimethylbenzenes) dissolution was valid under variable saturation in two dimensional flow conditions and evaluate the impact of local heterogeneities when equilibrium is verified at the scale of investigation. An initial residual gasoline saturation was established over the upper two-thirds of a water saturated sand pack. A constant horizontal pore velocity was maintained and water samples were recovered across 38 sampling ports over 141 days. Inside the residual NAPL zone, BTEX and TMBs dissolution curves were in agreement with the TMVOC model based on the local equilibrium assumption. Results compared to previous numerical studies suggest the presence of small scale dissolution fingering created perpendicular to the horizontal dissolution front, mainly triggered by heterogeneities in the medium structure and the local NAPL residual saturation. In the transition zone, TMVOC was able to represent a range of behaviours exhibited by the data, confirming equilibrium or near-equilibrium dissolution at the scale of investigation. The model locally showed discrepancies with the most soluble compounds, i.e. benzene and toluene, due to local heterogeneities, indicating that at smaller scales flow bypassing and channelling may have occurred. In these conditions mass transfer rates were still high enough to fall under the equilibrium assumption in TMVOC at the scale of investigation. Comparisons with other models involving upscaled mass transfer rates demonstrated that such approximations with TMVOC could lead to overestimating BTEX dissolution rates and underestimating the total remediation time.
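Under the local equilibrium assumption, multicomponent NAPL dissolution is commonly modelled with a Raoult's-law analogy: the effective aqueous concentration of each compound equals its mole fraction in the NAPL times its pure-compound solubility. The sketch below illustrates that relation with approximate literature solubilities and a hypothetical gasoline composition; the values are not taken from this study, and whether TMVOC uses exactly this formulation is an assumption here:

```python
# Approximate pure-compound aqueous solubilities (mg/L) near 25 °C; illustrative
# literature values, not data from the study above.
PURE_SOLUBILITY = {
    "benzene": 1780.0,
    "toluene": 515.0,
    "ethylbenzene": 152.0,
    "xylenes": 180.0,
    "1,2,4-TMB": 57.0,
}

def effective_solubilities(mole_fractions):
    """Raoult's-law analogy for multicomponent NAPL dissolution under the local
    equilibrium assumption: C_i,eq = x_i * S_i, with x_i the mole fraction of
    compound i in the NAPL and S_i its pure-compound solubility."""
    return {c: x * PURE_SOLUBILITY[c] for c, x in mole_fractions.items()}

if __name__ == "__main__":
    # Hypothetical mole fractions of the monitored compounds in the gasoline NAPL.
    x = {"benzene": 0.01, "toluene": 0.05, "ethylbenzene": 0.02,
         "xylenes": 0.06, "1,2,4-TMB": 0.03}
    for compound, c_eq in effective_solubilities(x).items():
        print(f"{compound:12s}  C_eq ≈ {c_eq:7.1f} mg/L")
```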
Lekmine, Greg; Sookhak Lari, Kaveh; Johnston, Colin D; Bastow, Trevor P; Rayner, John L; Davis, Greg B
2017-01-01
Understanding dissolution dynamics of hazardous compounds from complex gasoline mixtures is a key to long-term predictions of groundwater risks. The aim of this study was to investigate if the local equilibrium assumption for BTEX and TMBs (trimethylbenzenes) dissolution was valid under variable saturation in two dimensional flow conditions and evaluate the impact of local heterogeneities when equilibrium is verified at the scale of investigation. An initial residual gasoline saturation was established over the upper two-thirds of a water saturated sand pack. A constant horizontal pore velocity was maintained and water samples were recovered across 38 sampling ports over 141 days. Inside the residual NAPL zone, BTEX and TMBs dissolution curves were in agreement with the TMVOC model based on the local equilibrium assumption. Results compared to previous numerical studies suggest the presence of small scale dissolution fingering created perpendicular to the horizontal dissolution front, mainly triggered by heterogeneities in the medium structure and the local NAPL residual saturation. In the transition zone, TMVOC was able to represent a range of behaviours exhibited by the data, confirming equilibrium or near-equilibrium dissolution at the scale of investigation. The model locally showed discrepancies with the most soluble compounds, i.e. benzene and toluene, due to local heterogeneities, indicating that at smaller scales flow bypassing and channelling may have occurred. In these conditions mass transfer rates were still high enough to fall under the equilibrium assumption in TMVOC at the scale of investigation. Comparisons with other models involving upscaled mass transfer rates demonstrated that such approximations with TMVOC could lead to overestimating BTEX dissolution rates and underestimating the total remediation time. Copyright © 2016. Published by Elsevier B.V.
Sabatelli, Lorenzo
2016-01-01
Income and price elasticity of demand quantify the responsiveness of markets to changes in income and in prices, respectively. Under the assumptions of utility maximization and preference independence (additive preferences), mathematical relationships between income elasticity values and the uncompensated own and cross price elasticity of demand are here derived using the differential approach to demand analysis. Key parameters are: the elasticity of the marginal utility of income, and the average budget share. The proposed method can be used to forecast the direct and indirect impact of price changes and of financial instruments of policy using available estimates of the income elasticity of demand. PMID:26999511
On firework blasts and qualitative parameter dependency.
Zohdi, T I
2016-01-01
In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given.
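A hedged sketch of the kind of balance the abstract describes: the released energy fixes the launch speed of each fragment, and the trajectory is then integrated under gravity and quadratic drag (buoyancy is omitted here). All parameter values, the even energy split and the launch angle are illustrative assumptions, not the paper's model:

```python
import numpy as np

def fragment_trajectory(E=5.0e4, n_frag=100, m=0.02, radius=0.01,
                        cd=0.47, rho_air=1.2, dt=1e-3, t_max=30.0):
    """Toy model of one pyrotechnic fragment launched by a blast.

    Assumes the released energy E (J) is split evenly into the kinetic energy of
    n_frag fragments of mass m (kg); each fragment then moves under gravity and
    quadratic aerodynamic drag, integrated with an explicit Euler scheme.
    """
    v0 = np.sqrt(2.0 * (E / n_frag) / m)           # energy balance -> launch speed
    theta = np.deg2rad(45.0)                        # illustrative launch angle
    pos = np.zeros(2)
    vel = v0 * np.array([np.cos(theta), np.sin(theta)])
    area = np.pi * radius**2
    g = np.array([0.0, -9.81])
    max_range, t = 0.0, 0.0
    while t < t_max and pos[1] >= 0.0:
        drag = -0.5 * rho_air * cd * area * np.linalg.norm(vel) * vel / m
        vel = vel + (g + drag) * dt
        pos = pos + vel * dt
        max_range = max(max_range, np.linalg.norm(pos))
        t += dt
    return v0, max_range

if __name__ == "__main__":
    v0, r = fragment_trajectory()
    print(f"launch speed ≈ {v0:.1f} m/s, blast envelope radius ≈ {r:.1f} m")
```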
Sabatelli, Lorenzo
2016-01-01
Income and price elasticity of demand quantify the responsiveness of markets to changes in income and in prices, respectively. Under the assumptions of utility maximization and preference independence (additive preferences), mathematical relationships between income elasticity values and the uncompensated own and cross price elasticity of demand are here derived using the differential approach to demand analysis. Key parameters are: the elasticity of the marginal utility of income, and the average budget share. The proposed method can be used to forecast the direct and indirect impact of price changes and of financial instruments of policy using available estimates of the income elasticity of demand.
On firework blasts and qualitative parameter dependency
Zohdi, T. I.
2016-01-01
In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given. PMID:26997903
NASA Astrophysics Data System (ADS)
Crago, Richard; Qualls, Russell; Szilagyi, Jozsef; Huntington, Justin
2017-07-01
Ma and Zhang (2017) note a concern they have with our rescaled Complementary Relationship (CR) for land surface evaporation when daily average wind speeds are very low (perhaps less than 1 m/s). We discuss conditions and specific formulations that lead to this concern, but ultimately argue that under these conditions, a key assumption behind the CR itself may not be satisfied at the daily time scale. Thus, careful consideration of the reliability of the CR is needed when wind speeds are very low.
SM Higgs properties measurement at ATLAS
NASA Astrophysics Data System (ADS)
Murray, William
2010-02-01
The discovery of a new particle in the Higgs searches being prepared for LHC will not guarantee that the Standard Model Higgs boson has been seen. This paper discusses the possibilities for measuring the spin, parity and couplings of the particle, under the assumption that it does in fact behave like the Standard Model Higgs. The key question, which cannot alas be answered, is: if it looks like a dog, and barks like a dog, how much of the DNA must we analyse to be sure that it is a dog?
1988-11-01
civilian sector, the physician plays a key role in health care for he is both the customer and the person by which revenue is produced. An economic view of... health care predicts that the more patients that can be seen in a given amount of time, the greater the revenue produced, considering the resources...with the patient. Assumption 1. The quality of health care provided by all physicians under the study will be equivalent, similar and satisfactory
SM Higgs properties measurement at ATLAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, William
2010-02-10
The discovery of a new particle in the Higgs searches being prepared for LHC will not guarantee that the Standard Model Higgs boson has been seen. This paper discusses the possibilities for measuring the spin, parity and couplings of the particle, under the assumption that it does in fact behave like the Standard Model Higgs. The key question, which cannot alas be answered, is: if it looks like a dog, and barks like a dog, how much of the DNA must we analyse to be sure that it is a dog?
Considerations for the design, analysis and presentation of in vivo studies.
Ranstam, J; Cook, J A
2017-03-01
To describe, explain and give practical suggestions regarding important principles and key methodological challenges in the study design, statistical analysis, and reporting of results from in vivo studies. Pre-specifying endpoints and analysis, recognizing the common underlying assumption of statistically independent observations, performing sample size calculations, and addressing multiplicity issues are important parts of an in vivo study. A clear reporting of results and informative graphical presentations of data are other important parts. Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Decoy-state quantum key distribution with a leaky source
NASA Astrophysics Data System (ADS)
Tamaki, Kiyoshi; Curty, Marcos; Lucamarini, Marco
2016-06-01
In recent years, there has been a great effort to prove the security of quantum key distribution (QKD) with a minimum number of assumptions. Besides its intrinsic theoretical interest, this would allow for larger tolerance against device imperfections in the actual implementations. However, even in this device-independent scenario, one assumption seems unavoidable, that is, the presence of a protected space devoid of any unwanted information leakage in which the legitimate parties can privately generate, process and store their classical data. In this paper we relax this unrealistic and hardly feasible assumption and introduce a general formalism to tackle the information leakage problem in most of existing QKD systems. More specifically, we prove the security of optical QKD systems using phase and intensity modulators in their transmitters, which leak the setting information in an arbitrary manner. We apply our security proof to cases of practical interest and show key rates similar to those obtained in a perfectly shielded environment. Our work constitutes a fundamental step forward in guaranteeing implementation security of quantum communication systems.
Comparative Analysis of Modeling Studies on China's Future Energy and Emissions Outlook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Nina; Zhou, Nan; Fridley, David
The past decade has seen the development of various scenarios describing long-term patterns of future Greenhouse Gas (GHG) emissions, with each new approach adding insights to our understanding of the changing dynamics of energy consumption and aggregate future energy trends. With the recent growing focus on China's energy use and emission mitigation potential, a range of Chinese outlook models have been developed across different institutions, including China's Energy Research Institute's 2050 China Energy and CO2 Emissions Report, McKinsey & Co's China's Green Revolution report, the UK Sussex Energy Group and Tyndall Centre's China's Energy Transition report, and the China-specific section of the IEA World Energy Outlook 2009. At the same time, the China Energy Group at Lawrence Berkeley National Laboratory (LBNL) has developed a bottom-up, end-use energy model for China with scenario analysis of energy and emission pathways out to 2050. A robust and credible energy and emission model will play a key role in informing policymakers by assessing efficiency policy impacts and understanding the dynamics of future energy consumption and energy saving and emission reduction potential. This is especially true for developing countries such as China, where uncertainties are greater while the economy continues to undergo rapid growth and industrialization. A slightly different assumption or storyline could result in significant discrepancies among different model results. Therefore, it is necessary to understand the key models in terms of their scope, methodologies, key driver assumptions and the associated findings. A comparative analysis of LBNL's energy end-use model scenarios with the five above studies was thus conducted to examine similarities and divergences in methodologies, scenario storylines, macroeconomic drivers and assumptions as well as aggregate energy and emission scenario results. Besides directly tracing different energy and CO2 savings potential back to the underlying strategies and combination of efficiency and abatement policy instruments represented by each scenario, this analysis also had other important but often overlooked findings.
McGowan, Conor P.; Gardner, Beth
2013-01-01
Estimating productivity for precocial species can be difficult because young birds leave their nest within hours or days of hatching and detectability thereafter can be very low. Recently, a method for using a modified catch-curve to estimate precocial chick daily survival from age-based count data was presented using Piping Plover (Charadrius melodus) data from the Missouri River. However, many of the assumptions of the catch-curve approach were not fully evaluated for precocial chicks. We developed a simulation model to mimic Piping Plovers, a fairly representative shorebird, and age-based count-data collection. Using the simulated data, we calculated daily survival estimates and compared them with the known daily survival rates from the simulation model. We conducted these comparisons under different sampling scenarios where the ecological and statistical assumptions had been violated. Overall, the daily survival estimates calculated from the simulated data corresponded well with true survival rates of the simulation. Violating the accurate aging and the independence assumptions did not result in biased daily survival estimates, whereas unequal detection for younger or older birds and violating the birth-death equilibrium did result in estimator bias. Assuring that all ages are equally detectable and timing data collection to approximately meet the birth-death equilibrium are key to the successful use of this method for precocial shorebirds.
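For readers unfamiliar with the catch-curve idea, the sketch below simulates age-based counts under the estimator's nominal assumptions (constant daily survival, birth-death equilibrium, equal detectability, accurate aging) and recovers survival from the slope of ln(count) versus age. The parameter values are illustrative, not Piping Plover estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_counts(true_s=0.92, max_age=20, n_broods=200, detect_p=0.5):
    """Simulate age-based counts of precocial chicks under constant daily survival.

    Assumes a stable birth-death equilibrium (constant daily recruitment), equal
    detectability at every age, and accurate aging -- the assumptions the study
    evaluates by deliberately violating them.
    """
    ages = np.arange(max_age + 1)
    expected = n_broods * true_s**ages              # survivors available at each age
    return rng.binomial(np.round(expected).astype(int), detect_p)

def catch_curve_survival(counts):
    """Estimate daily survival from the regression of ln(count) on age."""
    ages = np.arange(len(counts))
    keep = counts > 0
    slope, _ = np.polyfit(ages[keep], np.log(counts[keep]), 1)
    return np.exp(slope)

if __name__ == "__main__":
    counts = simulate_counts()
    print(f"estimated daily survival ≈ {catch_curve_survival(counts):.3f} (true 0.92)")
```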
Cooperation, psychological game theory, and limitations of rationality in social interaction.
Colman, Andrew M
2003-04-01
Rational choice theory enjoys unprecedented popularity and influence in the behavioral and social sciences, but it generates intractable problems when applied to socially interactive decisions. In individual decisions, instrumental rationality is defined in terms of expected utility maximization. This becomes problematic in interactive decisions, when individuals have only partial control over the outcomes, because expected utility maximization is undefined in the absence of assumptions about how the other participants will behave. Game theory therefore incorporates not only rationality but also common knowledge assumptions, enabling players to anticipate their co-players' strategies. Under these assumptions, disparate anomalies emerge. Instrumental rationality, conventionally interpreted, fails to explain intuitively obvious features of human interaction, yields predictions starkly at variance with experimental findings, and breaks down completely in certain cases. In particular, focal point selection in pure coordination games is inexplicable, though it is easily achieved in practice; the intuitively compelling payoff-dominance principle lacks rational justification; rationality in social dilemmas is self-defeating; a key solution concept for cooperative coalition games is frequently inapplicable; and rational choice in certain sequential games generates contradictions. In experiments, human players behave more cooperatively and receive higher payoffs than strict rationality would permit. Orthodox conceptions of rationality are evidently internally deficient and inadequate for explaining human interaction. Psychological game theory, based on nonstandard assumptions, is required to solve these problems, and some suggestions along these lines have already been put forward.
Medlyn, Belinda E; De Kauwe, Martin G; Zaehle, Sönke; Walker, Anthony P; Duursma, Remko A; Luus, Kristina; Mishurov, Mikhail; Pak, Bernard; Smith, Benjamin; Wang, Ying-Ping; Yang, Xiaojuan; Crous, Kristine Y; Drake, John E; Gimeno, Teresa E; Macdonald, Catriona A; Norby, Richard J; Power, Sally A; Tjoelker, Mark G; Ellsworth, David S
2016-08-01
The response of terrestrial ecosystems to rising atmospheric CO2 concentration (Ca), particularly under nutrient-limited conditions, is a major uncertainty in Earth System models. The Eucalyptus Free-Air CO2 Enrichment (EucFACE) experiment, recently established in a nutrient- and water-limited woodland, presents a unique opportunity to address this uncertainty, but can best do so if key model uncertainties have been identified in advance. We applied seven vegetation models, which have previously been comprehensively assessed against earlier forest FACE experiments, to simulate a priori possible outcomes from EucFACE. Our goals were to provide quantitative projections against which to evaluate data as they are collected, and to identify key measurements that should be made in the experiment to allow discrimination among alternative model assumptions in a postexperiment model intercomparison. Simulated responses of annual net primary productivity (NPP) to elevated Ca ranged from 0.5 to 25% across models. The simulated reduction of NPP during a low-rainfall year also varied widely, from 24 to 70%. Key processes where assumptions caused disagreement among models included nutrient limitations to growth; feedbacks to nutrient uptake; autotrophic respiration; and the impact of low soil moisture availability on plant processes. Knowledge of the causes of variation among models is now guiding data collection in the experiment, with the expectation that the experimental data can optimally inform future model improvements. © 2016 John Wiley & Sons Ltd.
Medlyn, Belinda E.; De Kauwe, Martin G.; Zaehle, Sönke; ...
2016-05-09
One major uncertainty in Earth System models is the response of terrestrial ecosystems to rising atmospheric CO2 concentration (Ca), particularly under nutrient-limited conditions. The Eucalyptus Free-Air CO2 Enrichment (EucFACE) experiment, recently established in a nutrient- and water-limited woodland, presents a unique opportunity to address this uncertainty, but can best do so if key model uncertainties have been identified in advance. Moreover, we applied seven vegetation models, which have previously been comprehensively assessed against earlier forest FACE experiments, to simulate a priori possible outcomes from EucFACE. Our goals were to provide quantitative projections against which to evaluate data as they are collected, and to identify key measurements that should be made in the experiment to allow discrimination among alternative model assumptions in a postexperiment model intercomparison. Simulated responses of annual net primary productivity (NPP) to elevated Ca ranged from 0.5 to 25% across models. The simulated reduction of NPP during a low-rainfall year also varied widely, from 24 to 70%. Key processes where assumptions caused disagreement among models included nutrient limitations to growth; feedbacks to nutrient uptake; autotrophic respiration; and the impact of low soil moisture availability on plant processes. Finally, knowledge of the causes of variation among models is now guiding data collection in the experiment, with the expectation that the experimental data can optimally inform future model improvements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Medlyn, Belinda E.; De Kauwe, Martin G.; Zaehle, Sönke
One major uncertainty in Earth System models is the response of terrestrial ecosystems to rising atmospheric CO2 concentration (Ca), particularly under nutrient-limited conditions. The Eucalyptus Free-Air CO2 Enrichment (EucFACE) experiment, recently established in a nutrient- and water-limited woodland, presents a unique opportunity to address this uncertainty, but can best do so if key model uncertainties have been identified in advance. Moreover, we applied seven vegetation models, which have previously been comprehensively assessed against earlier forest FACE experiments, to simulate a priori possible outcomes from EucFACE. Our goals were to provide quantitative projections against which to evaluate data as they are collected, and to identify key measurements that should be made in the experiment to allow discrimination among alternative model assumptions in a postexperiment model intercomparison. Simulated responses of annual net primary productivity (NPP) to elevated Ca ranged from 0.5 to 25% across models. The simulated reduction of NPP during a low-rainfall year also varied widely, from 24 to 70%. Key processes where assumptions caused disagreement among models included nutrient limitations to growth; feedbacks to nutrient uptake; autotrophic respiration; and the impact of low soil moisture availability on plant processes. Finally, knowledge of the causes of variation among models is now guiding data collection in the experiment, with the expectation that the experimental data can optimally inform future model improvements.
Remote Sensing of Extraterrestrial life: Complexity as the key characteristics of living systems
NASA Astrophysics Data System (ADS)
Wolf, Sebastian
2015-07-01
Motivated by the detection of planetary candidates around more than one thousand stars since 1995 and the beginning of the characterization of their major properties (orbit, mass, physical conditions and chemical composition of their atmosphere), the quest for understanding the origin and evolution of life from the broadest possible perspective comes within reach of scientific exploration. Due to the apparent lack of a better starting point, the search for life outside Earth is strongly influenced and guided by biological and biochemical studies of life on our planet so far. Furthermore, this search is built on the assumption that life - in the sense of animated matter - is qualitatively different from inanimate matter. However, the first constraint might unnecessarily limit our search, while the latter underlying assumption is not justified. In this study, a more general approach to searching for life in the universe with astrophysical means is proposed, which is not based on the above constraint and assumption. More specifically, the property of living systems to possess a high degree of complexity in structure and in their response to the environment is discussed in view of its potential to be used for remote sensing of extraterrestrial life.
Eisenring, Michael; Meissle, Michael; Hagenbucher, Steffen; Naranjo, Steven E; Wettstein, Felix; Romeis, Jörg
2017-01-01
In its defense against herbivores, cotton (Gossypium sp.) relies in part on the production of a set of inducible, non-volatile terpenoids. Under uniform damage levels, in planta allocation of induced cotton terpenoids has been found to be highest in youngest leaves, supporting assumptions of the optimal defense theory (ODT) which predicts that plants allocate defense compounds to tissues depending on their value and the likelihood of herbivore attack. However, our knowledge is limited on how varying, and thus more realistic, damage levels might affect cotton defense organization. We hypothesized that the allocation of terpenoids and densities of terpenoid-storing glands in leaves aligns with assumptions of the ODT, even when plants are subjected to temporally, spatially and quantitatively varying caterpillar (Heliothis virescens) damage. As expected, cotton plants allocated most of their defenses to their youngest leaves regardless of damage location. However, defense induction in older leaves varied with damage location. For at least 14 days after damage treatments ended, plants reallocated defense resources from previously young leaves to newly developed leaves. Furthermore, we observed a positive hyperbolic relationship between leaf damage area and both terpenoid concentrations and gland densities, indicating that cotton plants can fine-tune defense allocation. Although it appears that factors like vascular constraints and chemical properties of individual defense compounds can affect defense levels, our results overall demonstrate that induced defense organization of cotton subjected to varying damage treatments is in alignment with key assumptions of the ODT.
Eisenring, Michael; Meissle, Michael; Hagenbucher, Steffen; Naranjo, Steven E.; Wettstein, Felix; Romeis, Jörg
2017-01-01
In its defense against herbivores, cotton (Gossypium sp.) relies in part on the production of a set of inducible, non-volatile terpenoids. Under uniform damage levels, in planta allocation of induced cotton terpenoids has been found to be highest in youngest leaves, supporting assumptions of the optimal defense theory (ODT) which predicts that plants allocate defense compounds to tissues depending on their value and the likelihood of herbivore attack. However, our knowledge is limited on how varying, and thus more realistic, damage levels might affect cotton defense organization. We hypothesized that the allocation of terpenoids and densities of terpenoid-storing glands in leaves aligns with assumptions of the ODT, even when plants are subjected to temporally, spatially and quantitatively varying caterpillar (Heliothis virescens) damage. As expected, cotton plants allocated most of their defenses to their youngest leaves regardless of damage location. However, defense induction in older leaves varied with damage location. For at least 14 days after damage treatments ended, plants reallocated defense resources from previously young leaves to newly developed leaves. Furthermore, we observed a positive hyperbolic relationship between leaf damage area and both terpenoid concentrations and gland densities, indicating that cotton plants can fine-tune defense allocation. Although it appears that factors like vascular constraints and chemical properties of individual defense compounds can affect defense levels, our results overall demonstrate that induced defense organization of cotton subjected to varying damage treatments is in alignment with key assumptions of the ODT. PMID:28270830
Sampling Assumptions in Inductive Generalization
ERIC Educational Resources Information Center
Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.
2012-01-01
Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…
Baumel, Amit; Baker, Justin; Birnbaum, Michael L; Christensen, Helen; De Choudhury, Munmun; Mohr, David C; Muench, Fred; Schlosser, Danielle; Titov, Nick; Kane, John M
2018-05-01
Technology provides an unparalleled opportunity to remove barriers to earlier identification and engagement in services for mental and addictive disorders by reaching people earlier in the course of illness and providing links to just-in-time, cost-effective interventions. Achieving this opportunity, however, requires stakeholders to challenge underlying assumptions about traditional pathways to mental health care. In this Open Forum, the authors highlight key issues discussed in the Technology for Early Awareness of Addiction and Mental Illness (TEAAM-I) meeting-held October 13-14, 2016, in New York City-that are related to three identified areas in which technology provides important and unique opportunities to advance early identification, increase service engagement, and decrease the duration of untreated mental and addictive disorders.
Fun with maths: exploring implications of mathematical models for malaria eradication.
Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A
2014-12-11
Mathematical analyses and modelling have an important role informing malaria eradication strategies. Simple mathematical approaches can answer many questions, but it is important to investigate their assumptions and to test whether simple assumptions affect the results. In this note, four examples demonstrate both the effects of model structures and assumptions and also the benefits of using a diversity of model approaches. These examples include the time to eradication, the impact of vaccine efficacy and coverage, drug programs and the effects of duration of infections and delays to treatment, and the influence of seasonality and migration coupling on disease fadeout. An excessively simple structure can miss key results, but simple mathematical approaches can still achieve key results for eradication strategy and define areas for investigation by more complex models.
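One of the simple calculations alluded to, shown as a hedged sketch: the textbook critical-coverage threshold for an intervention of given efficacy, derived from requiring the effective reproduction number to fall below 1. This is a generic illustration of how simple assumptions drive eradication conclusions, not one of the four examples in the paper:

```python
def critical_coverage(r0, efficacy):
    """Coverage needed so that R_eff = R0 * (1 - efficacy * coverage) < 1.

    Returns None when even full coverage cannot push R_eff below 1, i.e. when
    elimination is impossible with this intervention alone under these assumptions.
    """
    p = (1.0 - 1.0 / r0) / efficacy
    return p if p <= 1.0 else None

if __name__ == "__main__":
    for r0 in (1.5, 3.0, 10.0):
        for eff in (0.4, 0.8):
            p = critical_coverage(r0, eff)
            msg = f"{p:.0%}" if p is not None else "not achievable"
            print(f"R0 = {r0:4.1f}, efficacy = {eff:.0%}  ->  required coverage: {msg}")
```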
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Annis, Charles; Sabbagh, Harold A.; Lindgren, Eric A.
2016-02-01
A comprehensive approach to NDE and SHM characterization error (CE) evaluation is presented that follows the framework of the `ahat-versus-a' regression analysis for POD assessment. Characterization capability evaluation is typically more complex with respect to current POD evaluations and thus requires engineering and statistical expertise in the model-building process to ensure all key effects and interactions are addressed. Justifying the statistical model choice with underlying assumptions is key. Several sizing case studies are presented with detailed evaluations of the most appropriate statistical model for each data set. The use of a model-assisted approach is introduced to help assess the reliability of NDE and SHM characterization capability under a wide range of part, environmental and damage conditions. Best practices of using models are presented for both an eddy current NDE sizing and vibration-based SHM case studies. The results of these studies highlight the general protocol feasibility, emphasize the importance of evaluating key application characteristics prior to the study, and demonstrate an approach to quantify the role of varying SHM sensor durability and environmental conditions on characterization performance.
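A minimal sketch of the 'ahat-versus-a' idea the protocol builds on: regress the measured response on true flaw size, assume Gaussian residuals, and convert the fit into a detection (or, analogously, characterization) probability against a decision threshold. The simulated data, coefficients and threshold below are illustrative assumptions, not values from the case studies:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def pod_from_ahat_vs_a(a, ahat, decision_threshold):
    """Classic 'ahat-versus-a' model: fit ahat = b0 + b1*a + e with Gaussian e,
    then evaluate POD(a) = P(ahat > decision_threshold | a)."""
    b1, b0 = np.polyfit(a, ahat, 1)
    resid = ahat - (b0 + b1 * a)
    sigma = resid.std(ddof=2)
    pod = lambda size: norm.cdf((b0 + b1 * size - decision_threshold) / sigma)
    return b0, b1, sigma, pod

if __name__ == "__main__":
    a = rng.uniform(0.2, 2.0, 80)                        # true flaw sizes (mm), illustrative
    ahat = 0.05 + 0.9 * a + rng.normal(0, 0.1, a.size)   # simulated NDE responses
    b0, b1, sigma, pod = pod_from_ahat_vs_a(a, ahat, decision_threshold=0.5)
    for size in (0.3, 0.6, 1.0):
        print(f"POD({size:.1f} mm) = {pod(size):.2f}")
```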
Network resilience in the face of health system reform.
Sheaff, Rod; Benson, Lawrence; Farbus, Lou; Schofield, Jill; Mannion, Russell; Reeves, David
2010-03-01
Many health systems now use networks as governance structures. Network 'macroculture' is the complex of artefacts, espoused values and unarticulated assumptions through which network members coordinate network activities. Knowledge of how network macroculture develops is therefore of value for understanding how health networks operate, how health system reforms affect them, and how networks function (and can be used) as governance structures. To examine how quasi-market reforms impact upon health networks' macrocultures, we systematically compared longitudinal case studies (2006-2008) of these impacts across two care networks, a programme network and a user-experience network in the English NHS. We conducted interviews with key informants, focus groups, non-participant observations of meetings and analyses of key documents. We found that in these networks, artefacts adapted to health system reform faster than espoused values did, and the latter adapted faster than basic underlying assumptions. These findings contribute to knowledge by providing empirical support for theories which hold that changes in networks' core practical activity are what stimulate changes in other aspects of network macroculture. The most powerful way of using network macroculture to manage the formation and operation of health networks therefore appears to be by focusing managerial activity on the ways in which networks produce their core artefacts. 2009 Elsevier Ltd. All rights reserved.
Initial condition of stochastic self-assembly
NASA Astrophysics Data System (ADS)
Davis, Jason K.; Sindi, Suzanne S.
2016-02-01
The formation of a stable protein aggregate is regarded as the rate-limiting step in the establishment of prion diseases. In these systems, once aggregates reach a critical size the growth process accelerates and thus the waiting time until the appearance of the first critically sized aggregate is a key determinant of disease onset. In addition to prion diseases, aggregation and nucleation are central steps of many physical, chemical, and biological processes. Previous studies have examined the first-arrival time at a critical nucleus size during homogeneous self-assembly under the assumption that at time t = 0 the system was in the all-monomer state. However, in order to compare to in vivo biological experiments where protein constituents inherited by a newly born cell likely contain intermediate aggregates, other possibilities must be considered. We consider one such possibility by conditioning the unique ergodic size distribution on subcritical aggregate sizes; this least-informed distribution is then used as an initial condition. We make the claim that this initial condition carries fewer assumptions than an all-monomer one and verify that it can yield significantly different averaged waiting times relative to the all-monomer condition under various models of assembly.
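A toy illustration of the comparison described, under strong simplifying assumptions that are not the paper's model: a single aggregate grows or shrinks by one monomer at a time, and the Gillespie-simulated waiting time to the critical size is compared for an all-monomer start versus a start drawn from an illustrative subcritical size distribution:

```python
import numpy as np

rng = np.random.default_rng(11)

def first_passage_time(start_size, n_crit=20, grow=1.0, shrink=0.8):
    """Waiting time for a single aggregate to first reach size n_crit in a simple
    birth-death (monomer attachment/detachment) chain, via the Gillespie algorithm."""
    size, t = start_size, 0.0
    while size < n_crit:
        rates = [grow, shrink if size > 1 else 0.0]
        total = sum(rates)
        t += rng.exponential(1.0 / total)
        size += 1 if rng.random() < rates[0] / total else -1
    return t

if __name__ == "__main__":
    all_monomer = np.mean([first_passage_time(1) for _ in range(2000)])
    # Illustrative geometric weights concentrated on small sizes, standing in
    # for a subcritical initial size distribution.
    sizes = np.arange(1, 20)
    weights = 0.8 ** sizes
    weights /= weights.sum()
    preseeded = np.mean([first_passage_time(rng.choice(sizes, p=weights))
                         for _ in range(2000)])
    print(f"mean waiting time from all-monomer start:        {all_monomer:.1f}")
    print(f"mean waiting time from subcritical distribution: {preseeded:.1f}")
```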
Maharaj, Akash V.; Zhang, Yi; Ramshaw, B. J.; ...
2016-03-01
Using an exact numerical solution and semiclassical analysis, we investigate quantum oscillations (QOs) in a model of a bilayer system with an anisotropic (elliptical) electron pocket in each plane. Key features of QO experiments in the high temperature superconducting cuprate YBCO can be reproduced by such a model, in particular the pattern of oscillation frequencies (which reflect “magnetic breakdown” between the two pockets) and the polar and azimuthal angular dependence of the oscillation amplitudes. However, the requisite magnetic breakdown is possible only under the assumption that the horizontal mirror plane symmetry is spontaneously broken and that the bilayer tunneling $t_\perp$ is substantially renormalized from its ‘bare’ value. Lastly, under the assumption that $t_\perp = \tilde{Z}\, t_\perp^{(0)}$, where $\tilde{Z}$ is a measure of the quasiparticle weight, this suggests that $\tilde{Z} \lesssim 1/20$. Detailed comparisons with new YBa2Cu3O6.58 QO data, taken over a very broad range of magnetic field, confirm specific predictions made by the breakdown scenario.
Relative Reinforcer Rates and Magnitudes Do Not Control Concurrent Choice Independently
ERIC Educational Resources Information Center
Elliffe, Douglas; Davison, Michael; Landon, Jason
2008-01-01
One assumption of the matching approach to choice is that different independent variables control choice independently of each other. We tested this assumption for reinforcer rate and magnitude in an extensive parametric experiment. Five pigeons responded for food reinforcement on switching-key concurrent variable-interval variable-interval…
Medical cost analysis: application to colorectal cancer data from the SEER Medicare database.
Bang, Heejung
2005-10-01
Incompleteness is a key feature of most survival data. Numerous well established statistical methodologies and algorithms exist for analyzing life or failure time data. However, induced censorship invalidates the use of those standard analytic tools for some survival-type data such as medical costs. In this paper, some valid methods currently available for analyzing censored medical cost data are reviewed. Some cautionary findings under different assumptions are envisioned through application to medical costs from colorectal cancer patients. Cost analysis should be suitably planned and carefully interpreted under various meaningful scenarios even with judiciously selected statistical methods. This approach would be greatly helpful to policy makers who seek to prioritize health care expenditures and to assess the elements of resource use.
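One of the standard valid approaches for censored cost data is to weight complete (uncensored) cases inversely by the Kaplan-Meier estimate of the censoring distribution. The sketch below implements that simple weighted mean on simulated data; it is a generic illustration of the class of methods reviewed, not the specific analysis of the SEER-Medicare colorectal cancer data:

```python
import numpy as np

def km_censoring_survival(time, death):
    """Kaplan-Meier estimate of the censoring survival function K(t), treating
    censoring (death == 0) as the 'event' of interest."""
    order = np.argsort(time)
    time, death = time[order], death[order]
    n = len(time)
    at_risk = n - np.arange(n)
    censor_event = (death == 0).astype(float)
    km = np.cumprod(1.0 - censor_event / at_risk)
    def K(t):
        idx = np.searchsorted(time, t, side="right") - 1
        return km[idx] if idx >= 0 else 1.0
    return K

def weighted_mean_cost(time, death, cost):
    """Inverse-probability-weighted mean cost: average of cost / K(T) over
    uncensored subjects (censored subjects receive weight zero)."""
    K = km_censoring_survival(time, death)
    weights = np.array([d / max(K(t), 1e-8) for t, d in zip(time, death)])
    return np.mean(weights * cost)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 500
    survival = rng.exponential(3.0, n)              # years until death
    censor = rng.uniform(0.5, 6.0, n)               # administrative censoring times
    time = np.minimum(survival, censor)
    death = (survival <= censor).astype(int)
    cost = 10_000 * time + rng.normal(0, 2_000, n)  # cost accumulates with follow-up
    print(f"IPW mean cost ≈ {weighted_mean_cost(time, death, cost):,.0f}")
```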
A contemporary approach to validity arguments: a practical guide to Kane's framework.
Cook, David A; Brydges, Ryan; Ginsburg, Shiphra; Hatala, Rose
2015-06-01
Assessment is central to medical education and the validation of assessments is vital to their use. Earlier validity frameworks suffer from a multiplicity of types of validity or failure to prioritise among sources of validity evidence. Kane's framework addresses both concerns by emphasising key inferences as the assessment progresses from a single observation to a final decision. Evidence evaluating these inferences is planned and presented as a validity argument. We aim to offer a practical introduction to the key concepts of Kane's framework that educators will find accessible and applicable to a wide range of assessment tools and activities. All assessments are ultimately intended to facilitate a defensible decision about the person being assessed. Validation is the process of collecting and interpreting evidence to support that decision. Rigorous validation involves articulating the claims and assumptions associated with the proposed decision (the interpretation/use argument), empirically testing these assumptions, and organising evidence into a coherent validity argument. Kane identifies four inferences in the validity argument: Scoring (translating an observation into one or more scores); Generalisation (using the score[s] as a reflection of performance in a test setting); Extrapolation (using the score[s] as a reflection of real-world performance), and Implications (applying the score[s] to inform a decision or action). Evidence should be collected to support each of these inferences and should focus on the most questionable assumptions in the chain of inference. Key assumptions (and needed evidence) vary depending on the assessment's intended use or associated decision. Kane's framework applies to quantitative and qualitative assessments, and to individual tests and programmes of assessment. Validation focuses on evaluating the key claims, assumptions and inferences that link assessment scores with their intended interpretations and uses. The Implications and associated decisions are the most important inferences in the validity argument. © 2015 John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Flynn, Erin E.; Schachter, Rachel E.
2017-01-01
This study investigated eight prekindergarten teachers' underlying assumptions about how children learn, and how these assumptions were used to inform and enact instruction. By contextualizing teachers' knowledge and understanding as it is used in practice we were able to provide unique insight into the work of teaching. Participants focused on…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, Kyle W.; Gauntt, Randall O.; Cardoni, Jeffrey N.
2013-11-01
Data, a brief description of key boundary conditions, and results of Sandia National Laboratories’ ongoing MELCOR analysis of the Fukushima Unit 2 accident are given for the reactor core isolation cooling (RCIC) system. Important assumptions and related boundary conditions in the current analysis additional to or different than what was assumed/imposed in the work of SAND2012-6173 are identified. This work is for the U.S. Department of Energy’s Nuclear Energy University Programs fiscal year 2014 Reactor Safety Technologies Research and Development Program RC-7: RCIC Performance under Severe Accident Conditions.
An integrated communications demand model
NASA Astrophysics Data System (ADS)
Doubleday, C. F.
1980-11-01
A computer model of communications demand is being developed to permit dynamic simulations of the long-term evolution of demand for communications media in the U.K. to be made under alternative assumptions about social, economic and technological trends in British Telecom's business environment. The context and objectives of the project and the potential uses of the model are reviewed, and four key concepts in the demand for communications media, around which the model is being structured are discussed: (1) the generation of communications demand; (2) substitution between media; (3) technological convergence; and (4) competition. Two outline perspectives on the model itself are given.
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
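For context, the classic two-subclass change-in-ratio estimator, which assumes equal encounter probabilities across subclasses (the assumption the paper generalizes using effort information), can be written in a few lines; the harvest numbers below are hypothetical:

```python
def cir_abundance(p1, p2, removals_x, removals_total):
    """Classic two-subclass change-in-ratio estimator of initial population size.

    p1, p2: observed proportions of subclass x before and after the removal
    period; removals_x, removals_total: known removals of subclass x and of both
    subclasses combined. Assumes equal encounter probabilities for the subclasses.
    """
    if abs(p1 - p2) < 1e-12:
        raise ValueError("subclass proportions must change for CIR to be estimable")
    return (removals_x - removals_total * p2) / (p1 - p2)

if __name__ == "__main__":
    # Illustrative numbers: 60% males before harvest, 40% after,
    # 300 males removed out of 350 total removals.
    print(f"estimated pre-removal population ≈ {cir_abundance(0.60, 0.40, 300, 350):.0f}")
```

Sanity check on the illustrative numbers: a population of 800 with 480 males (60%) loses 300 males and 50 females, leaving 180 of 450 (40%) male, which the estimator recovers exactly.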
Art meets science: The Cosmopolitan Chicken Research Project.
Stinckens, A; Vereijken, A; Ons, E; Konings, P; Van As, P; Cuppens, H; Moreau, Y; Sakai, R; Aerts, J; Goddeeris, B; Buys, N; Vanmechelen, K; Cassiman, J J
2015-01-01
The Cosmopolitan Chicken Project is an artistic undertaking of renowned artist Koen Vanmechelen. In this project, the artist interbreeds domestic chickens from different countries aiming at the creation of a true Cosmopolitan Chicken as a symbol for global diversity. The unifying theme is the chicken and the egg, symbols that link scientific, political, philosophical and ethical issues. The Cosmopolitan Chicken Research Project is the scientific component of this artwork. Based on state-of-the-art genomic techniques, the project studies the effect of the crossing of chickens on the genetic diversity. Also, this research is potentially applicable to the human population. The setup of the CCP is quite different from traditional breeding experiments: starting from the crossbreed of two purebred chickens (Mechelse Koekoek x Poule de Bresse), every generation is crossed with a few animals from another breed. For 26 of these purebred and crossbred populations, genetic diversity was measured (1) under the assumption that populations were sufficiently large to maintain all informative SNPs within a generation and (2) under the circumstances of the CCP breeding experiment. Under the first assumption, a steady increase in genetic diversity was witnessed over the consecutive generations, thus indeed indicating the creation of a "Cosmopolitan Chicken Genome". However, under the conditions of the CCP, which reflects the reality within the human population, diversity is seen to fluctuate within given boundaries instead of steadily increasing. A possible interpretation is that, in humans, an evolutionary optimum in genetic diversity has been reached.
Privacy Preserving Facial and Fingerprint Multi-biometric Authentication
NASA Astrophysics Data System (ADS)
Anzaku, Esla Timothy; Sohn, Hosik; Ro, Yong Man
The cases of identity theft can be mitigated by the adoption of secure authentication methods. Biohashing and its variants, which utilize secret keys and biometrics, are promising methods for secure authentication; however, their shortcoming is the degraded performance under the assumption that secret keys are compromised. In this paper, we extend the concept of Biohashing to multi-biometrics - facial and fingerprint traits. We chose these traits because they are widely used; however, little research attention has been given to designing privacy preserving multi-biometric systems using them. Instead of just using a single modality (facial or fingerprint), we present a framework for using both modalities. The improved performance of the proposed method, using face and fingerprint, compared with either trait used in isolation, is evaluated using two chimerical bimodal databases formed from publicly available facial and fingerprint databases.
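A minimal single-modality sketch of the Biohashing idea this work extends: project a fixed-length biometric feature vector onto pseudorandom directions derived from the user's secret key and binarise the projections. The feature vectors here are random stand-ins rather than face or fingerprint templates, and the key-to-seed derivation is an illustrative assumption, not the scheme of the paper:

```python
import hashlib
import numpy as np

def biohash(features, secret_key, n_bits=64):
    """BioHashing sketch: key-seeded random projection followed by thresholding.

    The same key plus a fresh sample of the same biometric should give a code
    within a small Hamming distance of the enrolled code; a stolen key without
    the biometric (or vice versa) should not.
    """
    seed = int.from_bytes(hashlib.sha256(secret_key.encode()).digest()[:4], "big")
    proj = np.random.default_rng(seed).standard_normal((features.size, n_bits))
    q, _ = np.linalg.qr(proj)                # orthonormal, key-derived directions
    return (features @ q > 0).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    enrolled = rng.standard_normal(128)                   # stand-in feature vector
    probe = enrolled + 0.2 * rng.standard_normal(128)     # noisy re-acquisition
    code_a = biohash(enrolled, "user-secret-token")
    code_b = biohash(probe, "user-secret-token")
    print("Hamming distance (genuine attempt):", int(np.sum(code_a != code_b)))
```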
Costing bias in economic evaluations.
Frappier, Julie; Tremblay, Gabriel; Charny, Mark; Cloutier, L Martin
2015-01-01
Determining the cost-effectiveness of healthcare interventions is key to the decision-making process in healthcare. Cost comparisons are used to demonstrate the economic value of treatment options, to evaluate the impact on the insurer budget, and are often used as a key criterion in treatment comparison and comparative effectiveness; however, little guidance is available to researchers for establishing the costing of clinical events and resource utilization. Different costing methods exist, and the choice of underlying assumptions appears to have a significant impact on the results of the costing analysis. This editorial describes the importance of the choice of the costing technique and its potential impact on the relative cost of treatment options. This editorial also calls for a more efficient approach to healthcare intervention costing in order to ensure the use of consistent costing in the decision-making process.
Origins and Traditions in Comparative Education: Challenging Some Assumptions
ERIC Educational Resources Information Center
Manzon, Maria
2018-01-01
This article questions some of our assumptions about the history of comparative education. It explores new scholarship on key actors and ways of knowing in the field. Building on the theory of the social constructedness of the field of comparative education, the paper elucidates how power shapes our scholarly histories and identities.
ERIC Educational Resources Information Center
Gore, Jennifer; Holmes, Kathryn; Smith, Max; Southgate, Erica; Albright, Jim
2015-01-01
Recent Australian government targets for higher education participation have produced a flurry of activity focused on raising the aspirations of students from low socioeconomic status (SES) backgrounds. In this paper we test two key assumptions underpinning much of this activity: that students from low-SES backgrounds hold lower career…
Gay Gifted Adolescent Suicide and Suicidal Ideation Literature: Research Barriers and Limitations
ERIC Educational Resources Information Center
Sedillo, P. J.
2015-01-01
Little empirical research has been conducted regarding suicide and suicidal ideation about gay gifted adolescents, so most of what is presented in the literature is based on theories and assumptions. One key assumption was that the psychological challenges of gay gifted youth stemming from sexual identity and giftedness contribute to suicidal…
Zipf’s word frequency law in natural language: A critical review and future directions
2014-01-01
The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf’s law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf’s law and are then used to evaluate many of the theoretical explanations of Zipf’s law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf’s law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data. PMID:24664880
Provably secure identity-based identification and signature schemes from code assumptions
Song, Bo; Zhao, Yiming
2017-01-01
Code-based cryptography is one of the few alternatives supposed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated or challenged. In this paper, we construct provably secure IBI/IBS schemes from code assumptions against impersonation under active and concurrent attacks through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature), and a security enhancement Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provable security. PMID:28809940
Provably secure identity-based identification and signature schemes from code assumptions.
Song, Bo; Zhao, Yiming
2017-01-01
Code-based cryptography is one of the few alternatives expected to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated or challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (the PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but also provable security.
Strategy evolution driven by switching probabilities in structured multi-agent systems
NASA Astrophysics Data System (ADS)
Zhang, Jianlei; Chen, Zengqiang; Li, Zhiqi
2017-10-01
The evolutionary mechanism driving the commonly seen cooperation among unrelated individuals is puzzling. Related models for evolutionary games on graphs traditionally assume that players imitate their successful neighbours with higher benefits. Notably, an implicit assumption here is that players are always able to acquire the required pay-off information. To relax this restrictive assumption, a contact-based model has been proposed, where switching probabilities between strategies drive the strategy evolution. However, the explicit and quantified relation between a player's switching probability for her strategies and the number of her neighbours remains unknown. This is especially a key point in heterogeneously structured systems, where players may differ in the numbers of their neighbours. Focusing on this, here we present an augmented model by introducing an attenuation coefficient and evaluate its influence on the evolution dynamics. Results show that the individual influence on others is negatively correlated with the contact numbers specified by the network topologies. Results further provide the conditions under which the coexisting strategies can be calculated analytically.
Identification of Extraterrestrial Microbiology
NASA Technical Reports Server (NTRS)
Flynn, Michael; Rasky, Daniel J. (Technical Monitor)
1998-01-01
Many of the key questions addressed in the field of Astrobiology are based upon the assumption that life exists, or at one time existed, in locations throughout the universe. However, this assumption is just that, an assumption; no definitive proof exists. On Earth, life has been found to exist in many diverse environments. We believe that this tendency towards diversity supports the assumption that life could exist throughout the universe. This paper provides a summary of several innovative techniques for the detection of extraterrestrial life forms. The primary questions addressed are: does life currently exist beyond Earth, and if it does, is that life evolutionarily related to life on Earth?
A financial planning model for estimating hospital debt capacity.
Hopkins, D S; Heath, D; Levin, P J
1982-01-01
A computer-based financial planning model was formulated to measure the impact of a major capital improvement project on the fiscal health of Stanford University Hospital. The model had to be responsive to many variables and easy to use, so as to allow for the testing of numerous alternatives. Special efforts were made to identify the key variables that needed to be presented in the model and to include all known links between capital investment, debt, and hospital operating expenses. Growth in the number of patient days of care was singled out as a major source of uncertainty that would have profound effects on the hospital's finances. Therefore this variable was subjected to special scrutiny in terms of efforts to gauge expected demographic trends and market forces. In addition, alternative base runs of the model were made under three distinct patient-demand assumptions. Use of the model enabled planners at the Stanford University Hospital (a) to determine that a proposed modernization plan was financially feasible under a reasonable (that is, not unduly optimistic) set of assumptions and (b) to examine the major sources of risk. Other than patient demand, these sources were found to be gross revenues per patient, operating costs, and future limitations on government reimbursement programs. When the likely financial consequences of these risks were estimated, both separately and in combination, it was determined that even if two or more assumptions took a somewhat more negative turn than was expected, the hospital would be able to offset adverse consequences by a relatively minor reduction in operating costs. PMID:7111658
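A minimal sketch, with entirely hypothetical figures, of how alternative patient-demand assumptions can be run through a simplified revenue, cost and debt-service projection; the Stanford model itself was far richer (reimbursement limits, operating-cost scenarios, capital structure).

```python
import numpy as np

# Hypothetical inputs for illustration only; the actual model used Stanford-specific data.
years = np.arange(1, 11)
base_patient_days = 120_000
demand_growth = {"low": 0.00, "mid": 0.02, "high": 0.04}   # three patient-demand assumptions
revenue_per_day, cost_per_day = 900.0, 780.0
annual_debt_service = 9_000_000          # payments on the capital-improvement debt

for scenario, g in demand_growth.items():
    patient_days = base_patient_days * (1 + g) ** years
    net_income = patient_days * (revenue_per_day - cost_per_day) - annual_debt_service
    print(f"{scenario:>4}: worst-year net income ${net_income.min()/1e6:6.2f}M")
```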
NASA Astrophysics Data System (ADS)
Deng, Zongyi
2001-05-01
The distinction between key ideas in teaching a high school science and key ideas in the corresponding discipline of science has been largely ignored in scholarly discourse about what science teachers should teach and about what they should know. This article clarifies this distinction through exploring how and why key ideas in teaching high school physics differ from key ideas in the discipline of physics. Its theoretical underpinnings include Dewey's (1902/1990) distinction between the psychological and the logical and Harré's (1986) epistemology of science. It analyzes how and why the key ideas in teaching color, the speed of light, and light interference at the high school level differ from the key ideas at the disciplinary level. The thesis is that key ideas in teaching high school physics can differ from key ideas in the discipline in some significant ways, and that the differences manifest Dewey's distinction. As a result, the article challenges the assumption of equating key ideas in teaching a high school science with key ideas in the corresponding discipline of science, and the assumption that having a college degree in science is sufficient to teach high school science. Furthermore, the article expands the concept of pedagogical content knowledge by arguing that key ideas in teaching high school physics constitute an essential component.
Nasejje, Justine B; Mwambi, Henry
2017-09-07
Uganda, like other Sub-Saharan African countries, has a high under-five child mortality rate. To inform policy on intervention strategies, sound statistical methods are required to critically identify factors strongly associated with under-five child mortality rates. The Cox proportional hazards model has been a common choice in analysing data to understand factors strongly associated with high child mortality rates, taking age as the time-to-event variable. However, due to its restrictive proportional hazards (PH) assumption, some covariates of interest which do not satisfy the assumption are often excluded in the analysis to avoid mis-specifying the model; otherwise, using covariates that clearly violate the assumption would yield invalid results. Survival trees and random survival forests are increasingly becoming popular in analysing survival data, particularly in the case of large survey data, and could be attractive alternatives to models with the restrictive PH assumption. In this article, we adopt random survival forests, which have never before been used to study factors affecting under-five child mortality rates in Uganda, using Demographic and Health Survey data. Thus the first part of the analysis is based on the classical Cox PH model and the second part is based on random survival forests in the presence of covariates that do not necessarily satisfy the PH assumption. Random survival forests and the Cox proportional hazards model agree that the sex of the household head, the sex of the child and the number of births in the past year are strongly associated with under-five child mortality in Uganda; all three of these covariates satisfy the PH assumption. Random survival forests further demonstrated that covariates that were originally excluded from the earlier analysis due to violation of the PH assumption were important in explaining under-five child mortality rates. These covariates include the number of children under the age of five in a household, the number of births in the past 5 years, wealth index, the total number of children ever born and the child's birth order. The results further indicated that the predictive performance of random survival forests built using covariates including those that violate the PH assumption was higher than that of random survival forests built using only covariates that satisfy the PH assumption. Random survival forests are appealing methods for analysing public health data to understand factors strongly associated with under-five child mortality rates, especially in the presence of covariates that violate the proportional hazards assumption.
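A hedged sketch of the two-part analysis described above, assuming a hypothetical file under5_survival.csv with columns age_months, died and numeric covariates; it uses the lifelines and scikit-survival packages rather than the authors' software, and all parameter choices are illustrative.

```python
import pandas as pd
from lifelines import CoxPHFitter
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

# Hypothetical child-survival data frame; the actual study used DHS survey data.
df = pd.read_csv("under5_survival.csv")   # columns: age_months, died, plus numeric covariates
df["died"] = df["died"].astype(bool)

# Part 1: Cox PH model, valid only for covariates satisfying the proportional hazards assumption.
cph = CoxPHFitter()
cph.fit(df, duration_col="age_months", event_col="died")
cph.check_assumptions(df, p_value_threshold=0.05)   # flags covariates that violate PH

# Part 2: random survival forest, which makes no PH assumption, so all covariates can be kept.
X = df.drop(columns=["age_months", "died"])
y = Surv.from_dataframe("died", "age_months", df)
rsf = RandomSurvivalForest(n_estimators=500, min_samples_leaf=15, random_state=0)
rsf.fit(X, y)
print("concordance on training data:", rsf.score(X, y))
```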
Stimulus-specific variability in color working memory with delayed estimation.
Bae, Gi-Yeul; Olkkonen, Maria; Allred, Sarah R; Wilson, Colin; Flombaum, Jonathan I
2014-04-08
Working memory for color has been the central focus in an ongoing debate concerning the structure and limits of visual working memory. Within this area, the delayed estimation task has played a key role. An implicit assumption in color working memory research generally, and delayed estimation in particular, is that the fidelity of memory does not depend on color value (and, relatedly, that experimental colors have been sampled homogeneously with respect to discriminability). This assumption is reflected in the common practice of collapsing across trials with different target colors when estimating memory precision and other model parameters. Here we investigated whether or not this assumption is secure. To do so, we conducted delayed estimation experiments following standard practice with a memory load of one. We discovered that different target colors evoked response distributions that differed widely in dispersion and that these stimulus-specific response properties were correlated across observers. Subsequent experiments demonstrated that stimulus-specific responses persist under higher memory loads and that at least part of the specificity arises in perception and is eventually propagated to working memory. Posthoc stimulus measurement revealed that rendered stimuli differed from nominal stimuli in both chromaticity and luminance. We discuss the implications of these deviations for both our results and those from other working memory studies.
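A minimal sketch of summarising dispersion per target colour instead of collapsing across trials, assuming a hypothetical file delayed_estimation.csv with target and response hues in degrees; the circular statistics shown are illustrative and are not the authors' mixture-model analysis.

```python
import numpy as np
import pandas as pd
from scipy.stats import circstd

# Hypothetical delayed-estimation data: one row per trial, hues in degrees on the colour wheel.
trials = pd.read_csv("delayed_estimation.csv")   # columns: target_hue, response_hue
error = np.deg2rad(trials["response_hue"] - trials["target_hue"])

# Instead of collapsing across target colours, summarise response dispersion per target hue.
per_color_sd = (pd.DataFrame({"target_hue": trials["target_hue"], "error": error})
                .groupby("target_hue")["error"]
                .apply(lambda e: circstd(e, high=np.pi, low=-np.pi)))
print(per_color_sd.sort_values(ascending=False).head())
```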
Osvath, Mathias; Martin-Ordas, Gema
2014-11-05
One of the most contested areas in the field of animal cognition is non-human future-oriented cognition. We critically examine key underlying assumptions in the debate, which is mainly preoccupied with certain dichotomous positions, the most prevalent being whether or not 'real' future orientation is uniquely human. We argue that future orientation is a theoretical construct threatening to lead research astray. Cognitive operations occur in the present moment and can be influenced only by prior causation and the environment, at the same time that most appear directed towards future outcomes. Regarding the current debate, future orientation becomes a question of where on various continua cognition becomes 'truly' future-oriented. We question both the assumption that episodic cognition is the most important process in future-oriented cognition and the assumption that future-oriented cognition is uniquely human. We review the studies on future-oriented cognition in the great apes to find little doubt that our closest relatives possess such ability. We conclude by urging that future-oriented cognition not be viewed as expression of some select set of skills. Instead, research into future-oriented cognition should be approached more like research into social and physical cognition. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Osvath, Mathias; Martin-Ordas, Gema
2014-01-01
One of the most contested areas in the field of animal cognition is non-human future-oriented cognition. We critically examine key underlying assumptions in the debate, which is mainly preoccupied with certain dichotomous positions, the most prevalent being whether or not ‘real’ future orientation is uniquely human. We argue that future orientation is a theoretical construct threatening to lead research astray. Cognitive operations occur in the present moment and can be influenced only by prior causation and the environment, at the same time that most appear directed towards future outcomes. Regarding the current debate, future orientation becomes a question of where on various continua cognition becomes ‘truly’ future-oriented. We question both the assumption that episodic cognition is the most important process in future-oriented cognition and the assumption that future-oriented cognition is uniquely human. We review the studies on future-oriented cognition in the great apes to find little doubt that our closest relatives possess such ability. We conclude by urging that future-oriented cognition not be viewed as expression of some select set of skills. Instead, research into future-oriented cognition should be approached more like research into social and physical cognition. PMID:25267827
Missing data in trial‐based cost‐effectiveness analysis: An incomplete journey
Gomes, Manuel; Carpenter, James R.
2018-01-01
Cost-effectiveness analyses (CEA) conducted alongside randomised trials provide key evidence for informing healthcare decision making, but missing data pose substantive challenges. Recently, there have been a number of developments in methods and guidelines addressing missing data in trials. However, it is unclear whether these developments have permeated CEA practice. This paper critically reviews the extent of and methods used to address missing data in recently published trial-based CEA. Issues of the Health Technology Assessment journal from 2013 to 2015 were searched. Fifty-two eligible studies were identified. Missing data were very common; the median proportion of trial participants with complete cost-effectiveness data was 63% (interquartile range: 47%-81%). The most common approach for the primary analysis was to restrict analysis to those with complete data (43%), followed by multiple imputation (30%). Half of the studies conducted some sort of sensitivity analyses, but only 2 (4%) considered possible departures from the missing-at-random assumption. Further improvements are needed to address missing data in cost-effectiveness analyses conducted alongside randomised trials. These should focus on limiting the extent of missing data, choosing an appropriate method for the primary analysis that is valid under contextually plausible assumptions, and conducting sensitivity analyses to departures from the missing-at-random assumption. PMID:29573044
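A hedged sketch of a multiple-imputation primary analysis for trial-based CEA, assuming a hypothetical file trial_cea.csv and a willingness-to-pay threshold of 20,000 per QALY; it uses scikit-learn's IterativeImputer as a stand-in for a full MICE implementation and omits Rubin's variance combination for brevity.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical trial data with missing costs/QALYs; column names are illustrative.
df = pd.read_csv("trial_cea.csv")   # columns: arm (0/1), cost, qaly, age, baseline_utility
m, rng = 20, np.random.RandomState(0)
threshold = 20_000   # willingness to pay per QALY
inb_draws = []

for _ in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=rng.randint(1_000_000))
    completed = pd.DataFrame(imp.fit_transform(df), columns=df.columns)
    completed["arm"] = completed["arm"].round().astype(int)
    by_arm = completed.groupby("arm")[["cost", "qaly"]].mean()
    inb = (threshold * (by_arm.loc[1, "qaly"] - by_arm.loc[0, "qaly"])
           - (by_arm.loc[1, "cost"] - by_arm.loc[0, "cost"]))
    inb_draws.append(inb)

# Point estimate pooled across imputations (full Rubin's-rules variance omitted here).
print("pooled incremental net benefit:", np.mean(inb_draws))
```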
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriegler, Elmar; Edmonds, James A.; Hallegatte, Stephane
2014-04-01
The paper presents the concept of shared climate policy assumptions as an important element of the new scenario framework. Shared climate policy assumptions capture key climate policy dimensions such as the type and scale of mitigation and adaptation measures. They are not specified in the socio-economic reference pathways, and therefore introduce an important third dimension to the scenario matrix architecture. Climate policy assumptions will have to be made in any climate policy scenario, and can have a significant impact on the scenario description. We conclude that a meaningful set of shared climate policy assumptions is useful for grouping individual climate policy analyses and facilitating their comparison. Shared climate policy assumptions should be designed to be policy relevant, and as a set to be broad enough to allow a comprehensive exploration of the climate change scenario space.
Conclusion: Agency in the face of complexity and the future of assumption-aware evaluation practice.
Morrow, Nathan; Nkwake, Apollo M
2016-12-01
This final chapter in the volume pulls together common themes from the diverse set of articles by a group of eight authors in this issue, and presents some reflections on the next steps for improving the ways in which evaluators work with assumptions. Collectively, the authors provide a broad overview of existing and emerging approaches to the articulation and use of assumptions in evaluation theory and practice. The authors reiterate the rationale and key terminology as a common basis for working with assumptions in program design and evaluation. They highlight some useful concepts and categorizations to promote more rigorous treatment of assumptions in evaluation. A three-tier framework for fostering agency for assumption-aware evaluation practice is proposed: agency for themselves (evaluators); agency for others (stakeholders); and agency for standards and principles. Copyright © 2016 Elsevier Ltd. All rights reserved.
Meta-analysis using Dirichlet process.
Muthukumarana, Saman; Tiwari, Ram C
2016-02-01
This article develops a Bayesian approach for meta-analysis using the Dirichlet process. The key aspect of the Dirichlet process in meta-analysis is the ability to assess evidence of statistical heterogeneity or variation in the underlying effects across studies while relaxing the distributional assumptions. We assume that the study effects are generated from a Dirichlet process. Under a Dirichlet process model, the study effects parameters have support on a discrete space and enable borrowing of information across studies while facilitating clustering among studies. We illustrate the proposed method by applying it to a dataset from the Program for International Student Assessment covering 30 countries. Results from the data analysis, simulation studies, and the log pseudo-marginal likelihood model selection procedure indicate that the Dirichlet process model performs better than conventional alternative methods. © The Author(s) 2012.
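A minimal sketch of drawing study effects from a (truncated) Dirichlet process via stick-breaking, with illustrative concentration and base-measure parameters; this is not the authors' sampler, only a demonstration of how a DP induces clustering across studies.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, mu0, tau0 = 1.0, 0.0, 1.0      # concentration and base-measure parameters (illustrative)
K = 50                                 # truncation level for the stick-breaking construction

# Stick-breaking weights: v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j)
v = rng.beta(1.0, alpha, size=K)
w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
atoms = rng.normal(mu0, tau0, size=K)  # atoms drawn from the base measure G0

# Study effects drawn from the (truncated) DP realisation: ties induce clustering of studies.
study_effects = rng.choice(atoms, size=30, p=w / w.sum())
print("number of distinct clusters among 30 studies:", len(np.unique(study_effects)))
```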
Parallel factor ChIP provides essential internal control for quantitative differential ChIP-seq.
Guertin, Michael J; Cullen, Amy E; Markowetz, Florian; Holding, Andrew N
2018-04-17
A key challenge in quantitative ChIP combined with high-throughput sequencing (ChIP-seq) is the normalization of data in the presence of genome-wide changes in occupancy. Analysis-based normalization methods were developed for transcriptomic data and these are dependent on the underlying assumption that total transcription does not change between conditions. For genome-wide changes in transcription factor (TF) binding, these assumptions do not hold true. The challenges in normalization are confounded by experimental variability during sample preparation, processing and recovery. We present a novel normalization strategy utilizing an internal standard of unchanged peaks for reference. Our method can be readily applied to monitor genome-wide changes by ChIP-seq that are otherwise lost or misrepresented through analytical normalization. We compare our approach to normalization by total read depth and two alternative methods that utilize external experimental controls to study TF binding. We successfully resolve the key challenges in quantitative ChIP-seq analysis and demonstrate its application by monitoring the loss of Estrogen Receptor-alpha (ER) binding upon fulvestrant treatment, ER binding in response to estradiol, ER-mediated change in H4K12 acetylation and profiling ER binding in patient-derived xenografts. This is supported by an adaptable pipeline to normalize and quantify differential TF binding genome-wide and generate metrics for differential binding at individual sites.
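A hedged sketch of normalising a peak-by-sample count matrix against an internal set of unchanged reference peaks, assuming hypothetical input files; the median-of-ratios style size factor is an illustration of the idea, not the authors' published pipeline.

```python
import numpy as np
import pandas as pd

# Hypothetical count matrix (peaks x samples) and an index of reference peaks
# believed to be unchanged between conditions.
counts = pd.read_csv("peak_counts.csv", index_col=0)        # rows: peaks, columns: samples
reference_peaks = pd.read_csv("unchanged_peaks.txt", header=None)[0]

ref = counts.loc[reference_peaks]
# Median-of-ratios style size factor computed only on the internal-control peaks
# (assumes nonzero counts at those peaks).
log_geo_mean = np.log(ref).mean(axis=1)
size_factors = np.exp(np.log(ref).sub(log_geo_mean, axis=0).median(axis=0))

normalized = counts.div(size_factors, axis=1)
print(size_factors)
```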
NASA Astrophysics Data System (ADS)
Walker, A. P.; Zaehle, S.; Medlyn, B. E.; De Kauwe, M. G.; Asao, S.; Hickler, T.; Lomas, M. R.; Pak, B. C.; Parton, W. J.; Quegan, S.; Ricciuto, D. M.; Wang, Y.; Warlind, D.; Norby, R. J.
2013-12-01
Predicting forest carbon (C) sequestration requires understanding the processes leading to rates of biomass C accrual (net primary productivity; NPP) and loss (turnover). In temperate forest ecosystems, experiments and models have shown that feedback via progressive nitrogen limitation (PNL) is a key driver of NPP responses to elevated CO2. In this analysis we show that while still important, PNL may not be as severe a constraint on NPP as indicated by some studies and that the response of turnover to elevated CO2 could be as important, especially in the near to medium term. Seven terrestrial ecosystem and biosphere models that couple C and N cycles with varying assumptions and complexity were used to simulate responses over 300 years to a step change in CO2 to 550 ppmv. Simulations were run for the evergreen needleleaf Duke forest and the deciduous broadleaf Oak Ridge forest FACE experiments. Whether or not a model simulated PNL under elevated CO2 depended on model structure and the timescale of observation. Avoiding PNL depended on mechanisms that reduced ecosystem N losses. The two key assumptions that reduced N losses were whether plant N uptake was based on plant N demand and whether ecosystem N losses (volatilisation and leaching) were dependent on the concentration of N in the soil solution. Assumptions on allocation and turnover resulted in very different responses of turnover to elevated CO2, which had profound implications for C sequestration. For example, at equilibrium CABLE2.0 predicted an increase in vegetation C sequestration despite decreased NPP, while O-CN predicted much less vegetation C sequestration than would be expected from predicted NPP increases alone. Generally elevated CO2 favoured a shift in C partitioning towards longer-lived wood biomass, which increased vegetation turnover and enhanced C sequestration. Enhanced wood partitioning was overlaid by increases or decreases in self-thinning, depending on whether self-thinning was simply a function of forest structure or of both structure and NPP. Self-thinning assumptions altered equilibrium C sequestration and were extremely important for the immediate transient response and near-term prediction of C sequestration.
NASA Astrophysics Data System (ADS)
Klasic, M. R.; Ekstrom, J.; Bedsworth, L. W.; Baker, Z.
2017-12-01
Extreme events such as wildfires, droughts, and flooding are projected to be more frequent and intense under a changing climate, increasing challenges to water quality management. To protect and improve public health, drinking water utility managers need to understand and plan for climate change and extreme events. This three year study began with the assumption that improved climate projections were key to advancing climate adaptation at the local level. Through a survey (N = 259) and interviews (N = 61) with California drinking water utility managers during the peak of the state's recent drought, we found that scientific information was not a key barrier hindering adaptation. Instead, we found that managers fell into three distinct mental models based on their interaction with, perceptions, and attitudes, towards scientific information and the future of water in their system. One of the mental models, "modeled futures", is a concept most in line with how climate change scientists talk about the use of information. Drinking water utilities falling into the "modeled future" category tend to be larger systems that have adequate capacity to both receive and use scientific information. Medium and smaller utilities in California, that more often serve rural low income communities, tend to fall into the other two mental models, "whose future" and "no future". We show evidence that there is an implicit presumption that all drinking water utility managers should strive to align with "modeled future" mental models. This presentation questions this assumption as it leaves behind many utilities that need to adapt to climate change (several thousand in California alone), but may not have the technical, financial, managerial, or other capacity to do so. It is clear that no single solution or pathway to drought resilience exists for water utilities, but we argue that a more explicit understanding and definition of what it means to be a resilient drinking water utility is necessary. By highlighting, then questioning, the assumption that all utility managers should strive to have "modeled future" mentalities, this presentation seeks to foster an open dialogue around which pathway or pathways are most feasible for supporting drinking water utility managers planning for climate change.
Economic evaluation in chronic pain: a systematic review and de novo flexible economic model.
Sullivan, W; Hirst, M; Beard, S; Gladwell, D; Fagnani, F; López Bastida, J; Phillips, C; Dunlop, W C N
2016-07-01
There is unmet need in patients suffering from chronic pain, yet innovation may be impeded by the difficulty of justifying economic value in a field beset by data limitations and methodological variability. A systematic review was conducted to identify and summarise the key areas of variability and limitations in modelling approaches in the economic evaluation of treatments for chronic pain. The results of the literature review were then used to support the development of a fully flexible open-source economic model structure, designed to test structural and data assumptions and act as a reference for future modelling practice. The key model design themes identified from the systematic review included: time horizon; titration and stabilisation; number of treatment lines; choice/ordering of treatment; and the impact of parameter uncertainty (given reliance on expert opinion). Exploratory analyses using the model to compare a hypothetical novel therapy versus morphine as first-line treatments showed cost-effectiveness results to be sensitive to structural and data assumptions. Assumptions about the treatment pathway and choice of time horizon were key model drivers. Our results suggest structural model design and data assumptions may have driven previous cost-effectiveness results and ultimately decisions based on economic value. We therefore conclude that it is vital that future economic models in chronic pain are designed to be fully transparent and hope our open-source code is useful in order to aspire to a common approach to modelling pain that includes robust sensitivity analyses to test structural and parameter uncertainty.
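A minimal sketch of a two-treatment-line Markov cohort model, with entirely hypothetical transition probabilities, costs and utilities, showing how the chosen time horizon changes the totals; the published open-source model is far more flexible (titration, multiple lines, treatment ordering).

```python
import numpy as np

# States: on first-line therapy, on second-line therapy, discontinued (illustrative only).
P = np.array([[0.90, 0.07, 0.03],     # annual transition probabilities
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
cost = np.array([1_800.0, 1_200.0, 400.0])      # annual cost per state
utility = np.array([0.70, 0.62, 0.50])          # annual QALY weight per state

def run(horizon_years, discount=0.035):
    dist = np.array([1.0, 0.0, 0.0])            # whole cohort starts on first-line therapy
    total_cost = total_qaly = 0.0
    for t in range(horizon_years):
        d = 1.0 / (1.0 + discount) ** t
        total_cost += d * dist @ cost
        total_qaly += d * dist @ utility
        dist = dist @ P                           # advance the cohort one cycle
    return total_cost, total_qaly

for horizon in (5, 10, 25):
    c, q = run(horizon)
    print(f"{horizon:>2}-year horizon: cost {c:8.0f}, QALYs {q:5.2f}")
```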
Mathematical modelling of clostridial acetone-butanol-ethanol fermentation.
Millat, Thomas; Winzer, Klaus
2017-03-01
Clostridial acetone-butanol-ethanol (ABE) fermentation features a remarkable shift in the cellular metabolic activity from acid formation, acidogenesis, to the production of industrially relevant solvents, solventogenesis. In recent decades, mathematical models have been employed to elucidate the complex interlinked regulation and conditions that determine these two distinct metabolic states and govern the transition between them. In this review, we discuss these models with a focus on the mechanisms controlling intra- and extracellular changes between acidogenesis and solventogenesis. In particular, we critically evaluate underlying model assumptions and predictions in the light of current experimental knowledge. Towards this end, we briefly introduce key ideas and assumptions applied in the discussed modelling approaches, but forgo a comprehensive mathematical presentation. We distinguish between structural and dynamical models, which will be discussed in their chronological order to illustrate how new biological information facilitates the 'evolution' of mathematical models. Mathematical models and their analysis have significantly contributed to our knowledge of ABE fermentation and the underlying regulatory network which spans all levels of biological organization. However, the ties between the different levels of cellular regulation are not well understood. Furthermore, contradictory experimental and theoretical results challenge our current notion of ABE metabolic network structure. Thus, clostridial ABE fermentation still poses theoretical as well as experimental challenges which are best approached in close collaboration between modellers and experimentalists.
Minne, Elizabeth Portman; Semrud-Clikeman, Margaret
2012-11-01
The key features of Asperger Syndrome (AS) and high functioning autism (HFA) include marked and sustained impairment in social interactions. A multi-session, small group program was developed to increase social perception, based on the assumption that perceptual or interpretive problems underlie these social difficulties. Additionally, the group format espoused a play therapy orientation, and sociodramatic play was the primary therapeutic modality used. Qualitative analyses of the data resulted in an explanation of the key changes in social interactions that took place through the course of the intervention. Although each participant's experience in this group was unique, all children in this program demonstrated improvements in their social interactions, as they experienced development both emotionally and behaviorally. Findings suggest that, despite their rigid interests and behavior patterns, the social limitations of these children improved when provided with the necessary environmental resources.
Machado, Armando; Pata, Paulo
2005-02-01
Two theories of timing, scalar expectancy theory (SET) and learning-to-time (LeT), make substantially different assumptions about what animals learn in temporal tasks. In a test of these assumptions, pigeons learned two temporal discriminations. On Type 1 trials, they learned to choose a red key after a 1-sec signal and a green key after a 4-sec signal; on Type 2 trials, they learned to choose a blue key after a 4-sec signal and a yellow key after either an 8-sec signal (Group 8) or a 16-sec signal (Group 16). Then, the birds were exposed to signals 1 sec, 4 sec, and 16 sec in length and given a choice between novel key combinations (red or green vs. blue or yellow). The choice between the green key and the blue key was of particular significance because both keys were associated with the same 4-sec signal. Whereas SET predicted no effect of the test signal duration on choice, LeT predicted that preference for green would increase monotonically with the length of the signal but would do so faster for Group 8 than for Group 16. The results were consistent with LeT, but not with SET.
A proper metaphysics for cognitive performance.
Van Orden, Guy C; Moreno, Miguel A; Holden, John G
2003-01-01
The general failure to individuate component causes in cognitive performance suggests the need for an alternative metaphysics. The metaphysics of control hierarchy theory accommodates the fact of self-organization in nature and the possibility that intentional actions are self-organized. One key assumption is that interactions among processes dominate their intrinsic dynamics. Scaling relations in response time variability motivate this assumption in cognitive performance.
Ethical and legal challenges of vaccines and vaccination: Reflections.
Jesani, Amar; Johari, Veena
2017-01-01
Vaccines and vaccination have emerged as key medical scientific tools for prevention of certain diseases. Documentation of the history of vaccination shows that the initial popular resistance to universal vaccination was based on false assumptions and eventually gave way to acceptance of vaccines and trust in their ability to save lives. The successes of the global eradication of smallpox, and now of polio, have only strengthened the premier position occupied by vaccines in disease prevention. However, the success of vaccines and public trust in their ability to eradicate disease are now under challenge, as increasing numbers of people refuse vaccination, questioning the effectiveness of vaccines and the need to vaccinate.
Optimization of Designs for Nanotube-based Scanning Probes
NASA Technical Reports Server (NTRS)
Harik, V. M.; Gates, T. S.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Optimization of designs for nanotube-based scanning probes, which may be used for high-resolution characterization of nanostructured materials, is examined. Continuum models to analyze the nanotube deformations are proposed to help guide selection of the optimum probe. The limitations on the use of these models that must be accounted for before applying to any design problem are presented. These limitations stem from the underlying assumptions and the expected range of nanotube loading, end conditions, and geometry. Once the limitations are accounted for, the key model parameters along with the appropriate classification of nanotube structures may serve as a basis for the design optimization of nanotube-based probe tips.
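A hedged continuum-mechanics sketch treating the nanotube probe as an end-loaded Euler-Bernoulli cantilever with a hollow circular cross-section; all numerical values are assumed for illustration and are not design values, and the validity limits discussed in the paper would need to be checked before using such an estimate.

```python
import math

# Illustrative geometry and load for a single-walled nanotube treated as a hollow beam.
E = 1.0e12                           # Young's modulus, Pa (order of magnitude assumption)
d_outer, d_inner = 2.0e-9, 1.3e-9    # outer/inner diameters, metres
L = 50e-9                            # probe length, metres
F = 0.1e-9                           # lateral tip load, newtons

I = math.pi * (d_outer**4 - d_inner**4) / 64.0   # second moment of area of the annulus
deflection = F * L**3 / (3.0 * E * I)            # end-loaded cantilever tip deflection
print(f"tip deflection ~ {deflection * 1e9:.2f} nm")
```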
Naimi, Ashley I
2015-07-15
Epidemiologists are increasingly using natural effects for applied mediation analyses, yet 1 key identifying assumption is unintuitive and subject to some controversy. In this issue of the Journal, Jiang and VanderWeele (Am J Epidemiol. 2015;182(2):105-108) formalize the conditions under which the difference method can be used to estimate natural indirect effects. In this commentary, I discuss implications of the controversial "cross-worlds" independence assumption needed to identify natural effects. I argue that with a binary mediator, a simple modification of the authors' approach will provide bounds for natural direct and indirect effect estimates that better reflect the capacity of the available data to support empirical statements on the presence of mediated effects. I discuss complications encountered when odds ratios are used to decompose effects, as well as the implications of incorrectly assuming the absence of exposure-induced mediator-outcome confounders. I note that the former problem can be entirely resolved using collapsible measures of effect, such as risk ratios. In the Appendix, I use previous derivations for natural direct effect bounds on the risk difference scale to provide bounds on the odds ratio scale that accommodate 1) uncertainty due to the cross-world independence assumption and 2) uncertainty due to the cross-world independence assumption and the presence of exposure-induced mediator-outcome confounders. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
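A minimal sketch of the difference method for a continuous outcome, assuming a hypothetical file mediation.csv; it sidesteps the binary-mediator, odds-ratio and cross-world issues discussed in the commentary and is shown only to fix ideas.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data; column names are illustrative.
df = pd.read_csv("mediation.csv")   # columns: exposure, mediator, outcome, confounder

total = smf.ols("outcome ~ exposure + confounder", data=df).fit()
direct = smf.ols("outcome ~ exposure + mediator + confounder", data=df).fit()

total_effect = total.params["exposure"]
direct_effect = direct.params["exposure"]
indirect_effect = total_effect - direct_effect   # difference-method estimate

print(f"total {total_effect:.3f}, direct {direct_effect:.3f}, indirect {indirect_effect:.3f}")
```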
Goodman, Claire; Davies, Sue L.; Gordon, Adam L.; Meyer, Julienne; Dening, Tom; Gladman, John R.F.; Iliffe, Steve; Zubair, Maria; Bowman, Clive; Victor, Christina; Martin, Finbarr C.
2015-01-01
Objectives To explore what commissioners of care, regulators, providers, and care home residents in England identify as the key mechanisms or components of different service delivery models that support the provision of National Health Service (NHS) provision to independent care homes. Methods Qualitative, semistructured interviews with a purposive sample of people with direct experience of commissioning, providing, and regulating health care provision in care homes and care home residents. Data from interviews were augmented by a secondary analysis of previous interviews with care home residents on their personal experience of and priorities for access to health care. Analysis was framed by the assumptions of realist evaluation and drew on the constant comparative method to identify key themes about what is required to achieve quality health care provision to care homes and resident health. Results Participants identified 3 overlapping approaches to the provision of NHS that they believed supported access to health care for older people in care homes: (1) Investment in relational working that fostered continuity and shared learning between visiting NHS staff and care home staff, (2) the provision of age-appropriate clinical services, and (3) governance arrangements that used contractual and financial incentives to specify a minimum service that care homes should receive. Conclusion The 3 approaches, and how they were typified as working, provide a rich picture of the stakeholder perspectives and the underlying assumptions about how service delivery models should work with care homes. The findings inform how evidence on effective working in care homes will be interrogated to identify how different approaches, or specifically key elements of those approaches, achieve different health-related outcomes in different situations for residents and associated health and social care organizations. PMID:25687930
Moore, Julia L; Remais, Justin V
2014-03-01
Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though simple and easy to use, structural and parametric issues can influence the outputs of such models, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when using linear versus non-linear developmental functions to model the emergence time in a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
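A minimal sketch of the daily average method with a lower developmental threshold and a simple horizontal upper cutoff; the thresholds and temperatures are hypothetical, and other calculation methods (single sine, double triangle) would give different accumulations, as the review discusses.

```python
import numpy as np

def degree_days_daily_average(t_min, t_max, t_base, t_upper=None):
    """Daily-average method: DD = max(0, mean(Tmin, Tmax) - Tbase), optionally capped above."""
    t_mean = (np.asarray(t_min, dtype=float) + np.asarray(t_max, dtype=float)) / 2.0
    if t_upper is not None:
        t_mean = np.minimum(t_mean, t_upper)   # simple horizontal cutoff at the upper threshold
    return np.maximum(0.0, t_mean - t_base)

# Hypothetical week of daily temperatures (degrees C) and thresholds for an illustrative organism.
t_min = np.array([ 8, 10, 12,  9,  7, 11, 13])
t_max = np.array([18, 22, 25, 20, 16, 24, 27])
dd = degree_days_daily_average(t_min, t_max, t_base=10.0, t_upper=30.0)
print("accumulated degree-days:", dd.sum())
```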
7 CFR 1957.2 - Transfer with assumptions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Rural Housing Trust 1987-1, and who are eligible for an FmHA or its successor agency under Public Law 103-354 § 502 loan will be given the same priority by FmHA or its successor agency under Public Law.... FmHA or its successor agency under Public Law 103-354 regulations governing transfers and assumptions...
Austin, Peter C
2018-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
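A hedged sketch of a power calculation for a proportional-hazards test, assuming a piecewise-constant hazard ratio as the violation and using lifelines' score-based test; the simulation design and parameter values are illustrative and not the authors' exact setup.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(0)
n_sims, n, rejections = 200, 500, 0

for _ in range(n_sims):
    x = rng.integers(0, 2, n)                                    # binary covariate
    # PH violation: hazard ratio 2 before t = 1, hazard ratio 0.8 afterwards.
    t_early = rng.exponential(1.0 / np.exp(np.log(2.0) * x))
    t_late = 1.0 + rng.exponential(1.0 / np.exp(np.log(0.8) * x))
    time = np.where(t_early < 1.0, t_early, t_late)
    df = pd.DataFrame({"time": time, "event": np.ones(n, dtype=int), "x": x})  # no censoring

    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    res = proportional_hazard_test(cph, df, time_transform="rank")
    if np.min(res.p_value) < 0.05:
        rejections += 1

print("estimated power:", rejections / n_sims)
```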
Austin, Peter C.
2017-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest. PMID:29321694
Artificial Intelligence: Underlying Assumptions and Basic Objectives.
ERIC Educational Resources Information Center
Cercone, Nick; McCalla, Gordon
1984-01-01
Presents perspectives on methodological assumptions underlying research efforts in artificial intelligence (AI) and charts activities, motivations, methods, and current status of research in each of the major AI subareas: natural language understanding; computer vision; expert systems; search, problem solving, planning; theorem proving and logic…
Automated analysis in generic groups
NASA Astrophysics Data System (ADS)
Fagerholm, Edvard
This thesis studies automated methods for analyzing hardness assumptions in generic group models, following ideas of symbolic cryptography. We define a broad class of generic and symbolic group models for different settings, symmetric or asymmetric (leveled) k-linear groups, and prove "computational soundness" theorems for the symbolic models. Based on this result, we formulate a master theorem that relates the hardness of an assumption to solving problems in polynomial algebra. We systematically analyze these problems, identifying different classes of assumptions, and obtain decidability and undecidability results. Then, we develop automated procedures for verifying the conditions of our master theorems, and thus the validity of hardness assumptions in generic group models. The concrete outcome is an automated tool, the Generic Group Analyzer, which takes as input the statement of an assumption, and outputs either a proof of its generic hardness or an algebraic attack against the assumption. Structure-preserving signatures are signature schemes defined over bilinear groups in which messages, public keys and signatures are group elements, and the verification algorithm consists of evaluating "pairing-product equations". Recent work on structure-preserving signatures studies optimality of these schemes in terms of the number of group elements needed in the verification key and the signature, and the number of pairing-product equations in the verification algorithm. While the size of keys and signatures is crucial for many applications, another aspect of performance is the time it takes to verify a signature. The most expensive operation during verification is the computation of pairings. However, the concrete number of pairings is not captured by the number of pairing-product equations considered in earlier work. We consider the question of what is the minimal number of pairing computations needed to verify structure-preserving signatures. We build an automated tool to search for structure-preserving signatures matching a template. Through exhaustive search we conjecture lower bounds for the number of pairings required in the Type II setting and prove our conjecture to be true. Finally, our tool exhibits examples of structure-preserving signatures matching the lower bounds, which proves tightness of our bounds and improves on previously known structure-preserving signature schemes.
Missing data in trial-based cost-effectiveness analysis: An incomplete journey.
Leurent, Baptiste; Gomes, Manuel; Carpenter, James R
2018-06-01
Cost-effectiveness analyses (CEA) conducted alongside randomised trials provide key evidence for informing healthcare decision making, but missing data pose substantive challenges. Recently, there have been a number of developments in methods and guidelines addressing missing data in trials. However, it is unclear whether these developments have permeated CEA practice. This paper critically reviews the extent of and methods used to address missing data in recently published trial-based CEA. Issues of the Health Technology Assessment journal from 2013 to 2015 were searched. Fifty-two eligible studies were identified. Missing data were very common; the median proportion of trial participants with complete cost-effectiveness data was 63% (interquartile range: 47%-81%). The most common approach for the primary analysis was to restrict analysis to those with complete data (43%), followed by multiple imputation (30%). Half of the studies conducted some sort of sensitivity analyses, but only 2 (4%) considered possible departures from the missing-at-random assumption. Further improvements are needed to address missing data in cost-effectiveness analyses conducted alongside randomised trials. These should focus on limiting the extent of missing data, choosing an appropriate method for the primary analysis that is valid under contextually plausible assumptions, and conducting sensitivity analyses to departures from the missing-at-random assumption. © 2018 The Authors Health Economics published by John Wiley & Sons Ltd.
Spin-diffusions and diffusive molecular dynamics
NASA Astrophysics Data System (ADS)
Farmer, Brittan; Luskin, Mitchell; Plecháč, Petr; Simpson, Gideon
2017-12-01
Metastable configurations in condensed matter typically fluctuate about local energy minima at the femtosecond time scale before transitioning between local minima after nanoseconds or microseconds. This vast scale separation limits the applicability of classical molecular dynamics (MD) methods and has spurred the development of a host of approximate algorithms. One recently proposed method is diffusive MD, which aims at integrating a system of ordinary differential equations describing the likelihood of occupancy by one of two species, in the case of a binary alloy, while quasistatically evolving the locations of the atoms. While diffusive MD has shown itself to be efficient and provide agreement with observations, it is fundamentally a model, with unclear connections to classical MD. In this work, we formulate a spin-diffusion stochastic process and show how it can be connected to diffusive MD. The spin-diffusion model couples a classical overdamped Langevin equation to a kinetic Monte Carlo model for exchange amongst the species of a binary alloy. Under suitable assumptions and approximations, spin-diffusion can be shown to lead to diffusive MD type models. The key assumptions and approximations include a well-defined time scale separation, a choice of spin-exchange rates, a low temperature approximation, and a mean field type approximation. We derive several models from different assumptions and show their relationship to diffusive MD. Differences and similarities amongst the models are explored in a simple test problem.
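A toy one-dimensional sketch of the coupling described above: overdamped Langevin (Euler-Maruyama) dynamics for atomic positions combined with occasional Metropolis-style exchanges of species labels; the harmonic pair potential and all parameters are assumptions made only to keep the illustration self-contained, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, kT, gamma, dt = 20, 0.1, 1.0, 1e-3                  # illustrative reduced-unit parameters
x = np.arange(n, dtype=float) + 0.05 * rng.standard_normal(n)   # 1-D chain of atoms
spin = rng.integers(0, 2, n)                           # species label for a binary alloy

def energy(x, spin):
    # Toy potential: harmonic bonds whose rest length depends on the species pair.
    rest = 1.0 + 0.1 * (spin[:-1] + spin[1:])
    return 0.5 * np.sum((x[1:] - x[:-1] - rest) ** 2)

def forces(x, spin):
    rest = 1.0 + 0.1 * (spin[:-1] + spin[1:])
    bond = x[1:] - x[:-1] - rest
    f = np.zeros_like(x)
    f[:-1] += bond
    f[1:] -= bond
    return f

for step in range(10_000):
    # Overdamped Langevin (Euler-Maruyama) update of the positions.
    x += dt * forces(x, spin) / gamma + np.sqrt(2 * kT * dt / gamma) * rng.standard_normal(n)
    # Occasional Metropolis exchange of neighbouring species labels (kinetic Monte Carlo stand-in).
    if step % 100 == 0:
        i = rng.integers(n - 1)
        trial = spin.copy()
        trial[i], trial[i + 1] = trial[i + 1], trial[i]
        if rng.random() < np.exp(-(energy(x, trial) - energy(x, spin)) / kT):
            spin = trial

print("final species arrangement:", spin)
```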
SIMPL Systems, or: Can We Design Cryptographic Hardware without Secret Key Information?
NASA Astrophysics Data System (ADS)
Rührmair, Ulrich
This paper discusses a new cryptographic primitive termed SIMPL system. Roughly speaking, a SIMPL system is a special type of Physical Unclonable Function (PUF) which possesses a binary description that allows its (slow) public simulation and prediction. Besides this public key like functionality, SIMPL systems have another advantage: No secret information is, or needs to be, contained in SIMPL systems in order to enable cryptographic protocols - neither in the form of a standard binary key, nor as secret information hidden in random, analog features, as it is the case for PUFs. The cryptographic security of SIMPLs instead rests on (i) a physical assumption on their unclonability, and (ii) a computational assumption regarding the complexity of simulating their output. This novel property makes SIMPL systems potentially immune against many known hardware and software attacks, including malware, side channel, invasive, or modeling attacks.
Structure induction in diagnostic causal reasoning.
Meder, Björn; Mayrhofer, Ralf; Waldmann, Michael R
2014-07-01
Our research examines the normative and descriptive adequacy of alternative computational models of diagnostic reasoning from single effects to single causes. Many theories of diagnostic reasoning are based on the normative assumption that inferences from an effect to its cause should reflect solely the empirically observed conditional probability of cause given effect. We argue against this assumption, as it neglects alternative causal structures that may have generated the sample data. Our structure induction model of diagnostic reasoning takes into account the uncertainty regarding the underlying causal structure. A key prediction of the model is that diagnostic judgments should not only reflect the empirical probability of cause given effect but should also depend on the reasoner's beliefs about the existence and strength of the link between cause and effect. We confirmed this prediction in 2 studies and showed that our theory better accounts for human judgments than alternative theories of diagnostic reasoning. Overall, our findings support the view that in diagnostic reasoning people go "beyond the information given" and use the available data to make inferences on the (unobserved) causal rather than on the (observed) data level. (c) 2014 APA, all rights reserved.
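A schematic illustration, in the spirit of structure induction, of how averaging over 'link' versus 'no link' causal structures can pull a diagnostic judgment away from the raw empirical probability; the counts, the Beta(1,1) priors and the two-structure space are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np
from scipy.special import betaln

# Hypothetical 2x2 contingency counts: cause & effect, cause only, effect only, neither.
n_ce, n_c_only, n_e_only, n_none = 12, 4, 3, 21
n_c, n_nc = n_ce + n_c_only, n_e_only + n_none
N = n_c + n_nc

p_emp = n_ce / (n_ce + n_e_only)   # purely empirical P(cause | effect)

def log_marg(k, n):
    """Log marginal likelihood of k successes in n Bernoulli trials under a Beta(1,1) prior."""
    return betaln(k + 1, n - k + 1) - betaln(1, 1)

# S1: effect depends on cause (separate effect rates); S0: cause and effect independent.
logml_s1 = log_marg(n_c, N) + log_marg(n_ce, n_c) + log_marg(n_e_only, n_nc)
logml_s0 = log_marg(n_c, N) + log_marg(n_ce + n_e_only, N)
w1 = 1.0 / (1.0 + np.exp(logml_s0 - logml_s1))   # posterior weight on the 'link' structure

# Diagnostic probability under each structure, using posterior-mean parameters.
p_c = (n_c + 1) / (N + 2)
p_e_c = (n_ce + 1) / (n_c + 2)
p_e_nc = (n_e_only + 1) / (n_nc + 2)
p_s1 = p_c * p_e_c / (p_c * p_e_c + (1 - p_c) * p_e_nc)
p_s0 = p_c   # under independence the effect is uninformative about the cause

p_avg = w1 * p_s1 + (1 - w1) * p_s0
print(f"empirical {p_emp:.2f}, structure-averaged {p_avg:.2f} (weight on link: {w1:.2f})")
```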
Authors' response: the primacy of conscious decision making.
Shanks, David R; Newell, Ben R
2014-02-01
The target article sought to question the common belief that our decisions are often biased by unconscious influences. While many commentators offer additional support for this perspective, others question our theoretical assumptions, empirical evaluations, and methodological criteria. We rebut in particular the starting assumption that all decision making is unconscious, and that the onus should be on researchers to prove conscious influences. Further evidence is evaluated in relation to the core topics we reviewed (multiple-cue judgment, deliberation without attention, and decisions under uncertainty), as well as priming effects. We reiterate a key conclusion from the target article, namely, that it now seems to be generally accepted that awareness should be operationally defined as reportable knowledge, and that such knowledge can only be evaluated by careful and thorough probing. We call for future research to pay heed to the different ways in which awareness can intervene in decision making (as identified in our lens model analysis) and to employ suitable methodology in the assessment of awareness, including the requirements that awareness assessment must be reliable, relevant, immediate, and sensitive.
Co-Dependency: An Examination of Underlying Assumptions.
ERIC Educational Resources Information Center
Myer, Rick A.; And Others
1991-01-01
Discusses need for careful examination of codependency as diagnostic category. Critically examines assumptions that codependency is disease, addiction, or predetermined by the environment. Discusses implications of assumptions. Offers recommendations for mental health counselors focusing on need for systematic research, redirection of efforts to…
House, Thomas; Hall, Ian; Danon, Leon; Keeling, Matt J
2010-02-14
In the event of a release of a pathogen such as smallpox, which is human-to-human transmissible and has high associated mortality, a key question is how best to deploy containment and control strategies. Given the general uncertainty surrounding this issue, mathematical modelling has played an important role in informing the likely optimal response, in particular defining the conditions under which mass-vaccination would be appropriate. In this paper, we consider two key questions currently unanswered in the literature: firstly, what is the optimal spatial scale for intervention; and secondly, how sensitive are results to the modelling assumptions made about the pattern of human contacts? Here we develop a novel mathematical model for smallpox that incorporates both information on individual contact structure (which is important if the effects of contact tracing are to be captured accurately) and large-scale patterns of movement across a range of spatial scales in Great Britain. Analysis of this model confirms previous work suggesting that a locally targeted 'ring' vaccination strategy is optimal, and that this conclusion is actually quite robust for different socio-demographic and epidemiological assumptions. Our method allows for intuitive understanding of the reasons why national mass vaccination is typically predicted to be suboptimal. As such, we present a general framework for fast calculation of expected outcomes during the attempted control of diverse emerging infections; this is particularly important given that parameters would need to be interactively estimated and modelled in any release scenario.
Speelman, Craig P.; McGann, Marek
2013-01-01
In this paper we voice concerns about the uncritical manner in which the mean is often used as a summary statistic in psychological research. We identify a number of implicit assumptions underlying the use of the mean and argue that the fragility of these assumptions should be more carefully considered. We examine some of the ways in which the potential violation of these assumptions can lead us into significant theoretical and methodological error. Illustrations of alternative models of research already extant within Psychology are used to explore methods of research less mean-dependent and suggest that a critical assessment of the assumptions underlying its use in research play a more explicit role in the process of study design and review. PMID:23888147
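A minimal sketch showing how the mean can misrepresent skewed data such as response times; the log-normal parameters are arbitrary, and the median and trimmed mean are shown only as familiar alternatives.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Skewed response-time-like data: most observations near 0.4 s, with a long right tail.
rt = rng.lognormal(mean=-0.9, sigma=0.6, size=1_000)

print(f"mean         {rt.mean():.3f}")
print(f"median       {np.median(rt):.3f}")
print(f"20% trimmed  {stats.trim_mean(rt, 0.2):.3f}")
# For log-normal data the mean sits well above the bulk of observations,
# so summarising by the mean alone can misrepresent the 'typical' response.
```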
Three-Dimensional Data Registration Based on Human Perception
2006-01-01
…sets. The new algorithm was tested extensively on simulated sensor images in several scenarios key to successful application to autonomous ground… that humans perceive visual images, an assumption of stationarity can be applied to the data sets… to compensate for any new data… proximity to each other that an assumption of, or preference for, stationarity would require corresponding data in the data sets that is not new
Why is it Doing That? - Assumptions about the FMS
NASA Technical Reports Server (NTRS)
Feary, Michael; Immanuel, Barshi; Null, Cynthia H. (Technical Monitor)
1998-01-01
In the glass cockpit, it's not uncommon to hear exclamations such as "why is it doing that?". Sometimes pilots ask "what were they thinking when they set it this way?" or "why doesn't it tell me what it's going to do next?". Pilots may hold a conceptual model of the automation that is the result of fleet lore, which may or may not be consistent with what the engineers had in mind. But what did the engineers have in mind? In this study, we present some of the underlying assumptions surrounding the glass cockpit. Engineers and designers make assumptions about the nature of the flight task; at the other end, instructor and line pilots make assumptions about how the automation works and how it was intended to be used. These underlying assumptions are seldom recognized or acknowledged. This study is an attempt to explicitly articulate such assumptions to better inform design and training developments. This work is part of a larger project to support training strategies for automation.
Teaching "Instant Experience" with Graphical Model Validation Techniques
ERIC Educational Resources Information Center
Ekstrøm, Claus Thorn
2014-01-01
Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
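A minimal sketch of the kind of graphical checks discussed: residual-versus-fitted and normal QQ plots for a simulated linear model in which the assumptions hold by construction; the plotting choices are illustrative only.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=200)   # data satisfying the linear-model assumptions

model = sm.OLS(y, sm.add_constant(x)).fit()
resid, fitted = model.resid, model.fittedvalues

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
axes[0].scatter(fitted, resid, s=10)
axes[0].axhline(0, color="grey")
axes[0].set(xlabel="fitted values", ylabel="residuals", title="linearity / constant variance")
stats.probplot(resid, dist="norm", plot=axes[1])       # normal QQ plot of residuals
plt.tight_layout()
plt.show()
```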
Hayes, Brett K; Heit, Evan
2018-05-01
Inductive reasoning entails using existing knowledge to make predictions about novel cases. The first part of this review summarizes key inductive phenomena and critically evaluates theories of induction. We highlight recent theoretical advances, with a special emphasis on the structured statistical approach, the importance of sampling assumptions in Bayesian models, and connectionist modeling. A number of new research directions in this field are identified including comparisons of inductive and deductive reasoning, the identification of common core processes in induction and memory tasks and induction involving category uncertainty. The implications of induction research for areas as diverse as complex decision-making and fear generalization are discussed. This article is categorized under: Psychology > Reasoning and Decision Making; Psychology > Learning. © 2017 Wiley Periodicals, Inc.
Uniqueness and characterization theorems for generalized entropies
NASA Astrophysics Data System (ADS)
Enciso, Alberto; Tempesta, Piergiulio
2017-12-01
The requirement that an entropy function be composable is key: it means that the entropy of a compound system can be calculated in terms of the entropy of its independent components. We prove that, under mild regularity assumptions, the only composable generalized entropy in trace form is the Tsallis one-parameter family (which contains Boltzmann-Gibbs as a particular case). This result leads to the use of generalized entropies that are not of trace form, such as Rényi’s entropy, in the study of complex systems. In this direction, we also present a characterization theorem for a large class of composable non-trace-form entropy functions with features akin to those of Rényi’s entropy.
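For readers unfamiliar with the terms, the composability property and the Tsallis family referred to above can be written as follows (standard textbook forms, not quoted from the paper):

```latex
% Composability: the entropy of a compound system of independent parts depends
% only on the entropies of the parts. The Tsallis one-parameter family in trace
% form satisfies this with a pseudo-additive composition law.
\[
  S(A \cup B) = \Phi\bigl(S(A), S(B)\bigr),
\qquad
  S_q[p] = \frac{1 - \sum_i p_i^{\,q}}{q - 1},
\qquad
  S_q(A \cup B) = S_q(A) + S_q(B) + (1 - q)\, S_q(A)\, S_q(B),
\]
% with the Boltzmann--Gibbs entropy $S = -\sum_i p_i \ln p_i$ recovered in the
% limit $q \to 1$.
```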
Provably-Secure (Chinese Government) SM2 and Simplified SM2 Key Exchange Protocols
Nam, Junghyun; Kim, Moonseong
2014-01-01
We revisit the SM2 protocol, which is widely used in Chinese commercial applications and by Chinese government agencies. Although it is by now standard practice for protocol designers to provide security proofs in widely accepted security models in order to assure protocol implementers of their security properties, the SM2 protocol does not have a proof of security. In this paper, we prove the security of the SM2 protocol in the widely accepted indistinguishability-based Bellare-Rogaway model under the elliptic curve discrete logarithm problem (ECDLP) assumption. We also present a simplified and more efficient version of the SM2 protocol with an accompanying security proof. PMID:25276863
Losses from effluent taxes and quotas under uncertainty
Watson, W.D.; Ridker, R.G.
1984-01-01
Recent theoretical papers by Adar and Griffin (J. Environ. Econ. Manag. 3, 178-188 (1976)), Fishelson (J. Environ. Econ. Manag. 3, 189-197 (1976)), and Weitzman (Rev. Econ. Studies 41, 477-491 (1974)) show that different expected social losses arise from using effluent taxes and quotas as alternative control instruments when marginal control costs are uncertain. Key assumptions in these analyses are linear marginal cost and benefit functions and an additive error for the marginal cost function (to reflect uncertainty). In this paper, empirically derived nonlinear functions and more realistic multiplicative error terms are used to estimate expected control and damage costs and to identify (empirically) the mix of control instruments that minimizes expected losses. © 1984.
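A stylized numerical sketch of the underlying prices-versus-quantities logic (toy linear functions and an additive cost shock chosen for illustration; the paper itself uses empirically derived nonlinear functions and multiplicative errors) is given below:

```python
# Toy Monte Carlo comparison of a tax and a quota set ex ante when marginal
# abatement cost has an additive shock, with linear marginal benefit/cost.
import numpy as np

rng = np.random.default_rng(42)
b0, B = 100.0, 2.0        # marginal benefit of abatement: MB(q) = b0 - B*q
c0, C = 10.0, 1.0         # marginal cost of abatement:    MC(q) = c0 + C*q + eps
sigma = 15.0              # std. dev. of the additive cost shock
eps = rng.normal(0.0, sigma, size=100_000)

q_bar = (b0 - c0) / (B + C)          # quota set at the ex-ante optimum
tax = b0 - B * q_bar                 # tax set at MB evaluated at that optimum

q_star = (b0 - c0 - eps) / (B + C)   # ex-post optimal abatement
q_tax = (tax - c0 - eps) / C         # firms abate until MC equals the tax

# Deadweight loss of abating q instead of q*: 0.5*(B + C)*(q - q*)^2
dwl_quota = 0.5 * (B + C) * (q_bar - q_star) ** 2
dwl_tax = 0.5 * (B + C) * (q_tax - q_star) ** 2

print(f"E[loss | quota] ~ {dwl_quota.mean():.2f}")
print(f"E[loss | tax]   ~ {dwl_tax.mean():.2f}")
# Analytically, E[loss|tax] - E[loss|quota] = sigma^2 * (B - C) / (2 * C^2),
# so with B > C here the quota yields the smaller expected loss.
```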
Practical quantum digital signature
NASA Astrophysics Data System (ADS)
Yin, Hua-Lei; Fu, Yao; Chen, Zeng-Bing
2016-03-01
Guaranteeing nonrepudiation, unforgeability, and transferability of a signature is one of the most vital safeguards in today's e-commerce era. Based on fundamental laws of quantum physics, quantum digital signature (QDS) aims to provide information-theoretic security for this cryptographic task. However, to date, the previously proposed QDS protocols have been impractical due to various challenging problems and, most importantly, the requirement of authenticated (secure) quantum channels between participants. Here, we present the first quantum digital signature protocol that removes the assumption of authenticated quantum channels while remaining secure against collective attacks. Moreover, our QDS protocol can be practically implemented over more than 100 km under current mature technology as used in quantum key distribution.
Carpenter, John; Dickinson, Claire
2016-01-01
A key underlying assumption of interprofessional education (IPE) is that if the professions are brought together they have the opportunity to learn about each other and dispel the negative stereotypes which are presumed to hamper interprofessional collaboration in practice. This article explores the application of contact theory in IPE with reference to eight evaluation studies (1995-2012) which adopted this theoretical perspective. It proposes that educators should pay explicit attention to an intergroup perspective in designing IPE programmes and specifically to the "contact variables" identified by social psychologists studying intergroup encounters. This would increase the chances of the planned contact having a positive effect on attitude change.
Data mining of tree-based models to analyze freeway accident frequency.
Chang, Li-Yen; Chen, Wen-Chieh
2005-01-01
Statistical models, such as Poisson or negative binomial regression models, have been employed to analyze vehicle accident frequency for many years. However, these models have their own model assumptions and pre-defined underlying relationship between dependent and independent variables. If these assumptions are violated, the model could lead to erroneous estimation of accident likelihood. Classification and Regression Tree (CART), one of the most widely applied data mining techniques, has been commonly employed in business administration, industry, and engineering. CART does not require any pre-defined underlying relationship between target (dependent) variable and predictors (independent variables) and has been shown to be a powerful tool, particularly for dealing with prediction and classification problems. This study collected the 2001-2002 accident data of National Freeway 1 in Taiwan. A CART model and a negative binomial regression model were developed to establish the empirical relationship between traffic accidents and highway geometric variables, traffic characteristics, and environmental factors. The CART findings indicated that the average daily traffic volume and precipitation variables were the key determinants for freeway accident frequencies. By comparing the prediction performance between the CART and the negative binomial regression models, this study demonstrates that CART is a good alternative method for analyzing freeway accident frequencies.
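The contrast between the two model classes can be sketched as follows (hypothetical simulated data and variable names; the study's Taiwan freeway data are not reproduced here):

```python
# Negative binomial regression assumes a pre-specified (log-linear) mean
# structure; a regression tree (CART) learns splits without one.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
n = 500
X = pd.DataFrame({
    "adt": rng.uniform(20, 120, n),      # average daily traffic (thousands)
    "precip": rng.uniform(0, 30, n),     # precipitation (mm)
    "curvature": rng.uniform(0, 1, n),
})
mu = np.exp(-1.0 + 0.02 * X["adt"] + 0.03 * X["precip"])
y = rng.poisson(mu)                      # simulated accident counts

# Negative binomial regression with a log link.
nb = sm.GLM(y, sm.add_constant(X), family=sm.families.NegativeBinomial()).fit()
print(nb.params)

# CART: no pre-defined relationship between predictors and the target.
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=30).fit(X, y)
print(dict(zip(X.columns, tree.feature_importances_)))
```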
Mallinckrodt, C H; Lin, Q; Molenberghs, M
2013-01-01
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework of using a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
Key Factors that Influence Recruiting Young Chinese Students
ERIC Educational Resources Information Center
Wang, Zhenmin
2007-01-01
The discussion in this paper is based on the assumption that international education is equated to recruiting and educating international students, even though its true concept goes far beyond this narrow understanding. The purpose of this research is to look at the key factors that influence recruiting young Chinese students, and make sure all…
Haegele, Justin A; Hodge, Samuel Russell
2015-10-01
There are basic philosophical and paradigmatic assumptions that guide scholarly research endeavors, including the methods used and the types of questions asked. Through this article, kinesiology faculty and students with interests in adapted physical activity are encouraged to understand the basic assumptions of applied behavior analysis (ABA) methodology for conducting, analyzing, and presenting research of high quality in this paradigm. The purposes of this viewpoint paper are to present information fundamental to understanding the assumptions undergirding research methodology in ABA, describe key aspects of single-subject research designs, and discuss common research designs and data-analysis strategies used in single-subject studies.
A Comparative Study on Emerging Electric Vehicle Technology Assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, Jonathan; Khowailed, Gannate; Blackburn, Julia
2011-03-01
Numerous organizations have published reports in recent years that investigate the ever changing world of electric vehicle (EV) technologies and their potential effects on society. Specifically, projections have been made on greenhouse gas (GHG) emissions associated with these vehicles and how they compare to conventional vehicles or hybrid electric vehicles (HEVs). Similar projections have been made on the volumes of oil that these vehicles can displace by consuming large amounts of grid electricity instead of petroleum-based fuels. Finally, the projected rate that these new vehicle fleets will enter the market varies significantly among organizations. New ideas, technologies, and possibilities are introduced often, and projected values are likely to be refined as industry announcements continue to be made. As a result, over time, a multitude of projections for GHG emissions, oil displacement, and market penetration associated with various EV technologies has resulted in a wide range of possible future outcomes. This leaves the reader with two key questions: (1) Why does such a collective range in projected values exist in these reports? (2) What assumptions have the greatest impact on the outcomes presented in these reports? Since it is impractical for an average reader to review and interpret all the various vehicle technology reports published to date, Sentech Inc. and the Oak Ridge National Laboratory have conducted a comparative study to make these interpretations. The primary objective of this comparative study is to present a snapshot of all major projections made on GHG emissions, oil displacement, or market penetration rates of EV technologies. From the extensive data found in relevant publications, the key assumptions that drive each report's analysis are identified and 'apples-to-apples' comparisons between all major report conclusions are attempted. The general approach that was taken in this comparative study comprises six primary steps: (1) Search Relevant Literature - An extensive search of recent analyses that address the environmental impacts, market penetration rates, and oil displacement potential of various EV technologies was conducted; (2) Consolidate Studies - Upon completion of the literature search, a list of analyses that have sufficient data for comparison and that should be included in the study was compiled; (3) Identify Key Assumptions - Disparity in conclusions very likely originates from disparity in simple assumptions. In order to compare 'apples-to-apples,' key assumptions were identified in each study to provide the basis for comparing analyses; (4) Extract Information - Each selected report was reviewed, and information on key assumptions and data points was extracted; (5) Overlay Data Points - Visual representations of the comprehensive conclusions were prepared to identify general trends and outliers; and (6) Draw Final Conclusions - Once all comparisons were made to the greatest possible extent, the final conclusions were drawn on what major factors lead to the variation in results among studies.
Can organizations benefit from worksite health promotion?
Leviton, L C
1989-01-01
A decision-analytic model was developed to project the future effects of selected worksite health promotion activities on employees' likelihood of chronic disease and injury and on employer costs due to illness. The model employed a conservative set of assumptions and a limited five-year time frame. Under these assumptions, hypertension control and seat belt campaigns prevent a substantial amount of illness, injury, and death. Sensitivity analysis indicates that these two programs pay for themselves and under some conditions show a modest savings to the employer. Under some conditions, smoking cessation programs pay for themselves, preventing a modest amount of illness and death. Cholesterol reduction by behavioral means does not pay for itself under these assumptions. These findings imply priorities in prevention for employer and employee alike. PMID:2499556
Mesa-Frias, Marco; Chalabi, Zaid; Foss, Anna M
2013-09-01
Health impact assessment (HIA) is often used to determine ex ante the health impact of an environmental policy or an environmental intervention. Underpinning any HIA is the framing assumption, which defines the causal pathways mapping environmental exposures to health outcomes. The sensitivity of the HIA to the framing assumptions is often ignored. A novel method based on fuzzy cognitive map (FCM) is developed to quantify the framing assumptions in the assessment stage of a HIA, and is then applied to a housing intervention (tightening insulation) as a case-study. Framing assumptions of the case-study were identified through a literature search of Ovid Medline (1948-2011). The FCM approach was used to identify the key variables that have the most influence in a HIA. Changes in air-tightness, ventilation, indoor air quality and mould/humidity have been identified as having the most influence on health. The FCM approach is widely applicable and can be used to inform the formulation of the framing assumptions in any quantitative HIA of environmental interventions. We argue that it is necessary to explore and quantify framing assumptions prior to conducting a detailed quantitative HIA during the assessment stage. Copyright © 2013 Elsevier Ltd. All rights reserved.
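For orientation, a fuzzy cognitive map is typically iterated as a signed, weighted graph of concepts passed through a squashing function. The sketch below uses illustrative concept names and weights suggested by the case-study, not the map actually elicited in the paper:

```python
# Minimal FCM sketch: concepts take values in (0, 1) and are updated through a
# signed weight matrix and a sigmoid until the map settles.
import numpy as np

concepts = ["insulation", "air_tightness", "ventilation",
            "indoor_air_quality", "mould_humidity", "health"]
W = np.zeros((6, 6))          # W[i, j]: influence of concept i on concept j
W[0, 1] = 0.8                 # more insulation -> more air-tightness
W[1, 2] = -0.6                # more air-tightness -> less ventilation
W[2, 3] = 0.7                 # more ventilation -> better indoor air quality
W[1, 4] = 0.5                 # more air-tightness -> more mould/humidity
W[3, 5] = 0.8                 # better indoor air quality -> better health
W[4, 5] = -0.7                # more mould/humidity -> worse health

def squash(x):
    return 1.0 / (1.0 + np.exp(-x))   # keep activations in (0, 1)

state = np.array([1.0, 0.2, 0.5, 0.5, 0.2, 0.5])   # intervention: insulation "on"
for _ in range(50):
    state = squash(state @ W + state)  # simple additive FCM update rule
print(dict(zip(concepts, np.round(state, 2))))
```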
Use of Climate Information for Decision-Making and Impacts Research: State of Our Understanding
2016-03-01
SUMMARY: Much of human society and its infrastructure has been designed and built on a key assumption: that future climate conditions at any given … experienced in the past. This assumption affects infrastructure design and maintenance, emergency response management, and long-term investment and planning… our scientific understanding of the climate system in a manner that incorporates user needs into the design of scientific experiments, and that…
Novel Discretization Schemes for the Numerical Simulation of Membrane Dynamics
2012-09-13
Experimental data therefore plays a key role in validation. A wide variety of methods for building a simulation that meets the listed requirements are… Despite the intrinsic nonlinearity of true membranes, simplifying assumptions may be appropriate for some applications. Based on these possible assumptions… particles determines the kinetic energy of the system. Mass lumping at the particles is intrinsic (the consistent mass treatment of FEM is not an…
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously-proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
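In standard Bayesian decision-theoretic notation (our paraphrase, not the authors' exact formulation), the rule being analyzed is:

```latex
% Decide the class D_i that maximizes the expected utility of the decision,
% given observed data x and utilities U_{ij} for deciding D_i when H_j is true:
\[
  D(x) = \arg\max_{i} \; \sum_{j=1}^{3} U_{ij}\, P(H_j \mid x).
\]
% The "equal error utility" assumption sets U_{ij} equal for all incorrect
% decisions i \neq j under the same true hypothesis H_j, which collapses the
% general three-class problem to a lower-dimensional, tractable one.
```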
Dark energy and key physical parameters of clusters of galaxies
NASA Astrophysics Data System (ADS)
Bisnovatyi-Kogan, G. S.; Chernin, A. D.
2012-04-01
We study the physics of clusters of galaxies embedded in the cosmic dark energy background. Under the assumption that dark energy is described by the cosmological constant, we show that the dynamical effects of dark energy are strong in clusters like the Virgo cluster. Specifically, the key physical parameters of the dark matter halos in clusters are determined by dark energy: (1) the halo cut-off radius is practically, if not exactly, equal to the zero-gravity radius at which the dark matter gravity is balanced by the dark energy antigravity; (2) the halo averaged density is equal to twice the density of dark energy; (3) the halo edge (cut-off) density is the dark energy density times a numerical factor of order unity that depends slightly on the halo profile. The cluster gravitational potential well in which the particles of the dark halo (as well as galaxies and intracluster plasma) move is strongly affected by dark energy: the maximum of the potential is located at the zero-gravity radius of the cluster.
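The first two parameters quoted follow from a short calculation, sketched here under the usual assumption that the Λ-term contributes a repulsive acceleration (8πG/3)ρ_Λ r:

```latex
% Balancing Newtonian attraction against the dark energy antigravity gives the
% zero-gravity radius, and the mean density inside it is twice rho_Lambda:
\[
  \frac{G M}{r^2} = \frac{8\pi G}{3}\,\rho_\Lambda\, r
  \;\;\Longrightarrow\;\;
  R_{\mathrm{ZG}} = \left(\frac{3M}{8\pi \rho_\Lambda}\right)^{1/3},
\qquad
  \langle\rho\rangle_{R_{\mathrm{ZG}}}
  = \frac{M}{\tfrac{4}{3}\pi R_{\mathrm{ZG}}^{3}}
  = 2\,\rho_\Lambda .
\]
```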
Narrative Aversion: Challenges for the Illness Narrative Advocate.
Behrendt, Kathy
2017-02-01
Engaging in self-narrative is often touted as a powerful antidote to the bad effects of illness. However, there are various examples of what may broadly be termed "aversion" to illness narrative. I group these into three kinds: aversion to certain types of illness narrative; aversion to illness narrative as a whole; and aversion to illness narrative as an essentially therapeutic endeavor. These aversions can throw into doubt the advantages claimed for the illness narrator, including the key benefits of repair to the damage illness does to identity and life-trajectory. Underlying these alleged benefits are two key presuppositions: that it is the whole of one's life that is narratively unified, and that one's identity is inextricably bound up with narrative. By letting go of these assumptions, illness narrative advocates can respond to the challenges of narrative aversions. © The Author 2016. Published by Oxford University Press, on behalf of The Journal of Medicine and Philosophy Inc. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
A Note on the Assumption of Identical Distributions for Nonparametric Tests of Location
ERIC Educational Resources Information Center
Nordstokke, David W.; Colp, S. Mitchell
2018-01-01
Often, when testing for shift in location, researchers will utilize nonparametric statistical tests in place of their parametric counterparts when there is evidence or belief that the assumptions of the parametric test are not met (i.e., normally distributed dependent variables). An underlying and often unattended to assumption of nonparametric…
Detector-device-independent quantum key distribution: Security analysis and fast implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boaron, Alberto; Korzh, Boris; Houlmann, Raphael
One of the most pressing issues in quantum key distribution (QKD) is the problem of detector side-channel attacks. To overcome this problem, researchers proposed an elegant “time-reversal” QKD protocol called measurement-device-independent QKD (MDI-QKD), which is based on time-reversed entanglement swapping. But, MDI-QKD is more challenging to implement than standard point-to-point QKD. Recently, we proposed an intermediary QKD protocol called detector-device-independent QKD (DDI-QKD) in order to overcome the drawbacks of MDI-QKD, with the hope that it would eventually lead to a more efficient detector side-channel-free QKD system. We analyze the security of DDI-QKD and elucidate its security assumptions. We find that DDI-QKD is not equivalent to MDI-QKD, but its security can be demonstrated with reasonable assumptions. On the more practical side, we consider the feasibility of DDI-QKD and present a fast experimental demonstration (clocked at 625 MHz), capable of secret key exchange up to more than 90 km.
Evaluation of global onshore wind energy potential and generation costs.
Zhou, Yuyu; Luckow, Patrick; Smith, Steven J; Clarke, Leon
2012-07-17
In this study, we develop an updated global estimate of onshore wind energy potential using reanalysis wind speed data, along with updated wind turbine technology performance, land suitability factors, cost assumptions, and explicit consideration of transmission distance in the calculation of transmission costs. We find that wind has the potential to supply a significant portion of the world energy needs, although this potential varies substantially by region and with assumptions such as on what types of land can be used to site wind farms. Total global economic wind potential under central assumptions, that is, intermediate between optimistic and pessimistic, is estimated to be approximately 119.5 petawatt hours per year (13.6 TW) at less than 9 cents/kWh. A sensitivity analysis of eight key parameters is presented. Wind potential is sensitive to a number of input parameters, particularly wind speed (varying by -70% to +450% at less than 9 cents/kWh), land suitability (by -55% to +25%), turbine density (by -60% to +80%), and cost and financing options (by -20% to +200%), many of which have important policy implications. As a result of sensitivities studied here we suggest that further research intended to inform wind supply curve development focus not purely on physical science, such as better resolved wind maps, but also on these less well-defined factors, such as land-suitability, that will also have an impact on the long-term role of wind power.
Feature inference with uncertain categorization: Re-assessing Anderson's rational model.
Konovalova, Elizaveta; Le Mens, Gaël
2017-09-18
A key function of categories is to help predictions about unobserved features of objects. At the same time, humans are often in situations where the categories of the objects they perceive are uncertain. In an influential paper, Anderson (Psychological Review, 98(3), 409-429, 1991) proposed a rational model for feature inferences with uncertain categorization. A crucial feature of this model is the conditional independence assumption: it assumes that the within-category feature correlation is zero. In prior research, this model has been found to provide a poor fit to participants' inferences. This evidence is restricted to task environments inconsistent with the conditional independence assumption. Currently available evidence thus provides little information about how this model would fit participants' inferences in a setting with conditional independence. In four experiments based on a novel paradigm and one experiment based on an existing paradigm, we assess the performance of Anderson's model under conditional independence. We find that this model predicts participants' inferences better than competing models. One model assumes that inferences are based on just the most likely category. The second model is insensitive to categories but sensitive to overall feature correlation. The performance of Anderson's model is evidence that inferences were influenced not only by the more likely category but also by the other candidate category. Our findings suggest that a version of Anderson's model which relaxes the conditional independence assumption will likely perform well in environments characterized by within-category feature correlation.
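In symbols (a standard summary, not quoted from the paper), Anderson's rule integrates over all candidate categories, whereas the single-category strategy keeps only the most probable one:

```latex
% Anderson's rational model (features conditionally independent within a
% category), predicting feature f for object x:
\[
  P(f \mid x) \;=\; \sum_{k} P(f \mid k)\, P(k \mid x),
\]
% versus the single-category (most likely category) strategy:
\[
  P(f \mid x) \;\approx\; P\bigl(f \mid k^{*}\bigr),
  \qquad k^{*} = \arg\max_{k} P(k \mid x).
\]
```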
Yamada, Janet; Potestio, Melissa L; Cave, Andrew J; Sharpe, Heather; Johnson, David W; Patey, Andrea M; Presseau, Justin; Grimshaw, Jeremy M
2017-12-20
This study aimed to apply a theory-based approach to identify barriers and enablers to implementing the Alberta Primary Care Asthma Pediatric Pathway (PCAPP) into clinical practice. Phase 1 included an assessment of assumptions underlying the intervention from the perspectives of the developers. Phase 2 determined the perceived barriers and enablers for: 1) primary care physicians' prescribing practices, 2) allied health care professionals' provision of asthma education to parents, and 3) children and parents' adherence to their treatment plans. Interviews were conducted with 35 individuals who reside in Alberta, Canada. Phase 1 included three developers. Phase 2 included 11 primary care physicians, 10 allied health care professionals, and 11 parents of children with asthma. Phase 2 interviews were based on the 14 domains of the Theoretical Domains Framework (TDF). Transcribed interviews were analyzed using a directed content analysis. Key assumptions by the developers about the intervention, and beliefs by others about the barriers and enablers of the targeted behaviors were identified. Eight TDF domains mapped onto the assumptions of the pathway as described by the intervention developers. Interviews with health care professionals and parents identified nine TDF domains that influenced the targeted behaviors: knowledge, skills, beliefs about capabilities, social/professional role and identity, beliefs about consequences, environmental context and resources, behavioral regulation, social influences, and emotions. Barriers and enablers perceived by health care professionals and parents that influenced asthma management will inform the optimization of the PCAPP prior to its evaluation.
10 CFR 436.17 - Establishing energy or water cost data.
Code of Federal Regulations, 2011 CFR
2011-01-01
... escalation rate assumptions under § 436.14. When energy costs begin to accrue at a later time, subtract the... assumptions under § 436.14. When water costs begin to accrue at a later time, subtract the present value of... Methodology and Procedures for Life Cycle Cost Analyses § 436.17 Establishing energy or water cost data. (a...
ERIC Educational Resources Information Center
Sant, Edda; Hanley, Chris
2018-01-01
Teacher education in England now requires that student teachers follow practices that do not undermine "fundamental British values" where these practices are assessed against a set of ethics and behaviour standards. This paper examines the political assumptions underlying pedagogical interpretations about the education of national…
The Search for Effective Algorithms for Recovery from Loss of Separation
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.; Munoz, Cesar A.; Narawicz, Anthony J.
2012-01-01
Our previous work presented an approach for developing high confidence algorithms for recovering aircraft from loss of separation situations. The correctness theorems for the algorithms relied on several key assumptions, namely that state data for all local aircraft is perfectly known, that resolution maneuvers can be achieved instantaneously, and that all aircraft compute resolutions using exactly the same data. Experiments showed that these assumptions were adequate in cases where the aircraft are far away from losing separation, but are insufficient when the aircraft have already lost separation. This paper describes the results of this experimentation and proposes a new criteria specification for loss of separation recovery that preserves the formal safety properties of the previous criteria while overcoming some key limitations. Candidate algorithms that satisfy the new criteria are presented.
Taliotis, Constantinos; Taibi, Emanuele; Howells, Mark; Rogner, Holger; Bazilian, Morgan; Welsch, Manuel
2017-10-01
The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data in regards to renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.
The Importance of the Assumption of Uncorrelated Errors in Psychometric Theory
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos
2015-01-01
A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown prophecy formula, and the correction for attenuation, as well as…
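For reference, the two classical results mentioned, both of which presuppose uncorrelated errors, take the standard forms:

```latex
% Spearman-Brown prophecy: reliability of a test lengthened by a factor k,
% given reliability rho of the original test:
\[
  \rho_{k} = \frac{k\,\rho}{1 + (k-1)\,\rho},
\]
% correction for attenuation: the correlation between true scores, given the
% observed correlation r_{xy} and the reliabilities r_{xx}, r_{yy}:
\[
  r_{T_x T_y} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}.
\]
```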
Under What Assumptions Do Site-by-Treatment Instruments Identify Average Causal Effects?
ERIC Educational Resources Information Center
Reardon, Sean F.; Raudenbush, Stephen W.
2011-01-01
The purpose of this paper is to clarify the assumptions that must be met if this--multiple site, multiple mediator--strategy, hereafter referred to as "MSMM," is to identify the average causal effects (ATE) in the populations of interest. The authors' investigation of the assumptions of the multiple-mediator, multiple-site IV model demonstrates…
ERIC Educational Resources Information Center
Stapleton, Lee M.; Garrod, Guy D.
2007-01-01
Using a range of statistical criteria rooted in Information Theory we show that there is little justification for relaxing the equal weights assumption underlying the United Nation's Human Development Index (HDI) even if the true HDI diverges significantly from this assumption. Put differently, the additional model complexity that unequal weights…
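A toy numerical sketch of the point (simplified component indices and made-up values, not the UN's published data or exact aggregation method) shows how the choice of weights can reorder countries:

```python
# The (older, arithmetic-mean) HDI averages three component indices with equal
# weights; changing the weights can flip country rankings.
import numpy as np

# Hypothetical normalized component indices: [health, education, income]
components = {
    "Country A": np.array([0.90, 0.60, 0.78]),
    "Country B": np.array([0.70, 0.85, 0.75]),
}

equal_w = np.array([1 / 3, 1 / 3, 1 / 3])
unequal_w = np.array([0.5, 0.25, 0.25])   # e.g. weighting health more heavily

for name, idx in components.items():
    print(f"{name}: equal weights = {idx @ equal_w:.3f}, "
          f"unequal weights = {idx @ unequal_w:.3f}")
# Equal weights favour Country B; the health-weighted index favours Country A.
```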
Detailed assessment of global transport-energy models’ structures and projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, Sonia; Mishra, Gouri Shankar; Fulton, Lew
This paper focuses on comparing the frameworks and projections from four major global transportation models with considerable transportation technology and behavioral detail. We analyze and compare the modeling frameworks, underlying data, assumptions, intermediate parameters, and projections to identify the sources of divergence or consistency, as well as key knowledge gaps. We find that there are significant differences in the base-year data and key parameters for future projections, especially for developing countries. These include passenger and freight activity, mode shares, vehicle ownership rates, and even energy consumption by mode, particularly for shipping, aviation and trucking. This may be due in part to a lack of previous efforts to do such consistency-checking and “bench-marking.” We find that the four models differ in terms of the relative roles of various mitigation strategies to achieve a 2°C / 450 ppm CO2e target: the economics-based integrated assessment models favor the use of low carbon fuels as the primary mitigation option followed by efficiency improvements, whereas transport-only and expert-based models favor efficiency improvements of vehicles followed by mode shifts. We offer recommendations for future modeling improvements focusing on (1) reducing data gaps; (2) translating the findings from this study into relevant policy implications such as feasibility of current policy goals, additional policy targets needed, regional vs. global reductions, etc.; (3) modeling strata of demographic groups to improve understanding of vehicle ownership levels, travel behavior, and urban vs. rural considerations; and (4) conducting coordinated efforts in aligning input assumptions and historical data, policy analysis, and modeling insights.
Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates.
1984-03-01
Research Report CCS 481, Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates, by P. Brockett and B. Golany, Center for… March 1984… applying these tests to the U.S. dollar to Japanese yen foreign exchange rates. Conclusions and discussion are given in Section VI. The previous authors…
Monroe, Todd; Carter, Michael
2012-09-01
Cognitive scales are used frequently in geriatric research and practice. These instruments are constructed with underlying assumptions that are a part of their validation process. A common measurement scale used in older adults is the Folstein Mini Mental State Exam (MMSE). The MMSE was designed to screen for cognitive impairment and is used often in geriatric research. This paper has three aims. Aim one was to explore four potential threats to validity in the use of the MMSE: (1) administering the exam without meeting the underlying assumptions, (2) not reporting that the underlying assumptions were assessed prior to test administration, (3) use of variable and inconsistent cut-off scores for the determination of presence of cognitive impairment, and (4) failure to adjust the scores based on the demographic characteristics of the tested subject. Aim two was to conduct a literature search to determine if the assumptions of (1) education level assessment, (2) sensory assessment, and (3) language fluency were being met and clearly reported in published research using the MMSE. Aim three was to provide recommendations to minimize threats to validity in research studies that use cognitive scales, such as the MMSE. We found inconsistencies in published work in reporting whether or not subjects meet the assumptions that underlie a reliable and valid MMSE score. These inconsistencies can pose threats to the reliability of exam results. Fourteen of the 50 studies reviewed reported inclusion of all three of these assumptions. Inconsistencies in reporting the inclusion of the underlying assumptions for a reliable score could mean that subjects were not appropriate to be tested by use of the MMSE or that an appropriate test administration of the MMSE was not clearly reported. Thus, the research literature could have threats to both validity and reliability based on misuse of or improperly reported use of the MMSE. Six recommendations are provided to minimize these threats in future research.
Evolution of Requirements and Assumptions for Future Exploration Missions
NASA Technical Reports Server (NTRS)
Anderson, Molly; Sargusingh, Miriam; Perry, Jay
2017-01-01
NASA programs are maturing technologies, systems, and architectures to enable future exploration missions. To increase fidelity as technologies mature, developers must make assumptions that represent the requirements of a future program. Multiple efforts have begun to define these requirements, including team internal assumptions, planning system integration for early demonstrations, and discussions between international partners planning future collaborations. For many detailed life support system requirements, existing NASA documents set limits of acceptable values, but a future vehicle may be constrained in other ways, and select a limited range of conditions. Other requirements are effectively set by interfaces or operations, and may be different for the same technology depending on whether the hardware is a demonstration system on the International Space Station, or a critical component of a future vehicle. This paper highlights key assumptions representing potential life support requirements and explanations of the driving scenarios, constraints, or other issues that drive them.
Increased costs to US pavement infrastructure from future temperature rise
NASA Astrophysics Data System (ADS)
Underwood, B. Shane; Guido, Zack; Gudipudi, Padmini; Feinberg, Yarden
2017-10-01
Roadway design aims to maximize functionality, safety, and longevity. The materials used for construction, however, are often selected on the assumption of a stationary climate. Anthropogenic climate change may therefore result in rapid infrastructure failure and, consequently, increased maintenance costs, particularly for paved roads where temperature is a key determinant for material selection. Here, we examine the economic costs of projected temperature changes on asphalt roads across the contiguous United States using an ensemble of 19 global climate models forced with RCP 4.5 and 8.5 scenarios. Over the past 20 years, stationary assumptions have resulted in incorrect material selection for 35% of 799 observed locations. With warming temperatures, maintaining the standard practice for material selection is estimated to add approximately US$13.6, US$19.0 and US$21.8 billion to pavement costs by 2010, 2040 and 2070 under RCP4.5, respectively, increasing to US$14.5, US$26.3 and US$35.8 for RCP8.5. These costs will disproportionately affect local municipalities that have fewer resources to mitigate impacts. Failing to update engineering standards of practice in light of climate change therefore significantly threatens pavement infrastructure in the United States.
Farms, Families, and Markets: New Evidence on Completeness of Markets in Agricultural Settings
LaFave, Daniel; Thomas, Duncan
2016-01-01
The farm household model has played a central role in improving the understanding of small-scale agricultural households and non-farm enterprises. Under the assumptions that all current and future markets exist and that farmers treat all prices as given, the model simplifies households’ simultaneous production and consumption decisions into a recursive form in which production can be treated as independent of preferences of household members. These assumptions, which are the foundation of a large literature in labor and development, have been tested and not rejected in several important studies including Benjamin (1992). Using multiple waves of longitudinal survey data from Central Java, Indonesia, this paper tests a key prediction of the recursive model: demand for farm labor is unrelated to the demographic composition of the farm household. The prediction is unambiguously rejected. The rejection cannot be explained by contamination due to unobserved heterogeneity that is fixed at the farm level, local area shocks or farm-specific shocks that affect changes in household composition and farm labor demand. We conclude that the recursive form of the farm household model is not consistent with the data. Developing empirically tractable models of farm households when markets are incomplete remains an important challenge. PMID:27688430
Embracing chaos and complexity: a quantum change for public health.
Resnicow, Kenneth; Page, Scott E
2008-08-01
Public health research and practice have been guided by a cognitive, rational paradigm where inputs produce linear, predictable changes in outputs. However, the conceptual and statistical assumptions underlying this paradigm may be flawed. In particular, this perspective does not adequately account for nonlinear and quantum influences on human behavior. We propose that health behavior change is better understood through the lens of chaos theory and complex adaptive systems. Key relevant principles include that behavior change (1) is often a quantum event; (2) can resemble a chaotic process that is sensitive to initial conditions, highly variable, and difficult to predict; and (3) occurs within a complex adaptive system with multiple components, where results are often greater than the sum of their parts.
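The 'sensitive to initial conditions' point can be illustrated with the logistic map, a standard toy system from chaos theory (illustrative only; not a model of health behavior):

```python
# Two trajectories of the logistic map starting 1e-6 apart diverge completely
# within a few dozen iterations in the chaotic regime.
r = 3.9
x, y = 0.200000, 0.200001

for step in range(1, 31):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, |x - y| = {abs(x - y):.6f}")
# By step 30 the trajectories bear little resemblance to each other despite the
# near-identical starting points -- the kind of nonlinearity the authors argue
# the dominant linear paradigm misses.
```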
'Needs only' Analysis in Linguistic Ontogeny and Phylogeny
NASA Astrophysics Data System (ADS)
Wray, Alison
Recently, linguists from several quarters have begun to unpack some of the assumptions and claims made in linguistics over the last 40 years, opening up new possibilities for synergies between linguistic theory and the variety of fields that engage with it. A key point of exploration is the relationship between external manifestations of language and the underlying mental model that produces and understands them. To what extent does it remain reasonable to argue that all humans 'know' certain things about language, even if they never demonstrate that knowledge? What is the status of knowledge that is only stimulated into expression by particular cultural input? Many have asked whether the human's linguistic behaviour can be explained with recourse to less innate knowledge than Chomskian models traditionally assume.
An analytic formula for the supercluster mass function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seunghwan; Lee, Jounghun, E-mail: slim@astro.umass.edu, E-mail: jounghun@astro.snu.ac.kr
2014-03-01
We present an analytic formula for the supercluster mass function, which is constructed by modifying the extended Zel'dovich model for the halo mass function. The formula has two characteristic parameters whose best-fit values are determined by fitting to the numerical results from N-body simulations for the standard ΛCDM cosmology. The parameters are found to be independent of redshifts and robust against variation of the key cosmological parameters. Under the assumption that the same formula for the supercluster mass function is valid for non-standard cosmological models, we show that the relative abundance of the rich superclusters should be a powerful indicator of any deviation of the real universe from the prediction of the standard ΛCDM model.
Numerical test of the Edwards conjecture shows that all packings are equally probable at jamming
NASA Astrophysics Data System (ADS)
Martiniani, Stefano; Schrenk, K. Julian; Ramola, Kabir; Chakraborty, Bulbul; Frenkel, Daan
2017-09-01
In the late 1980s, Sam Edwards proposed a possible statistical-mechanical framework to describe the properties of disordered granular materials. A key assumption underlying the theory was that all jammed packings are equally likely. In the intervening years it has never been possible to test this bold hypothesis directly. Here we present simulations that provide direct evidence that at the unjamming point, all packings of soft repulsive particles are equally likely, even though generically, jammed packings are not. Typically, jammed granular systems are observed precisely at the unjamming point since grains are not very compressible. Our results therefore support Edwards’ original conjecture. We also present evidence that at unjamming the configurational entropy of the system is maximal.
Flood return level analysis of Peaks over Threshold series under changing climate
NASA Astrophysics Data System (ADS)
Li, L.; Xiong, L.; Hu, T.; Xu, C. Y.; Guo, S.
2016-12-01
Obtaining insights into future flood estimation is of great significance for water planning and management. Traditional flood return level analysis with the stationarity assumption has been challenged by changing environments. A method that takes the nonstationarity context into consideration has been extended to derive flood return levels for Peaks over Threshold (POT) series. With application to POT series, a Poisson distribution is normally assumed to describe the arrival rate of exceedance events, but this distribution assumption has at times been reported as invalid. The Negative Binomial (NB) distribution is therefore proposed as an alternative to the Poisson distribution assumption. Flood return levels were extrapolated in a nonstationarity context for the POT series of the Weihe basin, China, under future climate scenarios. The results show that the flood return levels estimated under nonstationarity can differ depending on whether a Poisson or an NB distribution is assumed. The difference is found to be related to the threshold value of the POT series. The study indicates the importance of distribution selection in flood return level analysis under nonstationarity and provides a reference on the impact of climate change on flood estimation in the Weihe basin for the future.
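A minimal sketch of the overdispersion check that motivates the NB alternative (simulated exceedance counts, not the Weihe series) might look like this:

```python
# If yearly exceedance counts are overdispersed (variance > mean), the Poisson
# arrival-rate assumption is questionable and a negative binomial fits better.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
counts = rng.negative_binomial(n=2, p=0.4, size=60)   # simulated yearly counts

mean, var = counts.mean(), counts.var(ddof=1)
print(f"mean = {mean:.2f}, variance = {var:.2f}")     # variance > mean here

# Method-of-moments negative binomial fit.
n_param = mean**2 / (var - mean)
p_param = n_param / (n_param + mean)

# Compare log-likelihoods of the two arrival-rate models.
ll_pois = stats.poisson.logpmf(counts, mean).sum()
ll_nb = stats.nbinom.logpmf(counts, n_param, p_param).sum()
print(f"log-likelihood: Poisson = {ll_pois:.1f}, negative binomial = {ll_nb:.1f}")
```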
The "7 Keys of the Dragon": An E-Learning Gamelike Environment for Albanian and Russian
ERIC Educational Resources Information Center
Revithiadou, Anthi; Kourtis-Kazoullis, Vasilia; Soukalopoulou, Maria; Konstantoudakis, Konstantinos; Zarras, Christos; Pelesoglou, Nestoras
2014-01-01
In this article we report on the development of an interactive open source extensible software, dubbed "The 7 Keys of the Dragon," for the teaching/learning of Albanian and Russian to students (9-12 years old) with the respective languages as their heritage languages. Based on the assumption that games in language learning are associated…
Security of six-state quantum key distribution protocol with threshold detectors
Kato, Go; Tamaki, Kiyoshi
2016-01-01
The security of quantum key distribution (QKD) is established by a security proof, and the security proof places some assumptions on the devices that make up a QKD system. Among such assumptions, security proofs of the six-state protocol assume the use of a photon-number-resolving (PNR) detector, and as a result the bit error rate threshold for secure key generation for the six-state protocol is higher than that for the BB84 protocol. Unfortunately, however, this type of detector is demanding in terms of technological level compared to the standard threshold detector, and removing the necessity of such a detector enhances the feasibility of the implementation of the six-state protocol. Here, we develop the security proof for the six-state protocol and show that we can use the threshold detector for the six-state protocol. Importantly, the bit error rate threshold for key generation for the six-state protocol (12.611%) remains almost the same as the one (12.619%) that is derived from the existing security proofs assuming the use of PNR detectors. This clearly demonstrates the feasibility of the six-state protocol with practical devices. PMID:27443610
Assumptions of Statistical Tests: What Lies Beneath.
Jupiter, Daniel C
We have discussed many statistical tests and tools in this series of commentaries, and while we have mentioned the underlying assumptions of the tests, we have not explored them in detail. We stop to look at some of the assumptions of the t-test and linear regression, justify and explain them, mention what can go wrong when the assumptions are not met, and suggest some solutions in this case. Copyright © 2017 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
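A brief sketch of what such assumption checks look like in practice (simulated data; the commentary itself is not tied to any particular software) is given below:

```python
# Checking two assumptions before a two-sample t-test: approximate normality
# within groups and equality of variances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
group_a = rng.normal(10.0, 2.0, 40)
group_b = rng.normal(11.0, 2.0, 40)

print("Shapiro-Wilk (normality):", stats.shapiro(group_a), stats.shapiro(group_b))
print("Levene (equal variances):", stats.levene(group_a, group_b))

# If the equal-variance assumption is doubtful, Welch's t-test is a common fix.
print("Student t:", stats.ttest_ind(group_a, group_b))
print("Welch t:  ", stats.ttest_ind(group_a, group_b, equal_var=False))
```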
Patterns and Assumptions: The Keys to Understanding Organizational Cultures.
1982-06-01
assumption of the Navaho Indians: Experience shows that if one asks Navaho Indians about witchcraft, more than 70 per cent will give almost identical… verbal responses. The replies will vary only in this fashion: "Who told you to talk to me about witchcraft?" "Who said that I knew anything about… witchcraft?" "Why do you come to ask about this--who told you I knew about it?" Here one has a behavioral pattern of the explicit culture, for the structure…
Behavioral health at-risk contracting--a rate development and financial reporting guide.
Zinser, G R
1994-01-01
The process of developing rates for behavioral capitation contracts can seem mysterious and intimidating. The following article explains several key features of the method used to develop capitation rates. These include: (1) a basic understanding of the mechanics of rate calculation; (2) awareness of the variables to be considered and assumptions to be made; (3) a source of information to use as a basis for these assumptions; and (4) a system to collect detailed actual experience data.
The Infeasibility of Experimental Quantification of Life-Critical Software Reliability
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Finelli, George B.
1991-01-01
This paper affirms that quantification of life-critical software reliability is infeasible using statistical methods whether applied to standard software or fault-tolerant software. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultra-reliability region and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multi-version software experiments support this affirmation.
Schmidt, Joshua H; Wilson, Tammy L; Thompson, William L; Reynolds, Joel H
2017-07-01
Obtaining useful estimates of wildlife abundance or density requires thoughtful attention to potential sources of bias and precision, and it is widely understood that addressing incomplete detection is critical to appropriate inference. When the underlying assumptions of sampling approaches are violated, both increased bias and reduced precision of the population estimator may result. Bear ( Ursus spp.) populations can be difficult to sample and are often monitored using mark-recapture distance sampling (MRDS) methods, although obtaining adequate sample sizes can be cost prohibitive. With the goal of improving inference, we examined the underlying methodological assumptions and estimator efficiency of three datasets collected under an MRDS protocol designed specifically for bears. We analyzed these data using MRDS, conventional distance sampling (CDS), and open-distance sampling approaches to evaluate the apparent bias-precision tradeoff relative to the assumptions inherent under each approach. We also evaluated the incorporation of informative priors on detection parameters within a Bayesian context. We found that the CDS estimator had low apparent bias and was more efficient than the more complex MRDS estimator. When combined with informative priors on the detection process, precision was increased by >50% compared to the MRDS approach with little apparent bias. In addition, open-distance sampling models revealed a serious violation of the assumption that all bears were available to be sampled. Inference is directly related to the underlying assumptions of the survey design and the analytical tools employed. We show that for aerial surveys of bears, avoidance of unnecessary model complexity, use of prior information, and the application of open population models can be used to greatly improve estimator performance and simplify field protocols. Although we focused on distance sampling-based aerial surveys for bears, the general concepts we addressed apply to a variety of wildlife survey contexts.
Bartnik’s splitting conjecture and Lorentzian Busemann function
NASA Astrophysics Data System (ADS)
Amini, Roya; Sharifzadeh, Mehdi; Bahrampour, Yousof
2018-05-01
In 1988 Bartnik posed the splitting conjecture about the cosmological space-time. This conjecture has been proved by several people, with different approaches and by using some additional assumptions such as ‘S-ray condition’ and ‘level set condition’. It is known that the ‘S-ray condition’ yields the ‘level set condition’. We have proved that the two are indeed equivalent, by giving a different proof under the assumption of the ‘level set condition’. In addition, we have shown several properties of the cosmological space-time, under the presence of the ‘level set condition’. Finally we have provided a proof of the conjecture under a different assumption on the cosmological space-time. But we first prove some results without the timelike convergence condition which help us to state our proofs.
Latimer, Nicholas R; Abrams, Keith R; Lambert, Paul C; Crowther, Michael J; Wailoo, Allan J; Morden, James P; Akehurst, Ron L; Campbell, Michael J
2014-04-01
Treatment switching commonly occurs in clinical trials of novel interventions in the advanced or metastatic cancer setting. However, methods to adjust for switching have been used inconsistently and potentially inappropriately in health technology assessments (HTAs). We present recommendations on the use of methods to adjust survival estimates in the presence of treatment switching in the context of economic evaluations. We provide background on the treatment switching issue and summarize methods used to adjust for it in HTAs. We discuss the assumptions and limitations associated with adjustment methods and draw on results of a simulation study to make recommendations on their use. We demonstrate that methods used to adjust for treatment switching have important limitations and often produce bias in realistic scenarios. We present an analysis framework that aims to increase the probability that suitable adjustment methods can be identified on a case-by-case basis. We recommend that the characteristics of clinical trials, and the treatment switching mechanism observed within them, should be considered alongside the key assumptions of the adjustment methods. Key assumptions include the "no unmeasured confounders" assumption associated with the inverse probability of censoring weights (IPCW) method and the "common treatment effect" assumption associated with the rank preserving structural failure time model (RPSFTM). The limitations associated with switching adjustment methods such as the RPSFTM and IPCW mean that they are appropriate in different scenarios. In some scenarios, both methods may be prone to bias; "2-stage" methods should be considered, and intention-to-treat analyses may sometimes produce the least bias. The data requirements of adjustment methods also have important implications for clinical trialists.
Global warming and extinctions of endemic species from biodiversity hotspots.
Malcolm, Jay R; Liu, Canran; Neilson, Ronald P; Hansen, Lara; Hannah, Lee
2006-04-01
Global warming is a key threat to biodiversity, but few researchers have assessed the magnitude of this threat at the global scale. We used major vegetation types (biomes) as proxies for natural habitats and, based on projected future biome distributions under doubled-CO2 climates, calculated changes in habitat areas and associated extinctions of endemic plant and vertebrate species in biodiversity hotspots. Because of numerous uncertainties in this approach, we undertook a sensitivity analysis of multiple factors that included (1) two global vegetation models, (2) different numbers of biome classes in our biome classification schemes, (3) different assumptions about whether species distributions were biome specific or not, and (4) different migration capabilities. Extinctions were calculated using both species-area and endemic-area relationships. In addition, average required migration rates were calculated for each hotspot assuming a doubled-CO2 climate in 100 years. Projected percent extinctions ranged from <1 to 43% of the endemic biota (average 11.6%), with biome specificity having the greatest influence on the estimates, followed by the global vegetation model and then by migration and biome classification assumptions. Bootstrap comparisons indicated that effects on hotspots as a group were not significantly different from effects on random same-biome collections of grid cells with respect to biome change or migration rates; in some scenarios, however, hotspots exhibited relatively high biome change and low migration rates. Especially vulnerable hotspots were the Cape Floristic Region, Caribbean, Indo-Burma, Mediterranean Basin, Southwest Australia, and Tropical Andes, where plant extinctions per hotspot sometimes exceeded 2000 species. Under the assumption that projected habitat changes were attained in 100 years, estimated global-warming-induced rates of species extinctions in tropical hotspots in some cases exceeded those due to deforestation, supporting suggestions that global warming is one of the most serious threats to the planet's biodiversity.
Emissions Scenario Portal for Visualization of Low Carbon Pathways
NASA Astrophysics Data System (ADS)
Friedrich, J.; Hennig, R. J.; Mountford, H.; Altamirano, J. C.; Ge, M.; Fransen, T.
2016-12-01
This proposal for a presentation is centered around a new project which is developed collaboratively by the World Resources Institute (WRI), Google Inc., and the Deep Decarbonization Pathways Project (DDPP). The project aims to develop an online, open portal, the Emissions Scenario Portal (ESP), to enable users to easily visualize a range of future greenhouse gas emission pathways linked to different scenarios of economic and energy developments, drawing from a variety of modeling tools. It is targeted to users who are not modelling experts, but instead policy analysts or advisors, investment analysts, and others who draw on modelled scenarios to inform their work, and who can benefit from better access to, and transparency around, the wide range of emerging scenarios on ambitious climate action. The ESP will provide information from scenarios in a visually appealing and easy-to-understand manner that enables these users to recognize the opportunities to reduce GHG emissions, the implications of the different scenarios, and the underlying assumptions. To facilitate the application of the portal and tools in policy dialogues, a series of country-specific and potentially sector-specific workshops with key decision-makers and analysts, supported by relevant analysis, will be organized by the key partners and also in broader collaboration with others who might wish to convene relevant groups around the information. This project will provide opportunities for modelers to increase their outreach and visibility in the public space and to directly interact with key audiences of emissions scenarios, such as policy analysts and advisors. The information displayed on the portal will cover a wide range of indicators, sectors and important scenario characteristics such as macroeconomic information, emission factors and policy as well as technology assumptions in order to facilitate comparison. These indicators have been selected based on existing standards (such as the IIASA AR5 database, the Greenhouse Gas Protocol and accounting literature) and stakeholder consultations. Examples of use cases include: technical advisers for governments, NGO/civil society advocates, investors and bankers, modelers and academics, and business sustainability officers.
Goodman, Claire; Davies, Sue L; Gordon, Adam L; Meyer, Julienne; Dening, Tom; Gladman, John R F; Iliffe, Steve; Zubair, Maria; Bowman, Clive; Victor, Christina; Martin, Finbarr C
2015-05-01
To explore what commissioners of care, regulators, providers, and care home residents in England identify as the key mechanisms or components of different service delivery models that support the provision of National Health Service (NHS) services to independent care homes. Qualitative, semistructured interviews with a purposive sample of people with direct experience of commissioning, providing, and regulating health care provision in care homes, and with care home residents. Data from interviews were augmented by a secondary analysis of previous interviews with care home residents on their personal experience of and priorities for access to health care. Analysis was framed by the assumptions of realist evaluation and drew on the constant comparative method to identify key themes about what is required to achieve quality health care provision to care homes and resident health. Participants identified 3 overlapping approaches to the provision of NHS services that they believed supported access to health care for older people in care homes: (1) Investment in relational working that fostered continuity and shared learning between visiting NHS staff and care home staff, (2) the provision of age-appropriate clinical services, and (3) governance arrangements that used contractual and financial incentives to specify a minimum service that care homes should receive. The 3 approaches, and how they were typified as working, provide a rich picture of the stakeholder perspectives and the underlying assumptions about how service delivery models should work with care homes. The findings inform how evidence on effective working in care homes will be interrogated to identify how different approaches, or specifically key elements of those approaches, achieve different health-related outcomes in different situations for residents and associated health and social care organizations. Copyright © 2015 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Fürstenau Togashi, Henrique; Prentice, Iain Colin; Atkin, Owen K.; Macfarlane, Craig; Prober, Suzanne M.; Bloomfield, Keith J.; Evans, Bradley John
2018-06-01
Ecosystem models commonly assume that key photosynthetic traits, such as carboxylation capacity measured at a standard temperature, are constant in time. The temperature responses of modelled photosynthetic or respiratory rates then depend entirely on enzyme kinetics. Optimality considerations, however, suggest this assumption may be incorrect. The coordination hypothesis
(that Rubisco- and electron-transport-limited rates of photosynthesis are co-limiting under typical daytime conditions) predicts, instead, that carboxylation (Vcmax) capacity should acclimate so that it increases somewhat with growth temperature but less steeply than its instantaneous response, implying that Vcmax when normalized to a standard temperature (e.g. 25 °C) should decline with growth temperature. With additional assumptions, similar predictions can be made for electron-transport capacity (Jmax) and mitochondrial respiration in the dark (Rdark). To explore these hypotheses, photosynthetic measurements were carried out on woody species during the warm and the cool seasons in the semi-arid Great Western Woodlands, Australia, under broadly similar light environments. A consistent proportionality between Vcmax and Jmax was found across species. Vcmax, Jmax and Rdark increased with temperature in most species, but their values standardized to 25 °C declined. The ci : ca ratio increased slightly with temperature. The leaf N : P ratio was lower in the warm season. The slopes of the relationships between log-transformed Vcmax and Jmax and temperature were close to values predicted by the coordination hypothesis but shallower than those predicted by enzyme kinetics.
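The normalization step behind this result can be sketched as follows: a measured Vcmax at the prevailing leaf temperature is converted to a common reference temperature (25 °C) with an instantaneous Arrhenius response, so any acclimation shows up as a change in the normalized value. The activation energy and the example measurements below are assumed values for illustration only, not the study's data.

```python
import numpy as np

R = 8.314           # J mol-1 K-1
Ha_vcmax = 65330.0  # J mol-1, a commonly used activation energy (assumed here)

def to_reference(rate_at_t, t_leaf_c, t_ref_c=25.0, ha=Ha_vcmax):
    """Convert a rate measured at leaf temperature t_leaf_c to the reference
    temperature using a simple Arrhenius response (instantaneous kinetics only)."""
    tk, tref = t_leaf_c + 273.15, t_ref_c + 273.15
    return rate_at_t / np.exp(ha * (tk - tref) / (tref * R * tk))

# Hypothetical measurements: Vcmax at the prevailing leaf temperature in each season.
for season, (vcmax_meas, t_leaf) in {"cool": (55.0, 18.0), "warm": (80.0, 35.0)}.items():
    v25 = to_reference(vcmax_meas, t_leaf)
    print(f"{season} season: Vcmax(Tleaf) = {vcmax_meas:.0f}, Vcmax25 = {v25:.1f} umol m-2 s-1")
```

With these illustrative numbers the measured rate is higher in the warm season while the 25 °C-normalized value is lower, which is the acclimation pattern the abstract describes.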
Cullen, Patricia; Clapham, Kathleen; Byrne, Jake; Hunter, Kate; Senserrick, Teresa; Keay, Lisa; Ivers, Rebecca
2016-08-01
Evidence indicates that Aboriginal people are underrepresented among driver licence holders in New South Wales, which has been attributed to licensing barriers for Aboriginal people. The Driving Change program was developed to provide culturally responsive licensing services that engage Aboriginal communities and build local capacity. This paper outlines the formative evaluation of the program, including logic model construction and exploration of contextual factors. Purposive sampling was used to identify key informants (n=12) from a consultative committee of key stakeholders and program staff. Semi-structured interviews were transcribed and thematically analysed. Data from interviews informed development of the logic model. Participants demonstrated a high level of support for the program and reported that it filled an important gap. The program context revealed systemic barriers to licensing that were correspondingly targeted by specific program outputs in the logic model. Addressing the underlying assumptions of the program involved managing local capacity and support to strengthen implementation. This formative evaluation highlights the importance of exploring program context as a crucial first step in logic model construction. The consultation process assisted in clarifying program goals and ensuring that the program was responding to underlying systemic factors that contribute to inequitable licensing access for Aboriginal people. Copyright © 2016 Elsevier Ltd. All rights reserved.
Community engagement as conflict prevention: Understanding the social license to operate
NASA Astrophysics Data System (ADS)
Knih, Dejana
This thesis examines community engagement as a form of conflict prevention in order to obtain the social license to operate (SLO) in Alberta's oil and gas industry. It does this by answering the question: what are the key elements of the Social License to Operate and how can these elements be applied to community engagement/consultation in a way that prevents conflicts in Alberta's oil and gas industry? The underlying assumption of this thesis is that building good relationships and working collaboratively functions as a form of conflict prevention and that this in turn leads to the SLO. This thesis outlines the key features of both successful community engagement and of the SLO, to provide a guideline for what is needed to obtain the SLO. Data was collected from semi-structured interviews and through a literature review. The data analysis concluded that there are direct parallels between the key elements of effective community engagement and the key elements of the SLO as identified in the interviews. These parallels are: knowing the community, addressing community needs, corporate social responsibility, relationship building, follow through and evidence for what has been done, executive buy-in, excellent communication, and open dialogue, all within a process which is principled (there is trust, understanding, transparency and respect), inclusive, dynamic, flexible, ongoing, and long-term. Moreover, the key elements of effective community engagement and of the SLO identified in the interviews also overlapped with those found in the literature review, with only one exception. The literature review explicitly named early involvement as a key element of both effective community engagement and the SLO, whereas the interview participants only explicitly indicated it as a key factor of community engagement and implied it to be a key element of the SLO.
Deterministic MDI QKD with two secret bits per shared entangled pair
NASA Astrophysics Data System (ADS)
Zebboudj, Sofia; Omar, Mawloud
2018-03-01
Although quantum key distribution schemes have been proven theoretically secure, they are based on assumptions about the devices that are not yet satisfied with today's technology. The measurement-device-independent scheme has been proposed to shorten the gap between theory and practice by removing all detector side-channel attacks. On the other hand, two-way quantum key distribution schemes have been proposed to raise the secret key generation rate. In this paper, we propose a new quantum key distribution scheme able to achieve a relatively high secret key generation rate based on two-way quantum key distribution that also inherits the robustness of the measurement-device-independent scheme against detector side-channel attacks.
Server-Controlled Identity-Based Authenticated Key Exchange
NASA Astrophysics Data System (ADS)
Guo, Hua; Mu, Yi; Zhang, Xiyong; Li, Zhoujun
We present a threshold identity-based authenticated key exchange protocol that can be applied to an authenticated server-controlled gateway-user key exchange. The objective is to allow a user and a gateway to establish a shared session key with the permission of the back-end servers, while the back-end servers cannot obtain any information about the established session key. Our protocol has potential applications in strong access control of confidential resources. In particular, our protocol possesses the semantic security and demonstrates several highly-desirable security properties such as key privacy and transparency. We prove the security of the protocol based on the Bilinear Diffie-Hellman assumption in the random oracle model.
Baseline projections for Latin America: base-year assumptions, key drivers and greenhouse emissions
van Ruijven, Bas J.; Daenzer, Katie; Fisher-Vanden, Karen; ...
2016-02-14
This article provides an overview of the base-year assumptions and core baseline projections for the set of models participating in the LAMP and CLIMACAP projects. Here we present the range in core baseline projections for Latin America, and identify key differences between model projections including how these projections compare to historic trends. We find relatively large differences across models in base year assumptions related to population, GDP, energy and CO2 emissions due to the use of different data sources, but also conclude that this does not influence the range of projections. We find that population and GDP projections across models span a broad range, comparable to the range represented by the set of Shared Socioeconomic Pathways (SSPs). Kaya-factor decomposition indicates that the set of core baseline scenarios mirrors trends experienced over the past decades. Emissions in Latin America are projected to rise as a result of GDP and population growth and a minor shift in the energy mix toward fossil fuels. Most scenarios assume a somewhat higher GDP growth than historically observed and continued decline of population growth. Minor changes in energy intensity or energy mix are projected over the next few decades.
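The Kaya-factor decomposition mentioned above splits emissions into population, per-capita GDP, energy intensity of GDP, and carbon intensity of energy. The sketch below shows the arithmetic with placeholder numbers; the values are not LAMP or CLIMACAP data.

```python
# Kaya identity: CO2 = population * (GDP/population) * (energy/GDP) * (CO2/energy).
# The numbers below are illustrative placeholders, not project data.
base = {"pop": 600e6, "gdp": 5.0e12, "energy": 30.0e18, "co2": 1.7e9}   # persons, USD, J, t CO2
proj = {"pop": 750e6, "gdp": 12.0e12, "energy": 55.0e18, "co2": 3.2e9}

def kaya_factors(s):
    return {
        "population": s["pop"],
        "gdp_per_capita": s["gdp"] / s["pop"],
        "energy_intensity": s["energy"] / s["gdp"],
        "carbon_intensity": s["co2"] / s["energy"],
    }

f0, f1 = kaya_factors(base), kaya_factors(proj)
for name in f0:
    growth = f1[name] / f0[name] - 1.0
    print(f"{name:16s} changes by {growth:+.1%}")
print(f"implied CO2 change: {proj['co2'] / base['co2'] - 1.0:+.1%}")
```

Because the four factors multiply to total emissions, the product of their growth ratios reproduces the projected emissions change, which is what makes the decomposition useful for comparing baseline scenarios.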
Cognitive neuroenhancement: false assumptions in the ethical debate.
Heinz, Andreas; Kipke, Roland; Heimann, Hannah; Wiesing, Urban
2012-06-01
The present work critically examines two assumptions frequently stated by supporters of cognitive neuroenhancement. The first, explicitly methodological, assumption is the supposition of effective and side-effect-free neuroenhancers. However, there is an evidence-based concern that the most promising drugs currently used for cognitive enhancement can be addictive. Furthermore, this work describes why the neuronal correlates of key cognitive concepts, such as learning and memory, are so deeply connected with mechanisms implicated in the development and maintenance of addictive behaviour that modification of these systems may inevitably run the risk of addiction to the enhancing drugs. Such a potential risk of addiction could only be falsified by in-depth empirical research. The second, implicit, assumption is that research on neuroenhancement does not pose a serious moral problem. However, the potential for addiction, along with arguments related to research ethics and the potential social impact of neuroenhancement, could invalidate this assumption. It is suggested that ethical evaluation needs to consider the empirical data as well as the question of whether and how such empirical knowledge can be obtained.
Impact of actuarial assumptions on pension costs: A simulation analysis
NASA Astrophysics Data System (ADS)
Yusof, Shaira; Ibrahim, Rose Irnawaty
2013-04-01
This study investigates the sensitivity of pension costs to changes in the underlying assumptions of a hypothetical pension plan in order to gain a perspective on the relative importance of the various actuarial assumptions via a simulation analysis. Simulation analyses are used to examine the impact of actuarial assumptions on pension costs. Two actuarial assumptions are considered in this study: mortality rates and interest rates. To calculate pension costs, the Accrued Benefit Cost Method with constant amount (CA) and constant percentage of salary (CS) modifications is used. The mortality assumptions and the implied mortality experience of the plan can potentially have a significant impact on pension costs. The interest rate assumption, by contrast, is inversely related to pension costs. Results of the study have important implications for analysts of pension costs.
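The inverse relationship between the interest rate assumption and pension cost can be seen in a minimal present-value calculation. The sketch below uses a flat survival probability in place of a mortality table and entirely hypothetical plan values; it is an illustration of the sensitivity, not the study's cost method.

```python
def accrued_benefit_cost(annual_benefit, years_to_retirement, payout_years,
                         interest, survival=0.98):
    """Present value of an accrued deferred annuity under simplified assumptions:
    a flat annual survival probability stands in for a mortality table."""
    pv = 0.0
    for k in range(payout_years):
        t = years_to_retirement + k
        pv += annual_benefit * (survival ** t) / ((1 + interest) ** t)
    return pv

# Illustrative plan member: a benefit of 12,000 per year for 20 years, starting 25 years from now.
for i in (0.03, 0.05, 0.07):
    cost = accrued_benefit_cost(12_000, 25, 20, i)
    print(f"interest {i:.0%}: pension cost = {cost:,.0f}")
```

Raising the assumed interest rate discounts the future benefit payments more heavily, so the computed cost falls, consistent with the inverse relationship reported above.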
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogen, K.T.; Conrado, C.L.; Robison, W.L.
A detailed analysis of uncertainty and interindividual variability in estimated doses was conducted for a rehabilitation scenario for Bikini Island at Bikini Atoll, in which the top 40 cm of soil would be removed in the housing and village area, and the rest of the island is treated with potassium fertilizer, prior to an assumed resettlement date of 1999. Predicted doses were considered for the following fallout-related exposure pathways: ingested Cesium-137 and Strontium-90, external gamma exposure, and inhalation and ingestion of Americium-241 + Plutonium-239+240. Two dietary scenarios were considered: (1) imported foods are available (IA), and (2) imported foods are unavailable (only local foods are consumed) (IUA). Corresponding calculations of uncertainty in estimated population-average dose showed that after approximately 5 y of residence on Bikini, the upper and lower 95% confidence limits with respect to uncertainty in this dose are estimated to be approximately 2-fold higher and lower than its population-average value, respectively (under both IA and IUA assumptions). Corresponding calculations of interindividual variability in the expected value of dose with respect to uncertainty showed that after approximately 5 y of residence on Bikini, the upper and lower 95% confidence limits with respect to interindividual variability in this dose are estimated to be approximately 2-fold higher and lower than its expected value, respectively (under both IA and IUA assumptions). For reference, the expected values of population-average dose at age 70 were estimated to be 1.6 and 5.2 cSv under the IA and IUA dietary assumptions, respectively. Assuming that 200 Bikini resettlers would be exposed to local foods (under both IA and IUA assumptions), the maximum 1-y dose received by any Bikini resident is most likely to be approximately 2 and 8 mSv under the IA and IUA assumptions, respectively.
The Robustness of LOGIST and BILOG IRT Estimation Programs to Violations of Local Independence.
ERIC Educational Resources Information Center
Ackerman, Terry A.
One of the important underlying assumptions of all item response theory (IRT) models is that of local independence. This assumption requires that the response to an item on a test not be influenced by the response to any other items. This assumption is often taken for granted, with little or no scrutiny of the response process required to answer…
ERIC Educational Resources Information Center
Johnstone, D. Bruce
As background to the National Dialogue on Student Financial Aid, this essay discusses the fundamental assumptions and aims that underlie the principles and policies of federal financial aid to students. These eight assumptions and aims are explored: (1) higher education is the province of states, and not of the federal government; (2) the costs of…
NASA Astrophysics Data System (ADS)
Solazzi, Santiago G.; Rubino, J. Germán; Müller, Tobias M.; Milani, Marco; Guarracino, Luis; Holliger, Klaus
2016-11-01
Wave-induced fluid flow (WIFF) due to the presence of mesoscopic heterogeneities is considered as one of the main seismic attenuation mechanisms in the shallower parts of the Earth's crust. For this reason, several models have been developed to quantify seismic attenuation in the presence of heterogeneities of varying complexity, ranging from periodically layered media to rocks containing fractures and highly irregular distributions of fluid patches. Most of these models are based on Biot's theory of poroelasticity and make use of the assumption that the upscaled counterpart of a heterogeneous poroelastic medium can be represented by a homogeneous viscoelastic solid. Under this dynamic-equivalent viscoelastic medium (DEVM) assumption, attenuation is quantified in terms of the ratio of the imaginary and real parts of a frequency-dependent, complex-valued viscoelastic modulus. Laboratory measurements on fluid-saturated rock samples also rely on this DEVM assumption when inferring attenuation from the phase shift between the applied stress and the resulting strain. However, whether it is correct to use an effective viscoelastic medium to represent the attenuation arising from WIFF at mesoscopic scales in heterogeneous poroelastic media remains largely unexplored. In this work, we present an alternative approach to estimate seismic attenuation due to WIFF. It is fully rooted in the framework of poroelasticity and is based on the quantification of the dissipated power and stored strain energy resulting from numerical oscillatory relaxation tests. We employ this methodology to compare different definitions of the inverse quality factor for a set of pertinent scenarios, including patchy saturation and fractured rocks. This numerical analysis allows us to verify the correctness of the DEVM assumption in the presence of different kinds of heterogeneities. The proposed methodology has the key advantage of providing the local contributions of energy dissipation to the overall seismic attenuation, information that is not available when attenuation is retrieved from methods based on the DEVM assumption. Using the local attenuation contributions we provide further insights into the WIFF mechanism for randomly distributed fluid patches and explore the accumulation of energy dissipation in the vicinity of fractures.
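For a single homogeneous viscoelastic element the two attenuation definitions discussed above coincide, and that equivalence can be checked numerically. The sketch below does so for a standard linear solid (Zener) modulus with assumed parameters: 1/Q is computed once from the complex modulus (the DEVM definition) and once from the dissipated work per cycle divided by 2*pi times the peak stored strain energy. The paper's question, whether the equivalence survives in heterogeneous poroelastic media, is of course not settled by this toy calculation.

```python
import numpy as np

# Standard linear solid (Zener) modulus: M(w) = M_rel * (1 + i w tau_e) / (1 + i w tau_s).
# Parameters are illustrative, not taken from the study.
M_rel, tau_s = 9.0e9, 1.0e-2                      # relaxed modulus (Pa), stress relaxation time (s)
tau_e = 1.3e-2                                    # strain relaxation time (s); tau_e > tau_s
w = 2 * np.pi * np.logspace(-1, 3, 5)             # angular frequencies (rad/s)
M = M_rel * (1 + 1j * w * tau_e) / (1 + 1j * w * tau_s)

# Definition 1 (viscoelastic / DEVM): 1/Q = Im(M) / Re(M).
invQ_devm = M.imag / M.real

# Definition 2 (energy based): dissipated work per cycle over 2*pi times peak stored energy.
eps0 = 1e-6                                       # strain amplitude of the oscillatory test
t = np.linspace(0, 2 * np.pi, 20001)              # one cycle, parametrized by phase
invQ_energy = np.empty_like(invQ_devm)
for k, Mk in enumerate(M):
    strain_rate = -eps0 * np.sin(t)                               # d(eps)/d(phase)
    stress = eps0 * (Mk.real * np.cos(t) - Mk.imag * np.sin(t))   # Re[M * eps0 * e^{i phase}]
    dW = np.trapz(stress * strain_rate, t)                        # work dissipated per cycle
    W_peak = 0.5 * Mk.real * eps0**2
    invQ_energy[k] = dW / (2 * np.pi * W_peak)

print(np.round(invQ_devm, 5))
print(np.round(invQ_energy, 5))                   # matches for a homogeneous element
```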
Ways to Help Divorced Parents Communicate Better on Behalf of Their Children.
ERIC Educational Resources Information Center
Marston, Stephanie
1994-01-01
Five keys to effective communication for divorced parents raising their children include being clear about what they want, keeping it simple, being businesslike, avoiding assumptions, and staying in the present. (SM)
Detector-device-independent quantum key distribution: Security analysis and fast implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boaron, Alberto; Korzh, Boris; Boso, Gianluca
One of the most pressing issues in quantum key distribution (QKD) is the problem of detector side-channel attacks. To overcome this problem, researchers proposed an elegant “time-reversal” QKD protocol called measurement-device-independent QKD (MDI-QKD), which is based on time-reversed entanglement swapping. However, MDI-QKD is more challenging to implement than standard point-to-point QKD. Recently, an intermediary QKD protocol called detector-device-independent QKD (DDI-QKD) has been proposed to overcome the drawbacks of MDI-QKD, with the hope that it would eventually lead to a more efficient detector side-channel-free QKD system. Here, we analyze the security of DDI-QKD and elucidate its security assumptions. We find that DDI-QKD is not equivalent to MDI-QKD, but its security can be demonstrated with reasonable assumptions. On the more practical side, we consider the feasibility of DDI-QKD and present a fast experimental demonstration (clocked at 625 MHz), capable of secret key exchange up to more than 90 km.
Completely device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Aguilar, Edgar A.; Ramanathan, Ravishankar; Kofler, Johannes; Pawłowski, Marcin
2016-08-01
Quantum key distribution (QKD) is a provably secure way for two distant parties to establish a common secret key, which then can be used in a classical cryptographic scheme. Using quantum entanglement, one can reduce the necessary assumptions that the parties have to make about their devices, giving rise to device-independent QKD (DIQKD). However, in all existing protocols to date the parties need to have an initial (at least partially) random seed as a resource. In this work, we show that this requirement can be dropped. Using recent advances in the fields of randomness amplification and randomness expansion, we demonstrate that it is sufficient for the message the parties want to communicate to be (partially) unknown to the adversaries—an assumption without which any type of cryptography would be pointless to begin with. One party can use her secret message to locally generate a secret sequence of bits, which can then be openly used by herself and the other party in a DIQKD protocol. Hence our work reduces the requirements needed to perform secure DIQKD and establish safe communication.
Trustworthiness of detectors in quantum key distribution with untrusted detectors
Qi, Bing
2015-02-25
Measurement-device-independent quantum key distribution (MDI-QKD) has been demonstrated as a viable solution to detector side-channel attacks. One of the main advantages of MDI-QKD is that its security can be proved without making any assumptions about how the measurement device works. The price to pay is the relatively low secure key rate compared with conventional quantum key distribution (QKD), such as the decoy-state BB84 protocol. Recently a new QKD protocol, aiming at bridging the strong security of MDI-QKD with the high efficiency of conventional QKD, has been proposed. In this protocol, the legitimate receiver employs a trusted linear optics network to encode information on photons received from an insecure quantum channel, and then performs a Bell state measurement (BSM) using untrusted detectors. One crucial assumption made in most of these studies is that the untrusted BSM located inside the receiver's laboratory cannot send any unwanted information to the outside. Here we show that if the BSM is completely untrusted, a simple scheme would allow the BSM to send information to the outside. Combined with Trojan horse attacks, this scheme could allow Eve to gain information about the quantum key without being detected. Ultimately, to prevent the above attack, either countermeasures to Trojan horse attacks or some trustworthiness of the "untrusted" BSM device is required.
A Stochastic Framework for Modeling the Population Dynamics of Convective Clouds
Hagos, Samson; Feng, Zhe; Plant, Robert S.; ...
2018-02-20
A stochastic prognostic framework for modeling the population dynamics of convective clouds and representing them in climate models is proposed. The framework follows the nonequilibrium statistical mechanical approach to constructing a master equation for representing the evolution of the number of convective cells of a specific size and their associated cloud-base mass flux, given a large-scale forcing. In this framework, referred to as STOchastic framework for Modeling Population dynamics of convective clouds (STOMP), the evolution of convective cell size is predicted from three key characteristics of convective cells: (i) the probability of growth, (ii) the probability of decay, and (iii) the cloud-base mass flux. STOMP models are constructed and evaluated against CPOL radar observations at Darwin and convection permitting model (CPM) simulations. Multiple models are constructed under various assumptions regarding these three key parameters and the realism of these models is evaluated. It is shown that in a model where convective plumes prefer to aggregate spatially and the cloud-base mass flux is a nonlinear function of convective cell area, the mass flux manifests a recharge-discharge behavior under steady forcing. Such a model also produces observed behavior of convective cell populations and CPM simulated cloud-base mass flux variability under diurnally varying forcing. Finally, in addition to its use in developing understanding of convection processes and the controls on convective cell size distributions, this modeling framework is also designed to serve as a nonequilibrium closure formulation for spectral mass flux parameterizations.
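A Monte Carlo caricature of the cell-population idea is given below: each cell grows, shrinks, or persists with fixed probabilities, new cells are triggered by a steady forcing, and cloud-base mass flux is a nonlinear function of cell area. The probabilities, growth factors, and flux exponent are assumed for illustration; this is not the STOMP implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal stochastic cell-population sketch (not the actual STOMP code).
p_grow, p_decay, birth_rate = 0.30, 0.35, 2.0      # assumed probabilities / new cells per step
steps = 200
areas = list(rng.uniform(1.0, 4.0, size=10))       # initial cell areas (km^2)
mass_flux = []

for _ in range(steps):
    new_areas = []
    for a in areas:
        u = rng.uniform()
        if u < p_grow:
            new_areas.append(a * 1.5)              # cell grows
        elif u < p_grow + p_decay:
            if a > 1.0:
                new_areas.append(a / 1.5)          # large cell shrinks instead of dying
        else:
            new_areas.append(a)                    # cell persists unchanged
    # new cells are triggered by the (steady) large-scale forcing
    new_areas += list(rng.uniform(1.0, 2.0, size=rng.poisson(birth_rate)))
    areas = new_areas
    mass_flux.append(sum(a ** 1.5 for a in areas)) # nonlinear area -> cloud-base mass flux

mf = np.array(mass_flux)
print(f"mean flux {mf.mean():.1f}, std {mf.std():.1f} (fluctuates under steady forcing)")
```

Even with constant forcing the total mass flux fluctuates as the population builds up and collapses, which is the qualitative recharge-discharge behavior the abstract describes.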
A Stochastic Framework for Modeling the Population Dynamics of Convective Clouds
NASA Astrophysics Data System (ADS)
Hagos, Samson; Feng, Zhe; Plant, Robert S.; Houze, Robert A.; Xiao, Heng
2018-02-01
A stochastic prognostic framework for modeling the population dynamics of convective clouds and representing them in climate models is proposed. The framework follows the nonequilibrium statistical mechanical approach to constructing a master equation for representing the evolution of the number of convective cells of a specific size and their associated cloud-base mass flux, given a large-scale forcing. In this framework, referred to as STOchastic framework for Modeling Population dynamics of convective clouds (STOMP), the evolution of convective cell size is predicted from three key characteristics of convective cells: (i) the probability of growth, (ii) the probability of decay, and (iii) the cloud-base mass flux. STOMP models are constructed and evaluated against CPOL radar observations at Darwin and convection permitting model (CPM) simulations. Multiple models are constructed under various assumptions regarding these three key parameters and the realism of these models is evaluated. It is shown that in a model where convective plumes prefer to aggregate spatially and the cloud-base mass flux is a nonlinear function of convective cell area, the mass flux manifests a recharge-discharge behavior under steady forcing. Such a model also produces observed behavior of convective cell populations and CPM simulated cloud-base mass flux variability under diurnally varying forcing. In addition to its use in developing understanding of convection processes and the controls on convective cell size distributions, this modeling framework is also designed to serve as a nonequilibrium closure formulation for spectral mass flux parameterizations.
A Stochastic Framework for Modeling the Population Dynamics of Convective Clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagos, Samson; Feng, Zhe; Plant, Robert S.
A stochastic prognostic framework for modeling the population dynamics of convective clouds and representing them in climate models is proposed. The approach used follows the non-equilibrium statistical mechanical approach through a master equation. The aim is to represent the evolution of the number of convective cells of a specific size and their associated cloud-base mass flux, given a large-scale forcing. In this framework, referred to as STOchastic framework for Modeling Population dynamics of convective clouds (STOMP), the evolution of convective cell size is predicted from three key characteristics: (i) the probability of growth, (ii) the probability of decay, and (iii) the cloud-base mass flux. STOMP models are constructed and evaluated against CPOL radar observations at Darwin and convection permitting model (CPM) simulations. Multiple models are constructed under various assumptions regarding these three key parameters and the realism of these models is evaluated. It is shown that in a model where convective plumes prefer to aggregate spatially and mass flux is a non-linear function of convective cell area, mass flux manifests a recharge-discharge behavior under steady forcing. Such a model also produces observed behavior of convective cell populations and CPM simulated mass flux variability under diurnally varying forcing. Besides its use in developing understanding of convection processes and the controls on convective cell size distributions, this modeling framework is also designed to be capable of providing alternative, non-equilibrium, closure formulations for spectral mass flux parameterizations.
Involvement of circadian clock in crowing of red jungle fowls (Gallus gallus).
Ito, Shuichi; Hori, Shuho; Hirose, Makiko; Iwahara, Mari; Yatsushiro, Azusa; Matsumoto, Atsushi; Tanaka, Masayuki; Okamoto, Chinobu; Yayou, Ken-Ichi; Shimmura, Tsuyoshi
2017-04-01
The rhythmic locomotor behavior of flies and mice provides a phenotype for the identification of clock genes, and the underlying molecular mechanism is well studied. However, interestingly, when examining locomotor rhythm in the wild, several key laboratory-based assumptions on circadian behavior are not supported in natural conditions. The rooster crowing 'cock-a-doodle-doo' is a symbol of the break of dawn in many countries. Previously, we used domestic inbred roosters and showed that the timing of roosters' crowing is regulated by the circadian clock under laboratory conditions. However, it is still unknown whether the regulation of crowing by the circadian clock is observed under natural conditions. Therefore, here we used red jungle fowls and first confirmed that they show crowing rhythms similar to those of domesticated chickens under laboratory conditions. Red jungle fowls show predawn crowing before light onset under 12:12 light : dim light conditions and a free-running rhythm of crowing under total dim light conditions. We next examined the crowing rhythms under semi-wild conditions. Although the crowing of red jungle fowls changed seasonally under semi-wild conditions, predawn crowing was observed before sunrise in all seasons. This evidence suggests that the seasonally changing crowing of red jungle fowls is under the control of a circadian clock. © 2016 Japanese Society of Animal Science.
Regularity Results for a Class of Functionals with Non-Standard Growth
NASA Astrophysics Data System (ADS)
Acerbi, Emilio; Mingione, Giuseppe
We consider the integral functional
NASA Astrophysics Data System (ADS)
Baisden, W. T.; Canessa, S.
2013-01-01
In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of 14C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ∼500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of 14C to determine residence times, by estimating the amount of ‘bomb 14C’ incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point 14C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C (‘passive fraction’), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.
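The two-time-point approach described above can be sketched with a one-pool model in which soil radiocarbon relaxes toward the atmospheric signal at a rate equal to the inverse of the residence time. Everything in the sketch is a placeholder: the atmospheric bomb curve is a crude synthetic bump, the two archived measurements are invented, and the single-pool assumption ignores the passive fraction and lag issues the abstract warns about.

```python
import numpy as np

# One-pool model: d(F_soil)/dt = k * (F_atm(t) - F_soil(t)), residence time = 1/k.
# The atmospheric Delta14C curve below is a crude synthetic placeholder for the
# Southern Hemisphere bomb spike, not a real record.
years = np.arange(1950, 2011)
atm = 700.0 * np.exp(-((years - 1965) / 12.0) ** 2) * (years >= 1955)   # per mil

def soil_d14c(k, obs_years):
    f = 0.0                                        # pre-bomb soil Delta14C (per mil), assumed
    out = {}
    for y, a in zip(years, atm):
        f += k * (a - f)                           # one-year explicit step
        if y in obs_years:
            out[y] = f
    return out

# Two hypothetical archived measurements of the same soil layer:
obs = {1970: 180.0, 1995: 120.0}

# Grid search for the turnover rate that best matches both time points.
ks = np.linspace(0.005, 0.5, 500)
err = [sum((soil_d14c(k, obs)[y] - v) ** 2 for y, v in obs.items()) for k in ks]
k_best = ks[int(np.argmin(err))]
print(f"best-fit residence time ~ {1.0 / k_best:.0f} years")
```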
Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J
2017-01-01
This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing us to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.
Linde, Klaus; Rücker, Gerta; Schneider, Antonius; Kriston, Levente
2016-03-01
We aimed to evaluate the underlying assumptions of a network meta-analysis investigating which depression treatment works best in primary care and to highlight challenges and pitfalls of interpretation under consideration of these assumptions. We reviewed 100 randomized trials investigating pharmacologic and psychological treatments for primary care patients with depression. Network meta-analysis was carried out within a frequentist framework using response to treatment as outcome measure. Transitivity was assessed by epidemiologic judgment based on theoretical and empirical investigation of the distribution of trial characteristics across comparisons. Homogeneity and consistency were investigated by decomposing the Q statistic. There were important clinical and statistically significant differences between "pure" drug trials comparing pharmacologic substances with each other or placebo (63 trials) and trials including a psychological treatment arm (37 trials). Overall network meta-analysis produced results well comparable with separate meta-analyses of drug trials and psychological trials. Although the homogeneity and consistency assumptions were mostly met, we considered the transitivity assumption unjustifiable. An exchange of experience between reviewers and, if possible, some guidance on how reviewers addressing important clinical questions can proceed in situations where important assumptions for valid network meta-analysis are not met would be desirable. Copyright © 2016 Elsevier Inc. All rights reserved.
A critical literature review of health economic evaluations of rotavirus vaccination
Aballéa, Samuel; Millier, Aurélie; Quilici, Sibilia; Caroll, Stuart; Petrou, Stavros; Toumi, Mondher
2013-01-01
Two licensed vaccines are available to prevent rotavirus gastroenteritis (RVGE) in infants. A worldwide critical review of economic evaluations of these vaccines was conducted. The objective was to describe differences in methodologies, assumptions and inputs and determine the key factors driving differences in conclusions. In total, 68 economic evaluations were reviewed. RV vaccination was found to be cost-effective in developing countries, while conclusions varied between studies in developed countries. Many studies found that vaccination was likely to be cost-effective under some scenarios, such as lower-price scenarios, inclusion of herd protection, and/or adoption of a societal perspective. Other reasons for variability included uncertainty around the incidence of healthcare visits and lack of consensus on quality of life (QoL) valuation for infants and caregivers. New evidence on real-world vaccination effectiveness, new ways of modeling herd protection and assessments of QoL in children could help more precisely define the conditions under which RV vaccination would be cost-effective in developed countries. PMID:23571226
Mertens, Nicole L; Russell, Bayden D; Connell, Sean D
2015-12-01
Ocean warming is anticipated to strengthen the persistence of turf-forming habitat, yet the concomitant elevation of grazer metabolic rates may accelerate per capita rates of consumption to counter turf predominance. Whilst this possibility of strong top-down control is supported by the metabolic theory of ecology (MTE), it assumes that consumer metabolism and consumption keep pace with increasing production. This assumption was tested by quantifying the metabolic rates of turfs and herbivorous gastropods under a series of elevated temperatures in which the ensuing production and consumption were observed. We discovered that as temperature increases towards near-future levels (year 2100), consumption rates of gastropods peak earlier than the rate of growth of producers. Hence, turfs have greater capacity to persist under near-future temperatures than the capacity for herbivores to counter their growth. These results suggest that whilst MTE predicts stronger top-down control, understanding whether consumer-producer responses are synchronous is key to assessing the future strength of top-down control.
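The asynchrony described above, with consumption peaking at a lower temperature than producer growth, can be illustrated with two simple thermal performance curves. The curve shapes, optima, and widths below are entirely hypothetical and are not fitted to the gastropod or turf data in the study.

```python
import numpy as np

def thermal_performance(t, t_opt, width, peak):
    """Simple Gaussian thermal performance curve (illustrative, not fitted to data)."""
    return peak * np.exp(-((t - t_opt) / width) ** 2)

temps = np.arange(18, 29)                          # degrees C, spanning present to ~2100 warming
consumption = thermal_performance(temps, t_opt=23.0, width=4.0, peak=1.0)   # grazer consumption
turf_growth = thermal_performance(temps, t_opt=27.0, width=5.0, peak=1.0)   # turf production

for t, c, g in zip(temps, consumption, turf_growth):
    tag = "top-down control weakens" if g > c else ""
    print(f"{t:2d} C  consumption {c:.2f}  production {g:.2f}  {tag}")
```

Once warming pushes temperatures past the consumers' optimum while producers are still accelerating, grazing can no longer keep pace with growth, which is the mechanism for turf persistence suggested in the abstract.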
Informing the NCA: EPA's Climate Change Impact and Risk Analysis Framework
NASA Astrophysics Data System (ADS)
Sarofim, M. C.; Martinich, J.; Kolian, M.; Crimmins, A. R.
2017-12-01
The Climate Change Impact and Risk Analysis (CIRA) framework is designed to quantify the physical impacts and economic damages in the United States under future climate change scenarios. To date, the framework has been applied to 25 sectors, using scenarios and projections developed for the Fourth National Climate Assessment. The strength of this framework has been in the use of consistent climatic, socioeconomic, and technological assumptions and inputs across the impact sectors to maximize the ease of cross-sector comparison. The results of the underlying CIRA sectoral analyses are informing the sustained assessment process by helping to address key gaps related to economic valuation and risk. Advancing capacity and scientific literature in this area has created opportunity to consider future applications and strengthening of the framework. This presentation will describe the CIRA framework, present results for various sectors such as heat mortality, air & water quality, winter recreation, and sea level rise, and introduce potential enhancements that can improve the utility of the framework for decision analysis.
Computational-Model-Based Analysis of Context Effects on Harmonic Expectancy.
Morimoto, Satoshi; Remijn, Gerard B; Nakajima, Yoshitaka
2016-01-01
Expectancy for an upcoming musical chord, harmonic expectancy, is supposedly based on automatic activation of tonal knowledge. Since previous studies implicitly relied on interpretations based on Western music theory, the underlying computational processes involved in harmonic expectancy and how it relates to tonality need further clarification. In particular, short chord sequences which cannot lead to unique keys are difficult to interpret in music theory. In this study, we examined effects of preceding chords on harmonic expectancy from a computational perspective, using stochastic modeling. We conducted a behavioral experiment, in which participants listened to short chord sequences and evaluated the subjective relatedness of the last chord to the preceding ones. Based on these judgments, we built stochastic models of the computational process underlying harmonic expectancy. Following this, we compared the explanatory power of the models. Our results imply that, even when listening to short chord sequences, internally constructed and updated tonal assumptions determine the expectancy of the upcoming chord.
Davis, Hayley; Ritchie, Euan G; Avitabile, Sarah; Doherty, Tim; Nimmo, Dale G
2018-04-01
Fire shapes the composition and functioning of ecosystems globally. In many regions, fire is actively managed to create diverse patch mosaics of fire-ages under the assumption that a diversity of post-fire-age classes will provide a greater variety of habitats, thereby enabling species with differing habitat requirements to coexist, and enhancing species diversity (the pyrodiversity begets biodiversity hypothesis). However, studies provide mixed support for this hypothesis. Here, using termite communities in a semi-arid region of southeast Australia, we test four key assumptions of the pyrodiversity begets biodiversity hypothesis (i) that fire shapes vegetation structure over sufficient time frames to influence species' occurrence, (ii) that animal species are linked to resources that are themselves shaped by fire and that peak at different times since fire, (iii) that species' probability of occurrence or abundance peaks at varying times since fire and (iv) that providing a diversity of fire-ages increases species diversity at the landscape scale. Termite species and habitat elements were sampled in 100 sites across a range of fire-ages, nested within 20 landscapes chosen to represent a gradient of low to high pyrodiversity. We used regression modelling to explore relationships between termites, habitat and fire. Fire affected two habitat elements (coarse woody debris and the cover of woody vegetation) that were associated with the probability of occurrence of three termite species and overall species richness, thus supporting the first two assumptions of the pyrodiversity hypothesis. However, this did not result in those species or species richness being affected by fire history per se. Consequently, landscapes with a low diversity of fire histories had similar numbers of termite species as landscapes with high pyrodiversity. Our work suggests that encouraging a diversity of fire-ages for enhancing termite species richness in this study region is not necessary.
FMRI group analysis combining effect estimates and their variances
Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Beauchamp, Michael S.; Cox, Robert W.
2012-01-01
Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the universal neuroimaging file transfer (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach practical. We recommend its use in lieu of the less accurate approach in the conventional group analysis. PMID:22245637
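The core idea of combining per-subject effect estimates with their variances can be sketched for a single voxel: estimate the cross-subject variance component and weight each subject by total precision, instead of treating every estimate as exact. The numbers below are hypothetical and the moment-based (DerSimonian-Laird-style) combination is only one simple instance of the mixed-effects idea, not the actual MEMA implementation.

```python
import numpy as np

# Hypothetical per-subject effect estimates (beta) and their within-subject variances
# for a single voxel; values are illustrative only.
beta = np.array([1.2, 0.8, 1.5, 0.3, 1.1, 0.9, 2.0, 0.7])
var_within = np.array([0.10, 0.25, 0.15, 0.40, 0.12, 0.20, 0.30, 0.18])

# Conventional approach: ignore within-subject variance, one-sample t-test on the betas.
t_conv = beta.mean() / (beta.std(ddof=1) / np.sqrt(beta.size))

# Mixed-effects style combination: estimate the cross-subject variance tau^2
# by the method of moments, then weight each subject by total precision.
w_fixed = 1.0 / var_within
q = np.sum(w_fixed * (beta - np.sum(w_fixed * beta) / w_fixed.sum()) ** 2)
tau2 = max(0.0, (q - (beta.size - 1)) /
           (w_fixed.sum() - np.sum(w_fixed ** 2) / w_fixed.sum()))
w = 1.0 / (var_within + tau2)
mu = np.sum(w * beta) / w.sum()
z = mu / np.sqrt(1.0 / w.sum())

print(f"conventional t = {t_conv:.2f}")
print(f"precision-weighted estimate = {mu:.2f}, z = {z:.2f}, tau^2 = {tau2:.3f}")
```

Subjects with noisy first-level estimates are down-weighted, which is how this style of analysis gains power and robustness when the conventional equal-variance assumption does not hold.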
NASA Technical Reports Server (NTRS)
Thronson, Harley; Carberry, Chris; Cassady, R. J.; Cooke, Doug; Hopkins, Joshua; Perino, Maria A.; Kirkpatrick, Jim; Raftery, Michael; Westenberg, Artemis; Zucker, Richard
2013-01-01
There is a growing consensus that within two decades initial human missions to Mars are affordable under plausible budget assumptions and with sustained international participation. In response to this idea, a distinguished group of experts from the Mars exploration stakeholder communities attended the "Affording Mars" workshop at George Washington University in December, 2013. Participants reviewed and discussed scenarios for affordable and sustainable human and robotic exploration of Mars, the role of the International Space Station over the coming decade as the essential early step toward humans to Mars, possible "bridge" missions in the 2020s, key capabilities required for affordable initial missions, international partnerships, and a usable definition of affordability and sustainability. We report here the findings, observations, and recommendations that were agreed to at that workshop.
Pandora, Katherine
2009-01-01
The antebellum years in the United States were marked by vigorous debates about national identity in which issues of hierarchy, authority, and democratic values came under intense scrutiny. During this period, a prime objective of indigenous authors writing for American children was educating the young so they would be ready to assume their republican responsibilities. The question of how depictions and discussions about nature and science were deployed toward this end is explored by examining key texts about nature and science from the era's two most prolific and popular children's authors--Samuel Griswold Goodrich (1793-1860) and Jacob Abbott (1803-79)--and highlighting assumptions within these works about what the proper relationship should be between the search for scientific knowledge and the larger polity.
NASA Technical Reports Server (NTRS)
Divito, Ben L.; Butler, Ricky W.; Caldwell, James L.
1990-01-01
A high-level design is presented for a reliable computing platform for real-time control applications. Design tradeoffs and analyses related to the development of the fault-tolerant computing platform are discussed. The architecture is formalized and shown to satisfy a key correctness property. The reliable computing platform uses replicated processors and majority voting to achieve fault tolerance. Under the assumption of a majority of processors working in each frame, it is shown that the replicated system computes the same results as a single processor system not subject to failures. Sufficient conditions are obtained to establish that the replicated system recovers from transient faults within a bounded amount of time. Three different voting schemes are examined and proved to satisfy the bounded recovery time conditions.
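The majority-voting step can be sketched very simply: in each frame every replica computes on the same input and a voter accepts the value returned by a strict majority. The function names and the fault injection below are hypothetical; this is an illustration of the voting principle, not the platform's verified design.

```python
from collections import Counter
from typing import Callable, List, Optional

def run_frame(replicas: List[Callable[[int], int]], sensor_input: int) -> Optional[int]:
    """One control frame: every replica computes on the same input and the
    voter returns the majority value, or None if no strict majority exists."""
    outputs = [r(sensor_input) for r in replicas]
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(replicas) // 2 else None

# Three replicas of the same control law; one suffers a transient fault this frame.
control_law = lambda x: 2 * x + 1
faulty = lambda x: 2 * x + 1 + 40            # transient corruption of the output (assumed)
replicas = [control_law, control_law, faulty]

print(run_frame(replicas, 10))               # 21: the two healthy replicas out-vote the faulty one
```

This mirrors the abstract's key assumption: as long as a majority of processors are working in each frame, the voted output equals that of a single fault-free processor.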
Falsification Testing of Instrumental Variables Methods for Comparative Effectiveness Research.
Pizer, Steven D
2016-04-01
The objective is to demonstrate how falsification tests can be used to evaluate instrumental variables methods applicable to a wide variety of comparative effectiveness research questions. A brief conceptual review of instrumental variables and falsification testing principles and techniques is accompanied by an empirical application; sample STATA code related to the empirical application is provided in the Appendix. The application examines comparative long-term risks of sulfonylureas and thiazolidinediones for management of type 2 diabetes. Outcomes include mortality and hospitalization for an ambulatory care-sensitive condition. Prescribing pattern variations are used as instrumental variables. Falsification testing is an easily computed and powerful way to evaluate the validity of the key assumption underlying instrumental variables analysis. If falsification tests are used, instrumental variables techniques can help answer a multitude of important clinical questions. © Health Research and Educational Trust.
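The article's Appendix supplies Stata code; as a hedged sketch of the same idea in Python (variable names and the subsample definition are hypothetical), one common falsification check regresses the outcome on the instrument within a subsample where the treatment contrast should be irrelevant, so that a strong association flags a likely violation of the exclusion restriction:

import pandas as pd
import statsmodels.formula.api as smf

def falsification_test(df: pd.DataFrame) -> float:
    # Restrict to a group in which the instrument should not affect the outcome
    # (e.g., patients whose treatment choice is clinically fixed), then test the
    # instrument-outcome association, adjusting for observed covariates.
    subsample = df[df["falsification_group"] == 1]
    fit = smf.ols("outcome ~ instrument + age + comorbidity_score", data=subsample).fit()
    return fit.pvalues["instrument"]  # a small p-value casts doubt on instrument validity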
NASA Astrophysics Data System (ADS)
Ko, Heasin; Choi, Byung-Seok; Choe, Joong-Seon; Youn, Chun Ju
2018-01-01
Even though the unconditional security of the B92 quantum key distribution (QKD) system is based on the assumption of perfect positive-operator-valued measures, practical B92 systems utilize only two projective measurements. Unfortunately, such an implementation may degrade the security of the B92 QKD system due to Eve's potential attack exploiting the imperfections of the system. In this paper, we propose an advanced attack strategy with an unambiguous state discrimination (USD) measurement which makes practical B92 QKD systems insecure even under a lossless channel. In addition, we propose an effective countermeasure against the advanced USD attack model by monitoring double-click events. We further address a fundamental approach to making the B92 QKD system tolerant of attack strategies with USD measurements using a multi-qubit scheme.
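For context, the power of a USD strategy rests on a standard result (stated here as background, not taken from the paper): for two pure states prepared with equal probability, optimal unambiguous discrimination succeeds with probability

\[
P_{\mathrm{USD}} = 1 - |\langle \psi_0 | \psi_1 \rangle|,
\]

so a fraction of the non-orthogonal B92 signal states can be identified without any error, and it is these conclusive outcomes that the proposed attack strategy exploits.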
Transonic Flow Computations Using Nonlinear Potential Methods
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Kwak, Dochan (Technical Monitor)
2000-01-01
This presentation describes the state of transonic flow simulation using nonlinear potential methods for external aerodynamic applications. The presentation begins with a review of the various potential equation forms (with emphasis on the full potential equation) and includes a discussion of pertinent mathematical characteristics and all derivation assumptions. The impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, is discussed. Key characteristics of all numerical algorithm types used for solving nonlinear potential equations, including steady, unsteady, space marching, and design methods, are described. Both spatial discretization and iteration scheme characteristics are examined. Numerical results for various aerodynamic applications are included throughout the presentation to highlight key discussion points. The presentation ends with concluding remarks and recommendations for future work. Overall, nonlinear potential solvers are efficient, highly developed and routinely used in the aerodynamic design environment for cruise conditions. Published by Elsevier Science Ltd. All rights reserved.
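For reference (restated here as standard background rather than quoted from the presentation), the steady full potential equation in conservative form, valid under the inviscid, irrotational, isentropic assumptions discussed above, can be written as

\[
\nabla \cdot (\rho \nabla \Phi) = 0, \qquad
\rho = \rho_\infty \left[ 1 + \frac{\gamma - 1}{2} M_\infty^2 \left( 1 - \frac{|\nabla \Phi|^2}{U_\infty^2} \right) \right]^{1/(\gamma - 1)},
\]

where \(\Phi\) is the velocity potential; the isentropic density relation is what limits shock-wave capture to weak, nearly normal shocks.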
Involuntary Memories and Dissociative Amnesia: Assessing Key Assumptions in PTSD Research.
Berntsen, Dorthe; Rubin, David C
2014-03-01
Autobiographical memories of trauma victims are often described as disturbed in two ways. First, the trauma is frequently re-experienced in the form of involuntary, intrusive recollections. Second, the trauma is difficult to recall voluntarily (strategically); important parts may be totally or partially inaccessible, a feature known as dissociative amnesia. These characteristics are often mentioned by PTSD researchers and are included as PTSD symptoms in the DSM-IV-TR (American Psychiatric Association, 2000). In contrast, we show that both involuntary and voluntary recall are enhanced by emotional stress during encoding. We also show that the PTSD symptom in the diagnosis addressing dissociative amnesia, "trouble remembering important aspects of the trauma," is less well correlated with the remaining PTSD symptoms than the conceptual reversal of having "trouble forgetting important aspects of the trauma." Our findings contradict key assumptions that have shaped PTSD research over the last 40 years.
Involuntary Memories and Dissociative Amnesia: Assessing Key Assumptions in PTSD Research
Berntsen, Dorthe; Rubin, David C.
2014-01-01
Autobiographical memories of trauma victims are often described as disturbed in two ways. First, the trauma is frequently re-experienced in the form of involuntary, intrusive recollections. Second, the trauma is difficult to recall voluntarily (strategically); important parts may be totally or partially inaccessible—a feature known as dissociative amnesia. These characteristics are often mentioned by PTSD researchers and are included as PTSD symptoms in the DSM-IV-TR (American Psychiatric Association, 2000). In contrast, we show that both involuntary and voluntary recall are enhanced by emotional stress during encoding. We also show that the PTSD symptom in the diagnosis addressing dissociative amnesia, "trouble remembering important aspects of the trauma," is less well correlated with the remaining PTSD symptoms than the conceptual reversal of having "trouble forgetting important aspects of the trauma." Our findings contradict key assumptions that have shaped PTSD research over the last 40 years. PMID:25309832
The Effect of Sample Duration and Cue on a Double Temporal Discrimination
ERIC Educational Resources Information Center
Oliveira, Luis; Machado, Armando
2008-01-01
To test the assumptions of two models of timing, Scalar Expectancy Theory (SET) and Learning to Time (LeT), nine pigeons were exposed to two temporal discriminations, each signaled by a different cue. On half of the trials, pigeons learned to choose a red key after a 1.5-s horizontal bar and a green key after a 6-s horizontal bar; on the other…
Review of Emerging Resources: U.S. Shale Gas and Shale Oil Plays
2011-01-01
To gain a better understanding of the potential U.S. domestic shale gas and shale oil resources, the Energy Information Administration (EIA) commissioned INTEK, Inc. to develop an assessment of onshore lower 48 states technically recoverable shale gas and shale oil resources. This paper briefly describes the scope, methodology, and key results of the report and discusses the key assumptions that underlie the results.
Effect of source tampering in the security of quantum cryptography
NASA Astrophysics Data System (ADS)
Sun, Shi-Hai; Xu, Feihu; Jiang, Mu-Sheng; Ma, Xiang-Chun; Lo, Hoi-Kwong; Liang, Lin-Mei
2015-08-01
The security of the source has become an increasingly important issue in quantum cryptography. Based on the framework of measurement-device-independent quantum key distribution (MDI-QKD), the source becomes the only region exploitable by a potential eavesdropper (Eve). Phase randomization is a cornerstone assumption in most discrete-variable (DV) quantum communication protocols (e.g., QKD, quantum coin tossing, weak-coherent-state blind quantum computing, and so on), and the violation of such an assumption is thus fatal to the security of those protocols. In this paper, we show a simple quantum hacking strategy, with commercial and homemade pulsed lasers, by Eve that allows her to actively tamper with the source and violate such an assumption, without leaving a trace afterwards. Furthermore, our attack may also be valid for continuous-variable (CV) QKD, which is another main class of QKD protocol, since, beyond the phase randomization assumption, other parameters (e.g., intensity) that directly determine the security of CV-QKD could also be changed.
Guo, Shien; Getsios, Denis; Hernandez, Luis; Cho, Kelly; Lawler, Elizabeth; Altincatal, Arman; Lanes, Stephan; Blankenburg, Michael
2012-01-01
The growing understanding of the use of biomarkers in Alzheimer's disease (AD) may enable physicians to make more accurate and timely diagnoses. Florbetaben, a beta-amyloid tracer used with positron emission tomography (PET), is one of these diagnostic biomarkers. This analysis was undertaken to explore the potential value of florbetaben PET in the diagnosis of AD among patients with suspected dementia and to identify key data that are needed to further substantiate its value. A discrete event simulation was developed to conduct exploratory analyses from both US payer and societal perspectives. The model simulates the lifetime course of disease progression for individuals, evaluating the impact of their patient management from initial diagnostic work-up to final diagnosis. Model inputs were obtained from specific analyses of a large longitudinal dataset from the New England Veterans Healthcare System and supplemented with data from public data sources and assumptions. The analyses indicate that florbetaben PET has the potential to improve patient outcomes and reduce costs under certain scenarios. Key data on the use of florbetaben PET, such as its influence on time to confirmation of final diagnosis, treatment uptake, and treatment persistency, are unavailable and would be required to confirm its value. PMID:23326754
2016-03-24
McCarthy, Blood Meridian. 1.1 General Issue: Violent conflict between competing groups has been a pervasive and driving force for all of human history... It has evolved from small skirmishes between unarmed groups, wielding rudimentary weapons, to industrialized global conflagrations. Global... methodology is presented in Figure 2 (Figure 2: Study Methodology). 1.6 Study Assumptions and Limitations: Four underlying assumptions were
Precision Targeting: Filling the Gap
2013-05-20
using drones were often sensitive and controversial, they were used relentlessly in many key provinces (like Kandahar and Helmand) and because they... War. Westport, CT: Praeger Security International, 2006. Curry, Peter. "Small Wars are Local: Debunking Current Assumptions about Countering Small
Teaching Practices: Reexamining Assumptions.
ERIC Educational Resources Information Center
Spodek, Bernard, Ed.
This publication contains eight papers, selected from papers presented at the Bicentennial Conference on Early Childhood Education, that discuss different aspects of teaching practices. The first two chapters reexamine basic assumptions underlying the organization of curriculum experiences for young children. Chapter 3 discusses the need to…
Code of Federal Regulations, 2014 CFR
2014-07-01
... interest rate means the rate of interest applicable to underpayments of guaranteed benefits by the PBGC... of proof of death, individuals not located are presumed living. Missing participant annuity assumptions means the interest rate assumptions and actuarial methods for valuing benefits under § 4044.52 of...
Code of Federal Regulations, 2010 CFR
2010-07-01
... interest rate means the rate of interest applicable to underpayments of guaranteed benefits by the PBGC... of proof of death, individuals not located are presumed living. Missing participant annuity assumptions means the interest rate assumptions and actuarial methods for valuing benefits under § 4044.52 of...
Code of Federal Regulations, 2011 CFR
2011-07-01
... interest rate means the rate of interest applicable to underpayments of guaranteed benefits by the PBGC... of proof of death, individuals not located are presumed living. Missing participant annuity assumptions means the interest rate assumptions and actuarial methods for valuing benefits under § 4044.52 of...
Code of Federal Regulations, 2013 CFR
2013-07-01
... interest rate means the rate of interest applicable to underpayments of guaranteed benefits by the PBGC... of proof of death, individuals not located are presumed living. Missing participant annuity assumptions means the interest rate assumptions and actuarial methods for valuing benefits under § 4044.52 of...
Code of Federal Regulations, 2012 CFR
2012-07-01
... interest rate means the rate of interest applicable to underpayments of guaranteed benefits by the PBGC... of proof of death, individuals not located are presumed living. Missing participant annuity assumptions means the interest rate assumptions and actuarial methods for valuing benefits under § 4044.52 of...
Quantum Private Comparison of Equality Based on Five-Particle Cluster State
NASA Astrophysics Data System (ADS)
Chang, Yan; Zhang, Wen-Bo; Zhang, Shi-Bin; Wang, Hai-Chun; Yan, Li-Li; Han, Gui-Hua; Sheng, Zhi-Wei; Huang, Yuan-Yuan; Suo, Wang; Xiong, Jin-Xin
2016-12-01
A protocol for quantum private comparison of equality (QPCE) is proposed based on a five-particle cluster state with the help of a semi-honest third party (TP). In our protocol, TP is allowed to misbehave on its own but cannot conspire with either of the two parties. Compared with most two-user QPCE protocols, our protocol can not only compare two groups of private information (each group has two users) in one execution, but also compare just two pieces of private information. Compared with the previously proposed multi-user QPCE protocol, our protocol is safer, with more reasonable assumptions about TP. The qubit efficiency is computed and analyzed. Our protocol can also be generalized to the case of 2N participants with one TP. The 2N-participant protocol can compare two groups (each group has N pieces of private information) in one execution, or just N pieces of private information. Supported by NSFC under Grant Nos. 61402058, 61572086, the Fund for Middle and Young Academic Leaders of CUIT under Grant No. J201511, the Science and Technology Support Project of Sichuan Province of China under Grant No. 2013GZX0137, the Fund for Young Persons Project of Sichuan Province of China under Grant No. 12ZB017, and the Foundation of Cyberspace Security Key Laboratory of Sichuan Higher Education Institutions under Grant No. szjj2014-074
NASA Astrophysics Data System (ADS)
Medlyn, B.; Jiang, M.; Zaehle, S.
2017-12-01
There is now ample experimental evidence that the response of terrestrial vegetation to rising atmospheric CO2 concentration is modified by soil nutrient availability. How to represent nutrient cycling processes is thus a key consideration for vegetation models. We have previously used model intercomparison to demonstrate that models incorporating different assumptions predict very different responses at Free-Air CO2 Enrichment experiments. Careful examination of model outputs has provided some insight into the reasons for the different model outcomes, but it is difficult to attribute outcomes to specific assumptions. Here we investigate the impact of individual assumptions in a generic plant carbon-nutrient cycling model. The G'DAY (Generic Decomposition And Yield) model is modified to incorporate alternative hypotheses for nutrient cycling. We analyse the impact of these assumptions in the model using a simple analytical approach known as "two-timing". This analysis identifies the quasi-equilibrium behaviour of the model at the time scales of the component pools. The analysis provides a useful mathematical framework for probing model behaviour and identifying the most critical assumptions for experimental study.
Fairman, Kathleen A; Motheral, Brenda R
2003-01-01
Pharmacoeconomic models of Helicobacter (H) pylori eradication have been frequently cited but never validated. The objective was to examine retrospectively whether H pylori pharmacoeconomic models direct decision makers to cost-effective therapeutic choices. We first replicated and then validated 2 models, replacing model assumptions with empirical data from a multipayer claims database. Database subjects were 435 commercially insured U.S. patients treated with bismuth-metronidazole-tetracycline (BMT), proton pump inhibitor (PPI)-clarithromycin, or PPI-amoxicillin. Patients met >1 clinical requirement (ulcer disease, gastritis/duodenitis, stomach function disorder, abdominal pain, H pylori infection, endoscopy, or H pylori assay). Sensitivity analyses included only patients with ulcer diagnosis or gastrointestinal specialist care. Outcome measures were: (1) rates of eradication retreatment; (2) use of office visits, hospitalizations, endoscopies, and antisecretory medication; and (3) cost per effectively treated (nonretreated) patient. Model results overstated the cost-effectiveness of PPI-clarithromycin and underestimated the cost-effectiveness of BMT. Prior to empirical adjustment, costs per effectively treated patient were 1,001 US dollars, 980 US dollars, and 1,730 US dollars for BMT, PPI-clarithromycin, and PPI-amoxicillin, respectively. Estimates after adjustment were US dollars for BMT, 1,118 US dollars for PPI-clarithromycin, and 1,131 US dollars for PPI-amoxicillin. Key model assumptions that proved retrospectively incorrect were largely unsupported by either empirical evidence or systematic assessment of expert opinion. Organizations with access to medical and pharmacy claims databases should test key assumptions of influential models to determine their validity. Journal peer-review processes should pay particular attention to the basis of model assumptions.
Are Assumptions of Well-Known Statistical Techniques Checked, and Why (Not)?
Hoekstra, Rink; Kiers, Henk A. L.; Johnson, Addie
2012-01-01
A valid interpretation of most statistical techniques requires that one or more assumptions be met. In published articles, however, little information tends to be reported on whether the data satisfy the assumptions underlying the statistical techniques used. This could be due to self-selection: Only manuscripts with data fulfilling the assumptions are submitted. Another explanation could be that violations of assumptions are rarely checked for in the first place. We studied whether and how 30 researchers checked fictitious data for violations of assumptions in their own working environment. Participants were asked to analyze the data as they would their own data, for which often used and well-known techniques such as the t-procedure, ANOVA and regression (or non-parametric alternatives) were required. It was found that the assumptions of the techniques were rarely checked, and that if they were, it was regularly by means of a statistical test. Interviews afterward revealed a general lack of knowledge about assumptions, the robustness of the techniques with regards to the assumptions, and how (or whether) assumptions should be checked. These data suggest that checking for violations of assumptions is not a well-considered choice, and that the use of statistics can be described as opportunistic. PMID:22593746
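As a minimal illustration of what such checks can look like in practice (hypothetical data; the study itself concerned researchers' behavior, not any particular software), the assumptions of a two-sample t-test can be probed with a few lines of Python:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)
group_b = rng.normal(loc=11.0, scale=2.0, size=30)

# Normality within each group (Shapiro-Wilk) and homogeneity of variance (Levene).
print("Shapiro A p =", stats.shapiro(group_a).pvalue)
print("Shapiro B p =", stats.shapiro(group_b).pvalue)
print("Levene p   =", stats.levene(group_a, group_b).pvalue)

# Graphical diagnostics (e.g., Q-Q plots of residuals) are often more informative
# than the significance tests above, a concern echoed in the interviews reported here.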
NASA Astrophysics Data System (ADS)
Mignone, B. K.
2008-12-01
Effective solutions to the climate change problem will require unprecedented cooperation across space, continuity across time and coordination between disciplines. One well-known methodology for synthesizing the lessons of physical science, energy engineering and economics is integrated assessment. Typically, integrated assessment models use scientific and technological relationships as physical constraints in a larger macroeconomic optimization that is designed to either balance the costs and benefits of climate change mitigation or find the least-cost path to an exogenously prescribed endpoint (e.g. atmospheric CO2 stabilization). The usefulness of these models depends to a large extent on the quality of the assumptions and the relevance of the outcome metrics chosen by the user. In this study, I show how a scientifically-based emissions reduction scenario can be combined with engineering-based assumptions about the energy system (e.g. estimates of the marginal cost premium of carbon-free technology) to yield insights about the price path of CO2 under a future regulatory regime. I then show how this outcome metric (carbon price) relates to key decisions about the design of a future cap-and-trade system and the way in which future carbon markets may be regulated.
The Fusion Gain Analysis of the Inductively Driven Liner Compression Based Fusion
NASA Astrophysics Data System (ADS)
Shimazu, Akihisa; Slough, John
2016-10-01
An analytical treatment of the fusion gain expected in inductively driven liner compression (IDLC) based fusion is conducted to identify the fusion gain scaling at various operating conditions. Fusion based on the IDLC is a magneto-inertial fusion concept, in which a Field-Reversed Configuration (FRC) plasmoid is compressed via an inductively driven metal liner to drive the FRC to fusion conditions. In the past, an approximate scaling law for the expected fusion gain of IDLC based fusion was obtained under the key assumptions of (1) D-T fuel at 5-40 keV, (2) adiabatic scaling laws for the FRC dynamics, (3) FRC energy dominated by the pressure balance with the edge magnetic field at peak compression, and (4) a liner dwell time equal to the liner final diameter divided by the peak liner velocity. In this study, various assumptions made in the previous derivation are relaxed to study the change in the fusion gain scaling from the previous result of $G \propto m_l^{1/2} E_l^{11/8}$, where $m_l$ is the liner mass and $E_l$ is the peak liner kinetic energy. The implications of the modified fusion gain scaling for the performance of the IDLC fusion reactor system are also explored.
The Red Queen lives: Epistasis between linked resistance loci.
Metzger, César M J A; Luijckx, Pepijn; Bento, Gilberto; Mariadassou, Mahendra; Ebert, Dieter
2016-02-01
A popular theory explaining the maintenance of genetic recombination (sex) is the Red Queen Theory. This theory revolves around the idea that time-lagged negative frequency-dependent selection by parasites favors rare host genotypes generated through recombination. Although the Red Queen has been studied for decades, one of its key assumptions has remained unsupported. The signature host-parasite specificity underlying the Red Queen, where infection depends on a match between host and parasite genotypes, relies on epistasis between linked resistance loci for which no empirical evidence exists. We performed 13 genetic crosses and tested over 7000 Daphnia magna genotypes for resistance to two strains of the bacterial pathogen Pasteuria ramosa. Results reveal the presence of strong epistasis between three closely linked resistance loci. One locus masks the expression of the other two, while these two interact to produce a single resistance phenotype. Changing a single allele on one of these interacting loci can reverse resistance against the tested parasites. Such a genetic mechanism is consistent with host and parasite specificity assumed by the Red Queen Theory. These results thus provide evidence for a fundamental assumption of this theory and provide a genetic basis for understanding the Red Queen dynamics in the Daphnia-Pasteuria system. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
ASP-G: an ASP-based method for finding attractors in genetic regulatory networks
Mushthofa, Mushthofa; Torres, Gustavo; Van de Peer, Yves; Marchal, Kathleen; De Cock, Martine
2014-01-01
Motivation: Boolean network models are suitable for simulating GRNs in the absence of detailed kinetic information. However, reducing the biological reality implies making assumptions on how genes interact (interaction rules) and how their state is updated during the simulation (update scheme). The exact choice of the assumptions largely determines the outcome of the simulations. In most cases, however, the biologically correct assumptions are unknown. An ideal simulation thus implies testing different rules and schemes to determine those that best capture an observed biological phenomenon. This is not trivial because most current methods to simulate Boolean network models of GRNs and to compute their attractors impose specific assumptions that cannot be easily altered, as they are built into the system. Results: To allow for a more flexible simulation framework, we developed ASP-G. We show the correctness of ASP-G in simulating Boolean network models and obtaining attractors under different assumptions by successfully recapitulating the detection of attractors of previously published studies. We also provide an example of how performing simulation of network models under different settings helps determine the assumptions under which a certain conclusion holds. The main added value of ASP-G is in its modularity and declarativity, making it more flexible and less error-prone than traditional approaches. The declarative nature of ASP-G comes at the expense of being slower than the more dedicated systems but still achieves a good efficiency with respect to computational time. Availability and implementation: The source code of ASP-G is available at http://bioinformatics.intec.ugent.be/kmarchal/Supplementary_Information_Musthofa_2014/asp-g.zip. Contact: Kathleen.Marchal@UGent.be or Martine.DeCock@UGent.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028722
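ASP-G itself is built on Answer Set Programming; purely as a toy illustration of the underlying task (hypothetical three-gene rules, synchronous updates, not ASP-G code), the attractors of a small Boolean network can be enumerated exhaustively in Python:

from itertools import product

# Hypothetical interaction rules: each entry maps the current state to one gene's next value.
rules = [
    lambda s: s[1] and not s[2],   # gene 0
    lambda s: s[0],                # gene 1
    lambda s: s[0] or s[1],        # gene 2
]

def step(state):
    # Synchronous update scheme: every gene is updated at once.
    return tuple(int(rule(state)) for rule in rules)

def attractor_from(state):
    # Iterate until a state repeats; the repeated segment is the attractor.
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    return frozenset(seen[seen.index(state):])

attractors = {attractor_from(s) for s in product((0, 1), repeat=len(rules))}
print(attractors)  # attractors (fixed points and/or cycles), each given as a set of states

Changing the update scheme (e.g., to asynchronous updates) or the interaction rules can change which attractors are found, which is exactly the sensitivity the ASP-G framework is designed to explore.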
Template protection and its implementation in 3D face recognition systems
NASA Astrophysics Data System (ADS)
Zhou, Xuebing
2007-04-01
As biometric recognition systems are widely applied in various application areas, security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and enhance the security of biometric systems against attacks such as identity theft and cross matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error correction coding and biometrics. The key component of the algorithm is to convert biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent and collision-free so that authentication performance can be optimized and information leakage can be avoided. Depending on the statistical character of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. It is shown that the resulting binary vectors provide an authentication performance that is similar to the original 3D face templates. A high security level is achieved with reasonable false acceptance and false rejection rates of the system, based on an efficient statistical analysis. The algorithm estimates the statistical character of biometric templates from a number of biometric samples in the enrollment database. For the FRGC 3D face database, our tests show only a small difference in robustness and discriminative power between the classification results obtained under the assumption of uniformly distributed templates and those obtained under the assumption of Gaussian distributed templates.
Robustness of risk maps and survey networks to knowledge gaps about a new invasive pest.
Yemshanov, Denys; Koch, Frank H; Ben-Haim, Yakov; Smith, William D
2010-02-01
In pest risk assessment it is frequently necessary to make management decisions regarding emerging threats under severe uncertainty. Although risk maps provide useful decision support for invasive alien species, they rarely address knowledge gaps associated with the underlying risk model or how they may change the risk estimates. Failure to recognize uncertainty leads to risk-ignorant decisions and miscalculation of expected impacts as well as the costs required to minimize these impacts. Here we use the information gap concept to evaluate the robustness of risk maps to uncertainties in key assumptions about an invading organism. We generate risk maps with a spatial model of invasion that simulates potential entries of an invasive pest via international marine shipments, their spread through a landscape, and establishment on a susceptible host. In particular, we focus on the question of how much uncertainty in risk model assumptions can be tolerated before the risk map loses its value. We outline this approach with an example of a forest pest recently detected in North America, Sirex noctilio Fabricius. The results provide a spatial representation of the robustness of predictions of S. noctilio invasion risk to uncertainty and show major geographic hotspots where the consideration of uncertainty in model parameters may change management decisions about a new invasive pest. We then illustrate how the dependency between the extent of uncertainties and the degree of robustness of a risk map can be used to select a surveillance network design that is most robust to knowledge gaps about the pest.
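In info-gap notation (sketched here from the general theory as an assumption about the form used, not quoted from the paper), the robustness of a survey or management decision $q$ with performance requirement $r_c$ is the greatest horizon of uncertainty $h$ that still guarantees acceptable performance for every model in the uncertainty set $U(h)$:

\[
\hat{h}(q, r_c) = \max \left\{ h \ge 0 \; : \; \min_{u \in U(h)} R(q, u) \ge r_c \right\}.
\]

A risk map or surveillance design with larger $\hat{h}$ tolerates larger knowledge gaps about the invader before its recommendations become misleading.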
NASA Astrophysics Data System (ADS)
Dietze, Michael; Fuchs, Margret; Kreutzer, Sebastian
2016-04-01
Many modern approaches to radiometric dating or geochemical fingerprinting rely on sampling sedimentary deposits. A key assumption of most concepts is that the extracted grain-size fraction of the sampled sediment adequately represents the actual process to be dated or the source area to be fingerprinted. However, these assumptions are not always well constrained. Rather, they have to align with arbitrary, method-determined size intervals, such as "coarse grain" or "fine grain", which are sometimes even defined differently. Such arbitrary intervals violate principal process-based concepts of sediment transport and can thus introduce significant bias to the analysis outcome (i.e., a deviation of the measured from the true value). We present a flexible numerical framework (numOlum) for the statistical programming language R that allows quantifying the bias due to any given analysis size interval for different types of sediment deposits. This framework is applied to synthetic samples from the realms of luminescence dating and geochemical fingerprinting, i.e. a virtual reworked loess section. We show independent validation data from artificially dosed and subsequently mixed grain-size proportions and we present a statistical approach (end-member modelling analysis, EMMA) that allows accounting for the effect of measuring the compound dosimetric history or geochemical composition of a sample. EMMA separates polymodal grain-size distributions into the underlying transport process-related distributions and their contribution to each sample. These underlying distributions can then be used to adjust grain-size preparation intervals to minimise the incorporation of "undesired" grain-size fractions.
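Dedicated EMMA implementations exist for R; purely as a conceptual sketch of the unmixing step (hypothetical data and parameters, not the authors' numOlum code), a generic non-negative factorization separates measured grain-size distributions into end-members and per-sample contributions:

import numpy as np
from sklearn.decomposition import NMF

# X: rows = samples, columns = grain-size classes; each row is a measured distribution.
X = np.abs(np.random.default_rng(1).random((20, 50)))
X = X / X.sum(axis=1, keepdims=True)

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
scores = model.fit_transform(X)      # contribution of each end-member to each sample
end_members = model.components_      # grain-size distribution of each end-member

# Row-normalising `end_members` (and rescaling `scores` accordingly) makes the
# loadings interpretable as proportions attributable to distinct transport processes.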
An Analysis of the Economic Assumptions Underlying Fiscal Plans FY1981 - FY1984.
1986-06-01
An Analysis of the Economic Assumptions Underlying Fiscal Plans FY1981 - FY1984, by Robert Welch Beck, June 1986. Thesis Advisor: P. M. Carrick. Approved for public release; distribution is unlimited.
Device-independent security of quantum cryptography against collective attacks.
Acín, Antonio; Brunner, Nicolas; Gisin, Nicolas; Massar, Serge; Pironio, Stefano; Scarani, Valerio
2007-06-08
We present the optimal collective attack on a quantum key distribution protocol in the "device-independent" security scenario, where no assumptions are made about the way the quantum key distribution devices work or on what quantum system they operate. Our main result is a tight bound on the Holevo information between one of the authorized parties and the eavesdropper, as a function of the amount of violation of a Bell-type inequality.
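For reference, the bound reported in that work is usually quoted in the following form (restated here from the standard literature, so treat the exact expression as an assumption rather than a quotation): with CHSH violation $S$ and quantum bit error rate $Q$, the Devetak-Winter key rate under collective attacks satisfies

\[
r \ge 1 - h(Q) - \chi(B{:}E), \qquad
\chi(B{:}E) \le h\!\left( \frac{1 + \sqrt{(S/2)^2 - 1}}{2} \right),
\]

where $h$ is the binary entropy function; no assumption about the internal working of the devices enters beyond the observed values of $S$ and $Q$.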
Intergenerational resource transfers with random offspring numbers
Arrow, Kenneth J.; Levin, Simon A.
2009-01-01
A problem common to biology and economics is the transfer of resources from parents to children. We consider the issue under the assumption that the number of offspring is unknown and can be represented as a random variable. There are 3 basic assumptions. The first assumption is that a given body of resources can be divided into consumption (yielding satisfaction) and transfer to children. The second assumption is that the parents' welfare includes a concern for the welfare of their children; this is recursive in the sense that the children's welfares include concern for their children and so forth. However, the welfare of a child from a given consumption is counted somewhat differently (generally less) than that of the parent (the welfare of a child is “discounted”). The third assumption is that resources transferred may grow (or decline). In economic language, investment, including that in education or nutrition, is productive. Under suitable restrictions, precise formulas for the resulting allocation of resources are found, demonstrating that, depending on the shape of the utility curve, uncertainty regarding the number of offspring may or may not favor increased consumption. The results imply that wealth (stock of resources) will ultimately have a log-normal distribution. PMID:19617553
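Under assumed notation (not necessarily the authors' exact formulation), the three assumptions can be written as a recursion: a parent with resources $w$ chooses consumption $c$, the remainder grows by a factor $r$ and is divided equally among a random number $N$ of children, and each child's welfare is discounted by $\beta$:

\[
V(w) = \max_{0 \le c \le w} \Big\{ u(c) + \beta \, \mathbb{E}_N \big[ N \, V\!\big( r (w - c) / N \big) \big] \Big\}.
\]

Whether uncertainty in $N$ pushes the optimal $c$ up or down then depends on the curvature of $u$, which is the comparison the paper works out explicitly.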
Assumptions Underlying the Use of Different Types of Simulations.
ERIC Educational Resources Information Center
Cunningham, J. Barton
1984-01-01
Clarifies appropriateness of certain simulation approaches by distinguishing between different types of simulations--experimental, predictive, evaluative, and educational--on the basis of purpose, assumptions, procedures, and criteria for evaluating. The kinds of questions each type best responds to are discussed. (65 references) (MBR)
ERIC Educational Resources Information Center
ROSS, JOHN ROBERT
This analysis of underlying syntactic structure is based on the assumption that the parts of speech called "verbs" and "adjectives" are two subcategories of one major lexical category, "predicate." From this assumption, the hypothesis is advanced that, in languages exhibiting the copula, the deep structure of sentences containing predicate…
24 CFR 58.4 - Assumption authority.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., decision-making, and action that would otherwise apply to HUD under NEPA and other provisions of law that... environmental review, decision-making and action for programs authorized by the Native American Housing... separate decision regarding assumption of responsibilities for each of these Acts and communicate that...
THE MODELING OF THE FATE AND TRANSPORT OF ENVIRONMENTAL POLLUTANTS
Current models that predict the fate of organic compounds released to the environment are based on the assumption that these compounds exist exclusively as neutral species. This assumption is untrue under many environmental conditions, as some molecules can exist as cations, anio...
Designer policy for carbon and biodiversity co-benefits under global change
NASA Astrophysics Data System (ADS)
Bryan, Brett A.; Runting, Rebecca K.; Capon, Tim; Perring, Michael P.; Cunningham, Shaun C.; Kragt, Marit E.; Nolan, Martin; Law, Elizabeth A.; Renwick, Anna R.; Eber, Sue; Christian, Rochelle; Wilson, Kerrie A.
2016-03-01
Carbon payments can help mitigate both climate change and biodiversity decline through the reforestation of agricultural land. However, to achieve biodiversity co-benefits, carbon payments often require support from other policy mechanisms such as regulation, targeting, and complementary incentives. We evaluated 14 policy mechanisms for supplying carbon and biodiversity co-benefits through reforestation of carbon plantings (CP) and environmental plantings (EP) in Australia’s 85.3 Mha agricultural land under global change. The reference policy--uniform payments (bidders are paid the same price) with land-use competition (both CP and EP eligible for payments), targeting carbon--achieved significant carbon sequestration but negligible biodiversity co-benefits. Land-use regulation (only EP eligible) and two additional incentives complementing the reference policy (biodiversity premium, carbon levy) increased biodiversity co-benefits, but mostly inefficiently. Discriminatory payments (bidders are paid their bid price) with land-use competition were efficient, and with multifunctional targeting of both carbon and biodiversity co-benefits increased the biodiversity co-benefits almost 100-fold. Our findings were robust to uncertainty in global outlook, and to key agricultural productivity and land-use adoption assumptions. The results suggest clear policy directions, but careful mechanism design will be key to realising these efficiencies in practice. Choices remain for society about the amount of carbon and biodiversity co-benefits desired, and the price it is prepared to pay for them.
Quantum attack-resistent certificateless multi-receiver signcryption scheme.
Li, Huixian; Chen, Xubao; Pang, Liaojun; Shi, Weisong
2013-01-01
The existing certificateless signcryption schemes were designed mainly based on the traditional public key cryptography, in which the security relies on the hard problems, such as factor decomposition and discrete logarithm. However, these problems will be easily solved by the quantum computing. So the existing certificateless signcryption schemes are vulnerable to the quantum attack. Multivariate public key cryptography (MPKC), which can resist the quantum attack, is one of the alternative solutions to guarantee the security of communications in the post-quantum age. Motivated by these concerns, we proposed a new construction of the certificateless multi-receiver signcryption scheme (CLMSC) based on MPKC. The new scheme inherits the security of MPKC, which can withstand the quantum attack. Multivariate quadratic polynomial operations, which have lower computation complexity than bilinear pairing operations, are employed in signcrypting a message for a certain number of receivers in our scheme. Security analysis shows that our scheme is a secure MPKC-based scheme. We proved its security under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption in the random oracle model. The analysis results show that our scheme also has the security properties of non-repudiation, perfect forward secrecy, perfect backward secrecy and public verifiability. Compared with the existing schemes in terms of computation complexity and ciphertext length, our scheme is more efficient, which makes it suitable for terminals with low computation capacity like smart cards.
Handling Conflict in the Work Environment.
ERIC Educational Resources Information Center
Brewer, Ernest W.
1997-01-01
Discussion of workplace conflict management examines erroneous assumptions inherent in traditional reaction patterns, considers key elements of planning for conflict prevention, and some workplace strategies to help minimize conflicts. Several approaches to conflict management, and their outcomes, are highlighted, and stages of the…
Modeling Imperfect Generator Behavior in Power System Operation Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krad, Ibrahim
A key component in power system operations is the use of computer models to quickly study and analyze different operating conditions and futures in an efficient manner. The output of these models is sensitive to the data used in them as well as the assumptions made during their execution. One typical assumption is that generators and load assets perfectly follow operator control signals. While this is a valid simulation assumption, generators may not always accurately follow control signals. This imperfect response of generators could impact cost and reliability metrics. This paper proposes a generator model that captures this imperfect behavior and examines its impact on production costs and reliability metrics using a steady-state power system operations model. Preliminary analysis shows that while costs remain relatively unchanged, there could be significant impacts on reliability metrics.
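A hedged, minimal sketch of the idea (hypothetical parameters and names, not the report's actual model): a generator tracks its dispatch signal through a ramp-rate limit plus a random response error, instead of following it perfectly:

import numpy as np

def imperfect_response(setpoints, ramp_limit_mw=5.0, error_sd_mw=1.0, seed=0):
    # Follow the control signal subject to a per-interval ramp limit and response noise.
    rng = np.random.default_rng(seed)
    output = [float(setpoints[0])]
    for target in setpoints[1:]:
        move = np.clip(target - output[-1], -ramp_limit_mw, ramp_limit_mw)
        output.append(output[-1] + move + rng.normal(0.0, error_sd_mw))
    return np.array(output)

signal = np.array([100.0, 110.0, 120.0, 115.0, 105.0, 100.0])
print(imperfect_response(signal))  # deviates from `signal`, unlike a perfect follower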
A challenge to lepton universality in B-meson decays
Ciezarek, Gregory; Franco Sevilla, Manuel; Hamilton, Brian; ...
2017-06-07
One of the key assumptions of the standard model of particle physics is that the interactions of the charged leptons, namely electrons, muons and taus, differ only because of their different masses. Whereas precision tests comparing processes involving electrons and muons have not revealed any definite violation of this assumption, recent studies of B-meson decays involving the higher-mass tau lepton have resulted in observations that challenge lepton universality at the level of four standard deviations. A confirmation of these results would point to new particles or interactions, and could have profound implications for our understanding of particle physics.
Dental Education: Trends and Assumptions for the 21st Century
Sinkford, Jeanne C.
1987-01-01
Dental educational institutions, as components of university systems, must develop strategic plans for program development, resource allocation, evaluation, and continued financial support. This dynamic process will be accomplished in a competitive academic arena where program excellence and program relevance are key issues in the game of survival. This article focuses on issues and trends that form the basis for planning assumptions and initiatives into the next decade and into the 21st century. This is our challenge, this is our mission if we are to be catalysts for change in the future. PMID:3560255
NASA Technical Reports Server (NTRS)
Sanger, Eugen
1932-01-01
A method is presented for approximate static calculation, which is based on the customary assumption of rigid ribs, while taking into account the systematic errors in the calculation results due to this arbitrary assumption. The procedure is given in greater detail for semicantilever and cantilever wings with polygonal spar plan form and for wings under direct loading only. The last example illustrates the advantages of the use of influence lines for such wing structures and their practical interpretation.
NASA Astrophysics Data System (ADS)
Sun, Shi-Hai; Liang, Lin-Mei
2012-08-01
Phase randomization is a very important assumption in the BB84 quantum key distribution (QKD) system with a weak coherent source; otherwise, an eavesdropper may spy on the final key. In this Letter, a stable and monitored active phase randomization scheme for the one-way and two-way QKD system is proposed and demonstrated in experiments. Furthermore, our scheme gives an easy way for Alice to monitor the degree of randomization in experiments. Therefore, we expect our scheme to become a standard part of future QKD systems due to its security significance and feasibility.
Victoor, Aafke; Friele, Roland D; Delnoij, Diana M J; Rademakers, Jany J D J M
2012-12-03
In the Netherlands in 2006, a health insurance system reform took place in which regulated competition between insurers and providers is key. In this context, the government placed greater emphasis on patients being able to choose health insurers and providers as a precondition for competition. Patient choice became an instrument instead of solely a goal in itself. In the current study, we investigated the concept of 'patient choice' of healthcare providers, as postulated in the supporting documentation for this reform, because we wanted to try to understand the assumptions policy makers had regarding patient choice of healthcare providers. We searched policy documents for assumptions made by policy makers about patient choice of healthcare providers that underlie the health insurance system reform. Additionally, we held interviews with people who were involved in or closely followed the reform. Our study shows that the government paid much more attention to the instrumental goal of patient choice. Patients are assumed to be able to choose a provider rationally if a number of conditions are satisfied, e.g. the availability of enough comparative information. To help ensure those conditions were met, the Dutch government and other parties implemented a variety of supporting instruments. Various instruments have been put in place to ensure that patients can act as consumers on the healthcare market. Much less attention has been paid to the willingness and ability of patients to choose, i.e. choice as a value. There was also relatively little attention paid to the consequences on equity of outcomes if some patient groups are less inclined or able to choose actively.
Smith, Andrew M; Wells, Gary L; Lindsay, R C L; Penrod, Steven D
2017-04-01
Receiver Operating Characteristic (ROC) analysis has recently come in vogue for assessing the underlying discriminability and the applied utility of lineup procedures. Two primary assumptions underlie recommendations that ROC analysis be used to assess the applied utility of lineup procedures: (a) ROC analysis of lineups measures underlying discriminability, and (b) the procedure that produces superior underlying discriminability produces superior applied utility. These same assumptions underlie a recently derived diagnostic-feature detection theory, a theory of discriminability, intended to explain recent patterns observed in ROC comparisons of lineups. We demonstrate, however, that these assumptions are incorrect when ROC analysis is applied to lineups. We also demonstrate that a structural phenomenon of lineups, differential filler siphoning, and not the psychological phenomenon of diagnostic-feature detection, explains why lineups are superior to showups and why fair lineups are superior to biased lineups. In the process of our proofs, we show that computational simulations have assumed, unrealistically, that all witnesses share exactly the same decision criteria. When criterial variance is included in computational models, differential filler siphoning emerges. The result proves dissociation between ROC curves and underlying discriminability: Higher ROC curves for lineups than for showups and for fair than for biased lineups despite no increase in underlying discriminability. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
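As a toy illustration of the argument (a sketch under assumed parameter values, not the authors' computational model), the following simulation gives each witness an individual decision criterion; with fillers present, many picks that would otherwise fall on an innocent suspect are siphoned to fillers even though underlying discriminability is unchanged:

import numpy as np

rng = np.random.default_rng(42)

def suspect_id_rates(n_witnesses=100_000, n_fillers=5, d_prime=1.5,
                     crit_mean=1.0, crit_sd=0.5):
    rates = {}
    for target_present in (True, False):
        # Column 0 is the suspect; remaining columns are fillers (innocent by design).
        strengths = rng.normal(0.0, 1.0, size=(n_witnesses, n_fillers + 1))
        if target_present:
            strengths[:, 0] += d_prime
        criteria = rng.normal(crit_mean, crit_sd, size=n_witnesses)  # criterial variance
        picked = strengths.max(axis=1) > criteria
        chose_suspect = picked & (strengths.argmax(axis=1) == 0)
        rates[target_present] = chose_suspect.mean()
    return rates

print("lineup :", suspect_id_rates())             # fillers siphon innocent-suspect picks
print("showup :", suspect_id_rates(n_fillers=0))  # no fillers, so nothing is siphoned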
Koopmeiners, Joseph S; Hobbs, Brian P
2018-05-01
Randomized, placebo-controlled clinical trials are the gold standard for evaluating a novel therapeutic agent. In some instances, it may not be considered ethical or desirable to complete a placebo-controlled clinical trial and, instead, the placebo is replaced by an active comparator with the objective of showing either superiority or non-inferiority to the active comparator. In a non-inferiority trial, the experimental treatment is considered non-inferior if it retains a pre-specified proportion of the effect of the active comparator as represented by the non-inferiority margin. A key assumption required for valid inference in the non-inferiority setting is the constancy assumption, which requires that the effect of the active comparator in the non-inferiority trial is consistent with the effect that was observed in previous trials. It has been shown that violations of the constancy assumption can result in a dramatic increase in the rate of incorrectly concluding non-inferiority in the presence of ineffective or even harmful treatment. In this paper, we illustrate how Bayesian hierarchical modeling can be used to facilitate multi-source smoothing of the data from the current trial with the data from historical studies, enabling direct probabilistic evaluation of the constancy assumption. We then show how this result can be used to adapt the non-inferiority margin when the constancy assumption is violated and present simulation results illustrating that our method controls the type-I error rate when the constancy assumption is violated, while retaining the power of the standard approach when the constancy assumption holds. We illustrate our adaptive procedure using a non-inferiority trial of raltegravir, an antiretroviral drug for the treatment of HIV.
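To make the retention idea concrete (under assumed notation, not the authors' exact parameterization): if historical trials estimate the active comparator's effect over placebo as $\Delta_{AC} > 0$ and the trial must show that the experimental arm retains at least a fraction $f$ of that effect, the margin and hypotheses for a larger-is-better outcome are typically

\[
\delta = (1 - f)\,\Delta_{AC}, \qquad
H_0: \mu_E - \mu_{AC} \le -\delta \quad \text{versus} \quad H_1: \mu_E - \mu_{AC} > -\delta.
\]

The constancy assumption is what licenses carrying the historical $\Delta_{AC}$ into the new trial; the Bayesian hierarchical approach described here instead smooths the historical and current data and lets $\delta$ adapt when that assumption looks doubtful.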
Koopmeiners, Joseph S.; Hobbs, Brian P.
2016-01-01
Randomized, placebo-controlled clinical trials are the gold standard for evaluating a novel therapeutic agent. In some instances, it may not be considered ethical or desirable to complete a placebo-controlled clinical trial and, instead, the placebo is replaced by an active comparator (AC) with the objective of showing either superiority or non-inferiority to the AC. In a non-inferiority trial, the experimental treatment is considered non-inferior if it retains a pre-specified proportion of the effect of the AC as represented by the non-inferiority margin. A key assumption required for valid inference in the non-inferiority setting is the constancy assumption, which requires that the effect of the AC in the non-inferiority trial is consistent with the effect that was observed in previous trials. It has been shown that violations of the constancy assumption can result in a dramatic increase in the rate of incorrectly concluding non-inferiority in the presence of ineffective or even harmful treatment. In this paper, we illustrate how Bayesian hierarchical modeling can be used to facilitate multi-source smoothing of the data from the current trial with the data from historical studies, enabling direct probabilistic evaluation of the constancy assumption. We then show how this result can be used to adapt the non-inferiority margin when the constancy assumption is violated and present simulation results illustrating that our method controls the type-I error rate when the constancy assumption is violated, while retaining the power of the standard approach when the constancy assumption holds. We illustrate our adaptive procedure using a non-inferiority trial of raltegravir, an antiretroviral drug for the treatment of HIV. PMID:27587591
ERIC Educational Resources Information Center
Webb, P. Taylor
2014-01-01
This article places Michel Foucault's concept of "problematization" in relation to educational policy research. My goal is to examine a key assumption of policy related to "solving problems" through such technologies. I discuss the potential problematization has to alter conceptions of policy research; and, through this…
On the Use of Rank Tests and Estimates in the Linear Model.
1982-06-01
assumption A5, McKean and Hettmansperger (1976) show that $\hat{\tau}_w \doteq (W_{(N-c)} - W_{(c+1)})/(2 Z_{\alpha/2})$ (14), where $2 Z_{\alpha/2}$ is the $1-\alpha$ interpercentile range of the standard... $r_{(.75n)} - r_{(.25n)}$ (13). The window width h incorporates a resistant estimate of scale, the interquartile range of the residuals, and a normalizing... an alternative estimate of $\tau$ is available with the additional assumption of symmetry of the error distribution. ASSUMPTION A5: Suppose the underlying error
Fourier's law of heat conduction: quantum mechanical master equation analysis.
Wu, Lian-Ao; Segal, Dvira
2008-06-01
We derive the macroscopic Fourier's Law of heat conduction from the exact gain-loss time convolutionless quantum master equation under three assumptions for the interaction kernel. To second order in the interaction, we show that the first two assumptions are natural results of the long time limit. The third assumption can be satisfied by a family of interactions consisting of an exchange effect. The pure exchange model directly leads to energy diffusion in a weakly coupled spin-1/2 chain.
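For reference, the macroscopic law being derived is (standard form, stated here as background)

\[
\mathbf{J} = -\kappa \, \nabla T,
\]

so that for a chain of $N$ sites held between temperatures $T_L$ and $T_R$, normal (diffusive) transport corresponds to a steady-state heat current scaling as $J \propto (T_L - T_R)/N$.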
Insecurity of Detector-Device-Independent Quantum Key Distribution.
Sajeed, Shihan; Huang, Anqi; Sun, Shihai; Xu, Feihu; Makarov, Vadim; Curty, Marcos
2016-12-16
Detector-device-independent quantum key distribution (DDI-QKD) held the promise of being robust to detector side channels, a major security loophole in quantum key distribution (QKD) implementations. In contrast to what has been claimed, however, we demonstrate that the security of DDI-QKD is not based on postselected entanglement, and we introduce various eavesdropping strategies that show that DDI-QKD is in fact insecure against detector side-channel attacks as well as against other attacks that exploit imperfections of the receiver's devices. Our attacks are valid even when the QKD apparatuses are built by the legitimate users of the system themselves and are thus free of malicious modifications, which is a key assumption in DDI-QKD.
Reality check: the role of realism in stress reduction using media technology.
de Kort, Y A W; Ijsselsteijn, W A
2006-04-01
There is a growing interest in the use of virtual and other mediated environments for therapeutic purposes. However, in the domain of restorative environments, virtual reality (VR) technology has hardly been used. Here the tendency has been to use mediated real environments, striving for maximum visual realism. This use of photographic material is mainly based on research on aesthetic judgments that has demonstrated the validity of this type of simulation as a representation of real environments. Thus, restoration therapy is developing under the untested assumption that photorealistic images have the optimal level of realism, while in therapeutic applications 'experiential realism' seems to be the key rather than visual realism. The present paper discusses this contrast and briefly describes data from three studies aimed at exploring the importance and meaning of realism in the context of restorative environments.
Ontology Extraction Tools: An Empirical Study with Educators
ERIC Educational Resources Information Center
Hatala, M.; Gasevic, D.; Siadaty, M.; Jovanovic, J.; Torniai, C.
2012-01-01
Recent research in Technology-Enhanced Learning (TEL) demonstrated several important benefits that semantic technologies can bring to the TEL domain. An underlying assumption for most of these research efforts is the existence of a domain ontology. The second unspoken assumption follows that educators will build domain ontologies for their…
Extracurricular Business Planning Competitions: Challenging the Assumptions
ERIC Educational Resources Information Center
Watson, Kayleigh; McGowan, Pauric; Smith, Paul
2014-01-01
Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…
Hybrid Approaches and Industrial Applications of Pattern Recognition,
1980-10-01
emphasized that the probability distribution in (9) is correct only under the assumption that P(w|x) is known exactly. In practice this assumption will... sufficient precision. The alternative would be to take the probability distribution of estimates of P(w|x) into account in the analysis. However, from the
Diagnostic tools for nearest neighbors techniques when used with satellite imagery
Ronald E. McRoberts
2009-01-01
Nearest neighbors techniques are non-parametric approaches to multivariate prediction that are useful for predicting both continuous and categorical forest attribute variables. Although some assumptions underlying nearest neighbor techniques are common to other prediction techniques such as regression, other assumptions are unique to nearest neighbor techniques....
Ontological, Epistemological and Methodological Assumptions: Qualitative versus Quantitative
ERIC Educational Resources Information Center
Ahmed, Abdelhamid
2008-01-01
The review to follow is a comparative analysis of two studies conducted in the field of TESOL in Education published in "TESOL QUARTERLY." The aspects to be compared are as follows. First, a brief description of each study will be presented. Second, the ontological, epistemological and methodological assumptions underlying each study…
Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.
ERIC Educational Resources Information Center
Gleason, John M.
1993-01-01
This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)
Shattering the Glass Ceiling: Women in School Administration.
ERIC Educational Resources Information Center
Patterson, Jean A.
Consistent with national trends, white males hold the majority of public school administrator positions in North Carolina. This paper examines the barriers and underlying assumptions that have prevented women and minorities from gaining access to high-level positions in educational administration. These include: (1) the assumption that leadership…
Transferring Goods or Splitting a Resource Pool
ERIC Educational Resources Information Center
Dijkstra, Jacob; Van Assen, Marcel A. L. M.
2008-01-01
We investigated the consequences for exchange outcomes of the violation of an assumption underlying most social psychological research on exchange. This assumption is that the negotiated direct exchange of commodities between two actors (pure exchange) can be validly represented as two actors splitting a fixed pool of resources (split pool…
Preparing Democratic Education Leaders
ERIC Educational Resources Information Center
Young, Michelle D.
2010-01-01
Although it is common to hear people espouse the importance of education to ensuring a strong and vibrant democracy, the assumptions underlying such statements are rarely unpacked. Two of the most widespread, though not necessarily complementary, assumptions include: (1) to truly participate in a democracy, citizens must be well educated; and (2)…
Commentary on Coefficient Alpha: A Cautionary Tale
ERIC Educational Resources Information Center
Green, Samuel B.; Yang, Yanyun
2009-01-01
The general use of coefficient alpha to assess reliability should be discouraged on a number of grounds. The assumptions underlying coefficient alpha are unlikely to hold in practice, and violation of these assumptions can result in nontrivial negative or positive bias. Structural equation modeling was discussed as an informative process both to…
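For reference, the coefficient under discussion has the standard form (this presentation is not quoted from the article):

\[
\alpha \;=\; \frac{k}{k-1}\left(1-\frac{\sum_{i=1}^{k}\sigma_{i}^{2}}{\sigma_{X}^{2}}\right),
\qquad X=\sum_{i=1}^{k}X_{i},
\]

where $k$ is the number of items, $\sigma_i^{2}$ the item variances and $\sigma_X^{2}$ the variance of the total score. Alpha equals the reliability of $X$ only for (essentially) tau-equivalent items with uncorrelated errors, which is the assumption the authors argue rarely holds in practice.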
Timber value - a matter of choice: a study of how end use assumptions affect timber values.
John H. Beuter
1971-01-01
The relationship between estimated timber values and actual timber prices is discussed. Timber values are related to how, where, and when the timber is used. An analysis demonstrates the relative values of a typical Douglas-fir stand under assumptions about timber use.
Mexican-American Cultural Assumptions and Implications.
ERIC Educational Resources Information Center
Carranza, E. Lou
The search for presuppositions of a people's thought is not new. Octavio Paz and Samuel Ramos have both attempted to describe the assumptions underlying the Mexican character. Paz described Mexicans as private, defensive, and stoic, characteristics taken to the extreme in the "pachuco." Ramos, on the other hand, described Mexicans as…
Operant conditioning of autobiographical memory retrieval.
Debeer, Elise; Raes, Filip; Williams, J Mark G; Craeynest, Miet; Hermans, Dirk
2014-01-01
Functional avoidance is considered one of the key mechanisms underlying overgeneral autobiographical memory (OGM). According to this view OGM is regarded as a learned cognitive avoidance strategy, based on principles of operant conditioning; i.e., individuals learn to avoid the emotionally painful consequences associated with the retrieval of specific negative memories. The aim of the present study was to test one of the basic assumptions of the functional avoidance account, namely that autobiographical memory retrieval can be brought under operant control. Here 41 students were instructed to retrieve personal memories in response to 60 emotional cue words. Depending on the condition, they were punished with an aversive sound for the retrieval of specific or nonspecific memories in an operant conditioning procedure. Analyses showed that the course of memory specificity significantly differed between conditions. After the procedure participants punished for nonspecific memories retrieved significantly more specific memories compared to participants punished for specific memories. However, whereas memory specificity significantly increased in participants punished for nonspecific memories, it did not significantly decrease in participants punished for specific memories. Thus, while our findings indicate that autobiographical memory retrieval can be brought under operant control, they do not support a functional avoidance view on OGM.
Assume-Guarantee Verification of Source Code with Design-Level Assumptions
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.
2004-01-01
Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.
Improving the use of crop models for risk assessment and climate change adaptation.
Challinor, Andrew J; Müller, Christoph; Asseng, Senthold; Deva, Chetan; Nicklin, Kathryn Jane; Wallach, Daniel; Vanuytrecht, Eline; Whitfield, Stephen; Ramirez-Villegas, Julian; Koehler, Ann-Kristin
2018-01-01
Crop models are used for an increasingly broad range of applications, with a commensurate proliferation of methods. Careful framing of research questions and development of targeted and appropriate methods are therefore increasingly important. In conjunction with the other authors in this special issue, we have developed a set of criteria for use of crop models in assessments of impacts, adaptation and risk. Our analysis drew on the other papers in this special issue, and on our experience in the UK Climate Change Risk Assessment 2017 and the MACSUR, AgMIP and ISIMIP projects. The criteria were used to assess how improvements could be made to the framing of climate change risks, and to outline the good practice and new developments that are needed to improve risk assessment. Key areas of good practice include: i. the development, running and documentation of crop models, with attention given to issues of spatial scale and complexity; ii. the methods used to form crop-climate ensembles, which can be based on model skill and/or spread; iii. the methods used to assess adaptation, which need broadening to account for technological development and to reflect the full range of options available. The analysis highlights the limitations of focussing only on projections of future impacts and adaptation options using pre-determined time slices. Whilst this long-standing approach may remain an essential component of risk assessments, we identify three further key components: 1. Working with stakeholders to identify the timing of risks. What are the key vulnerabilities of food systems and what does crop-climate modelling tell us about when those systems are at risk? 2. Use of multiple methods that critically assess the use of climate model output and avoid any presumption that analyses should begin and end with gridded output. 3. Increasing transparency and inter-comparability in risk assessments. Whilst studies frequently produce ranges that quantify uncertainty, the assumptions underlying these ranges are not always clear. We suggest that the contingency of results upon assumptions is made explicit via a common uncertainty reporting format; and/or that studies are assessed against a set of criteria, such as those presented in this paper.
Walker, Anthony P.; Zaehle, Sönke; Medlyn, Belinda E.; ...
2015-04-27
Large uncertainty exists in model projections of the land carbon (C) sink response to increasing atmospheric CO2. Free-Air CO2 Enrichment (FACE) experiments lasting a decade or more have investigated ecosystem responses to a step change in atmospheric CO2 concentration. To interpret FACE results in the context of gradual increases in atmospheric CO2 over decades to centuries, we used a suite of seven models to simulate the Duke and Oak Ridge FACE experiments extended for 300 years of CO2 enrichment. We also determine key modeling assumptions that drive divergent projections of terrestrial C uptake and evaluate whether these assumptions can be constrained by experimental evidence. All models simulated increased terrestrial C pools resulting from CO2 enrichment, though there was substantial variability in quasi-equilibrium C sequestration and rates of change. In two of two models that assume that plant nitrogen (N) uptake is solely a function of soil N supply, the net primary production response to elevated CO2 became progressively N limited. In four of five models that assume that N uptake is a function of both soil N supply and plant N demand, elevated CO2 led to reduced ecosystem N losses and thus progressively relaxed nitrogen limitation. Many allocation assumptions resulted in increased wood allocation relative to leaves and roots which reduced the vegetation turnover rate and increased C sequestration. Additionally, self-thinning assumptions had a substantial impact on C sequestration in two models. As a result, accurate representation of N process dynamics (in particular N uptake), allocation, and forest self-thinning is key to minimizing uncertainty in projections of future C sequestration in response to elevated atmospheric CO2.
Tian, Guo-Liang; Li, Hui-Qiong
2017-08-01
Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap hypothesis testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.
Parks, Sean A; McKelvey, Kevin S; Schwartz, Michael K
2013-02-01
The importance of movement corridors for maintaining connectivity within metapopulations of wild animals is a cornerstone of conservation. One common approach for determining corridor locations is least-cost corridor (LCC) modeling, which uses algorithms within a geographic information system to search for routes with the lowest cumulative resistance between target locations on a landscape. However, the presentation of multiple LCCs that connect multiple locations generally assumes all corridors contribute equally to connectivity, regardless of the likelihood that animals will use them. Thus, LCCs may overemphasize seldom-used longer routes and underemphasize more frequently used shorter routes. We hypothesize that, depending on conservation objectives and available biological information, weighting individual corridors on the basis of species-specific movement, dispersal, or gene flow data may better identify effective corridors. We tested whether locations of key connectivity areas, defined as the highest 75th and 90th percentile cumulative weighted value of approximately 155,000 corridors, shift under different weighting scenarios. In addition, we quantified the amount and location of private land that intersect key connectivity areas under each weighting scheme. Some areas that appeared well connected when analyzed with unweighted corridors exhibited much less connectivity compared with weighting schemes that discount corridors with large effective distances. Furthermore, the amount and location of key connectivity areas that intersected private land varied among weighting schemes. We believe biological assumptions and conservation objectives should be explicitly incorporated to weight corridors when assessing landscape connectivity. These results are highly relevant to conservation planning because on the basis of recent interest by government agencies and nongovernmental organizations in maintaining and enhancing wildlife corridors, connectivity will likely be an important criterion for prioritization of land purchases and swaps. ©2012 Society for Conservation Biology.
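One minimal way to operationalize the weighting idea, assuming a negative-exponential dispersal kernel; the kernel form, the parameter alpha, and the function names are illustrative and not taken from the study:

```python
import numpy as np

def cumulative_weighted_corridors(corridor_stack, effective_distances, alpha=50.0):
    """Sum a stack of corridor rasters, discounting each corridor by a
    negative-exponential function of its effective (cost) distance.

    corridor_stack: array of shape (n_corridors, rows, cols), 1 inside a corridor.
    effective_distances: one effective distance per corridor.
    alpha: distance at which a corridor's weight falls to 1/e (illustrative).
    """
    w = np.exp(-np.asarray(effective_distances) / alpha)          # one weight per corridor
    return np.tensordot(w, np.asarray(corridor_stack), axes=1)    # weighted sum per cell

# Unweighted least-cost corridor mapping is the special case w = 1 for every corridor.
```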
Sri Bhashyam, Sumitra; Montibeller, Gilberto
2016-04-01
A key objective for policymakers and analysts dealing with terrorist threats is trying to predict the actions that malicious agents may take. A recent trend in counterterrorism risk analysis is to model the terrorists' judgments, as these will guide their choices of such actions. The standard assumptions in most of these models are that terrorists are fully rational, following all the normative desiderata required for rational choices, such as having a set of constant and ordered preferences, being able to perform a cost-benefit analysis of their alternatives, among many others. However, are such assumptions reasonable from a behavioral perspective? In this article, we analyze the types of assumptions made across various counterterrorism analytical models that represent malicious agents' judgments and discuss their suitability from a descriptive point of view. We then suggest how some of these assumptions could be modified to describe terrorists' preferences more accurately, by drawing knowledge from the fields of behavioral decision research, politics, philosophy of choice, public choice, and conflict management in terrorism. Such insight, we hope, might help make the assumptions of these models more behaviorally valid for counterterrorism risk analysis.
Collins, Scott F.; Marcarelli, Amy M.; Baxter, Colden V.; Wipfli, Mark S.
2015-01-01
We critically evaluate some of the key ecological assumptions underpinning the use of nutrient replacement as a means of recovering salmon populations and a range of other organisms thought to be linked to productive salmon runs. These assumptions include: (1) nutrient mitigation mimics the ecological roles of salmon, (2) mitigation is needed to replace salmon-derived nutrients and stimulate primary and invertebrate production in streams, and (3) food resources in rearing habitats limit populations of salmon and resident fishes. First, we call into question assumption one because an array of evidence points to the multi-faceted role played by spawning salmon, including disturbance via redd-building, nutrient recycling by live fish, and consumption by terrestrial consumers. Second, we show that assumption two may require qualification based upon a more complete understanding of nutrient cycling and productivity in streams. Third, we evaluate the empirical evidence supporting food limitation of fish populations and conclude it has been only weakly tested. On the basis of this assessment, we urge caution in the application of nutrient mitigation as a management tool. Although applications of nutrients and other materials intended to mitigate for lost or diminished runs of Pacific salmon may trigger ecological responses within treated ecosystems, contributions of these activities toward actual mitigation may be limited.
Configuration Management, Capacity Planning Decision Support, Modeling and Simulation
1988-12-01
flow includes both top-down and bottom-up requirements. The flow also includes hardware, software and transfer acquisition, installation, operation ... management and upgrade as required. Satisfaction of a user's needs and requirements is a difficult and detailed process. The key assumptions at this
Estimating Lake Volume from Limited Data: A Simple GIS Approach
Lake volume provides key information for estimating residence time or modeling pollutants. Methods for calculating lake volume have relied on dated technologies (e.g. planimeters) or used potentially inaccurate assumptions (e.g. volume of a frustum of a cone). Modern GIS provid...
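To make the contrast concrete, a modern grid-based estimate simply sums depth times cell area over a bathymetry raster, whereas the older approximation treats the basin as the frustum of a cone. A minimal sketch (function names and the toy grid are illustrative):

```python
import numpy as np

def lake_volume_from_grid(depth_m, cell_size_m):
    """Volume as the sum of water-column prisms over a regular depth grid (m^3)."""
    depths = np.nan_to_num(np.asarray(depth_m, dtype=float), nan=0.0)
    depths[depths < 0] = 0.0            # cells outside the lake contribute nothing
    return float(depths.sum() * cell_size_m ** 2)

def frustum_volume(area_top_m2, area_bottom_m2, height_m):
    """Older approximation: the basin treated as the frustum of a cone."""
    return height_m / 3.0 * (area_top_m2 + area_bottom_m2
                             + (area_top_m2 * area_bottom_m2) ** 0.5)

depths = np.array([[0.0, 1.0, 0.0],
                   [1.0, 3.0, 1.0],
                   [0.0, 1.0, 0.0]])
print(lake_volume_from_grid(depths, 10.0))  # 700.0 m^3 from the grid
print(frustum_volume(900.0, 0.0, 3.0))      # 900.0 m^3 from the cone approximation
```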
Comparison of Two Methods for Detecting Alternative Splice Variants Using GeneChip® Exon Arrays
Fan, Wenhong; Stirewalt, Derek L.; Radich, Jerald P.; Zhao, Lueping
2011-01-01
The Affymetrix GeneChip Exon Array can be used to detect alternative splice variants. Microarray Detection of Alternative Splicing (MIDAS) and Partek® Genomics Suite (Partek® GS) are among the most popular analytical methods used to analyze exon array data. While both methods utilize statistical significance for testing, MIDAS and Partek® GS could produce somewhat different results due to different underlying assumptions. Comparing MIDAS and Partek® GS is quite difficult due to their substantially different mathematical formulations and assumptions regarding alternative splice variants. For meaningful comparison, we have used the previously published generalized probe model (GPM) which encompasses both MIDAS and Partek® GS under different assumptions. We analyzed a colon cancer exon array data set using MIDAS, Partek® GS and GPM. MIDAS and Partek® GS produced quite different sets of genes that are considered to have alternative splice variants. Further, we found that GPM produced results similar to MIDAS as well as to Partek® GS under their respective assumptions. Within the GPM, we show how discoveries relating to alternative variants can be quite different due to different assumptions. MIDAS focuses on relative changes in expression values across different exons within genes and tends to be robust but less efficient. Partek® GS, however, uses absolute expression values of individual exons within genes and tends to be more efficient but more sensitive to the presence of outliers. From our observations, we conclude that MIDAS and Partek® GS produce complementary results, and discoveries from both analyses should be considered. PMID:23675234
Efficient and Provable Secure Pairing-Free Security-Mediated Identity-Based Identification Schemes
Chin, Ji-Jian; Tan, Syh-Yuan; Heng, Swee-Huay; Phan, Raphael C.-W.
2014-01-01
Security-mediated cryptography was first introduced by Boneh et al. in 2001. The main motivation behind security-mediated cryptography was the capability to allow instant revocation of a user's secret key by necessitating the cooperation of a security mediator in any given transaction. Subsequently in 2003, Boneh et al. showed how to convert a RSA-based security-mediated encryption scheme from a traditional public key setting to an identity-based one, where certificates would no longer be required. Following these two pioneering papers, other cryptographic primitives that utilize a security-mediated approach began to surface. However, the security-mediated identity-based identification scheme (SM-IBI) was not introduced until Chin et al. in 2013 with a scheme built on bilinear pairings. In this paper, we improve on the efficiency results for SM-IBI schemes by proposing two schemes that are pairing-free and are based on well-studied complexity assumptions: the RSA and discrete logarithm assumptions. PMID:25207333
Efficient and provable secure pairing-free security-mediated identity-based identification schemes.
Chin, Ji-Jian; Tan, Syh-Yuan; Heng, Swee-Huay; Phan, Raphael C-W
2014-01-01
Security-mediated cryptography was first introduced by Boneh et al. in 2001. The main motivation behind security-mediated cryptography was the capability to allow instant revocation of a user's secret key by necessitating the cooperation of a security mediator in any given transaction. Subsequently in 2003, Boneh et al. showed how to convert a RSA-based security-mediated encryption scheme from a traditional public key setting to an identity-based one, where certificates would no longer be required. Following these two pioneering papers, other cryptographic primitives that utilize a security-mediated approach began to surface. However, the security-mediated identity-based identification scheme (SM-IBI) was not introduced until Chin et al. in 2013 with a scheme built on bilinear pairings. In this paper, we improve on the efficiency results for SM-IBI schemes by proposing two schemes that are pairing-free and are based on well-studied complexity assumptions: the RSA and discrete logarithm assumptions.
Martínez-Novo, Rodrigo; Lizcano, Emmánuel; Herrera-Racionero, Paloma; Miret-Pastor, Lluís
2018-02-01
Recent European policy highlights the need to promote local fishery and aquaculture by means of innovation and joint participation in fishery management as one of the keys to achieve the sustainability of our seas. However, the implicit assumptions held by the actors in the two main groups involved - innovators (scientists, businessmen and administration managers) and local fishermen - can complicate, perhaps even render impossible, mutual understanding and co-operation. A qualitative analysis of interviews with members of both groups in the Valencian Community (Spain) reveals those latent assumptions and their impact on the respective practices. The analysis shows that the innovation narrative in which one group is based and the inventions narrative used by the other one are rooted in two dramatically different, or even antagonistic, collective worldviews. Any environmental policy that implies these groups should take into account these strong discords.
Using Ecosystem Experiments to Improve Vegetation Models
Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; ...
2015-05-21
Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. By identifying and evaluating the main assumptions that caused differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.
Ensminger, Amanda L.; Shawkey, Matthew D.; Lucas, Jeffrey R.; Fernández-Juricic, Esteban
2017-01-01
Variation in male signal production has been extensively studied because of its relevance to animal communication and sexual selection. Although we now know much about the mechanisms that can lead to variation between males in the properties of their signals, there is still a general assumption that there is little variation in terms of how females process these male signals. Variation between females in signal processing may lead to variation between females in how they rank individual males, meaning that one single signal may not be universally attractive to all females. We tested this assumption in a group of female wild-caught brown-headed cowbirds (Molothrus ater), a species that uses a male visual signal (e.g. a wingspread display) to make its mate-choice decisions. We found that females varied in two key parameters of their visual sensory systems related to chromatic and achromatic vision: cone densities (both total and proportions) and cone oil droplet absorbance. Using visual chromatic and achromatic contrast modeling, we then found that this between-individual variation in visual physiology leads to significant between-individual differences in how females perceive chromatic and achromatic male signals. These differences may lead to variation in female preferences for male visual signals, which would provide a potential mechanism for explaining individual differences in mate-choice behavior. PMID:29247048
Ronald, Kelly L; Ensminger, Amanda L; Shawkey, Matthew D; Lucas, Jeffrey R; Fernández-Juricic, Esteban
2017-12-15
Variation in male signal production has been extensively studied because of its relevance to animal communication and sexual selection. Although we now know much about the mechanisms that can lead to variation between males in the properties of their signals, there is still a general assumption that there is little variation in terms of how females process these male signals. Variation between females in signal processing may lead to variation between females in how they rank individual males, meaning that one single signal may not be universally attractive to all females. We tested this assumption in a group of female wild-caught brown-headed cowbirds ( Molothrus ater ), a species that uses a male visual signal (e.g. a wingspread display) to make its mate-choice decisions. We found that females varied in two key parameters of their visual sensory systems related to chromatic and achromatic vision: cone densities (both total and proportions) and cone oil droplet absorbance. Using visual chromatic and achromatic contrast modeling, we then found that this between-individual variation in visual physiology leads to significant between-individual differences in how females perceive chromatic and achromatic male signals. These differences may lead to variation in female preferences for male visual signals, which would provide a potential mechanism for explaining individual differences in mate-choice behavior. © 2017. Published by The Company of Biologists Ltd.
Land-use change and greenhouse gas emissions from corn and cellulosic ethanol
2013-01-01
Background The greenhouse gas (GHG) emissions that may accompany land-use change (LUC) from increased biofuel feedstock production are a source of debate in the discussion of drawbacks and advantages of biofuels. Estimates of LUC GHG emissions focus mainly on corn ethanol and vary widely. Increasing the understanding of LUC GHG impacts associated with both corn and cellulosic ethanol will inform the on-going debate concerning their magnitudes and sources of variability. Results In our study, we estimate LUC GHG emissions for ethanol from four feedstocks: corn, corn stover, switchgrass, and miscanthus. We use new computable general equilibrium (CGE) results for worldwide LUC. U.S. domestic carbon emission factors are from state-level modelling with a surrogate CENTURY model and U.S. Forest Service data. This paper investigates the effect of several key domestic lands carbon content modelling parameters on LUC GHG emissions. International carbon emission factors are from the Woods Hole Research Center. LUC GHG emissions are calculated from these LUCs and carbon content data with Argonne National Laboratory’s Carbon Calculator for Land Use Change from Biofuels Production (CCLUB) model. Our results indicate that miscanthus and corn ethanol have the lowest (−10 g CO2e/MJ) and highest (7.6 g CO2e/MJ) LUC GHG emissions under base case modelling assumptions. The results for corn ethanol are lower than corresponding results from previous studies. Switchgrass ethanol base case results (2.8 g CO2e/MJ) were the most influenced by assumptions regarding converted forestlands and the fate of carbon in harvested wood products. They are greater than miscanthus LUC GHG emissions because switchgrass is a lower-yielding crop. Finally, LUC GHG emissions for corn stover are essentially negligible and insensitive to changes in model assumptions. Conclusions This research provides new insight into the influence of key carbon content modelling variables on LUC GHG emissions associated with the four bioethanol pathways we examined. Our results indicate that LUC GHG emissions may have a smaller contribution to the overall biofuel life cycle than previously thought. Additionally, they highlight the need for future advances in LUC GHG emissions estimation including improvements to CGE models and aboveground and belowground carbon content data. PMID:23575438
Why Are Experts Correlated? Decomposing Correlations between Judges
ERIC Educational Resources Information Center
Broomell, Stephen B.; Budescu, David V.
2009-01-01
We derive an analytic model of the inter-judge correlation as a function of five underlying parameters. Inter-cue correlation and the number of cues capture our assumptions about the environment, while differentiations between cues, the weights attached to the cues, and (un)reliability describe assumptions about the judges. We study the relative…
Contexts and Pragmatics Learning: Problems and Opportunities of the Study Abroad Research
ERIC Educational Resources Information Center
Taguchi, Naoko
2018-01-01
Despite different epistemologies and assumptions, all theories in second language (L2) acquisition emphasize the centrality of context in understanding L2 acquisition. Under the assumption that language emerges from use in context, the cognitivist approach focuses on distributions and properties of input to infer both learning objects and process…
Marking and Moderation in the UK: False Assumptions and Wasted Resources
ERIC Educational Resources Information Center
Bloxham, Sue
2009-01-01
This article challenges a number of assumptions underlying marking of student work in British universities. It argues that, in developing rigorous moderation procedures, we have created a huge burden for markers which adds little to accuracy and reliability but creates additional work for staff, constrains assessment choices and slows down…
29 CFR 4010.8 - Plan actuarial information.
Code of Federal Regulations, 2011 CFR
2011-07-01
.... Assumptions for decrements other than mortality and retirement (such as turnover or disability) used to... than 25 years of service. Employee A is an active participant who is age 40 and has completed 5 years... entitled under the assumption that A works until age 58. (2) Example 2. Employee B is also an active...
ERIC Educational Resources Information Center
Diaz, Juan Jose; Handa, Sudhanshu
2006-01-01
Not all policy questions can be addressed by social experiments. Nonexperimental evaluation methods provide an alternative to experimental designs but their results depend on untestable assumptions. This paper presents evidence on the reliability of propensity score matching (PSM), which estimates treatment effects under the assumption of…
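As a rough sketch of the estimator being evaluated, a minimal 1:1 propensity score matching routine is shown below; it assumes scikit-learn, uses matching with replacement, and is not the implementation used in the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_att(X, treated, outcome):
    """Average treatment effect on the treated via 1:1 nearest-neighbour
    matching (with replacement) on the estimated propensity score."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
    _, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
    matched_controls = c_idx[match.ravel()]
    return float(np.mean(outcome[t_idx] - outcome[matched_controls]))
```

Whether such an estimate reproduces an experimental benchmark is exactly the question the paper addresses.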
29 CFR 4044.53 - Mortality assumptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... assumptions. (a) General rule. Subject to paragraph (b) of this section (regarding certain death benefits...), and (g) of this section to value benefits under § 4044.52. (b) Certain death benefits. If an annuity for one person is in pay status on the valuation date, and if the payment of a death benefit after the...
29 CFR 4044.53 - Mortality assumptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... assumptions. (a) General rule. Subject to paragraph (b) of this section (regarding certain death benefits...), and (g) of this section to value benefits under § 4044.52. (b) Certain death benefits. If an annuity for one person is in pay status on the valuation date, and if the payment of a death benefit after the...
29 CFR 4044.53 - Mortality assumptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... assumptions. (a) General rule. Subject to paragraph (b) of this section (regarding certain death benefits...), and (g) of this section to value benefits under § 4044.52. (b) Certain death benefits. If an annuity for one person is in pay status on the valuation date, and if the payment of a death benefit after the...
29 CFR 4044.53 - Mortality assumptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... assumptions. (a) General rule. Subject to paragraph (b) of this section (regarding certain death benefits...), and (g) of this section to value benefits under § 4044.52. (b) Certain death benefits. If an annuity for one person is in pay status on the valuation date, and if the payment of a death benefit after the...
Under What Assumptions Do Site-by-Treatment Instruments Identify Average Causal Effects?
ERIC Educational Resources Information Center
Reardon, Sean F.; Raudenbush, Stephen W.
2013-01-01
The increasing availability of data from multi-site randomized trials provides a potential opportunity to use instrumental variables methods to study the effects of multiple hypothesized mediators of the effect of a treatment. We derive nine assumptions needed to identify the effects of multiple mediators when using site-by-treatment interactions…
An identifiable model for informative censoring
Link, W.A.; Wegman, E.J.; Gantz, D.T.; Miller, J.J.
1988-01-01
The usual model for censored survival analysis requires the assumption that censoring of observations arises only due to causes unrelated to the lifetime under consideration. It is easy to envision situations in which this assumption is unwarranted, and in which use of the Kaplan-Meier estimator and associated techniques will lead to unreliable analyses.
Biological control agents elevate hantavirus by subsidizing deer mouse populations
Dean E. Pearson; Ragan M. Callaway
2006-01-01
Biological control of exotic invasive plants using exotic insects is practiced under the assumption that biological control agents are safe if they do not directly attack non-target species. We tested this assumption by evaluating the potential for two host-specific biological control agents (Urophora spp.), widely established in North America for spotted...
Assumptions Underlying Curriculum Decisions in Australia: An American Perspective.
ERIC Educational Resources Information Center
Willis, George
An analysis of the cultural and historical context in which curriculum decisions are made in Australia and a comparison with educational assumptions in the United States is the purpose of this paper. Methodology is based on personal teaching experience and observation in Australia. Seven factors are identified upon which curricular decisions in…
Parabolic Systems with p, q-Growth: A Variational Approach
NASA Astrophysics Data System (ADS)
Bögelein, Verena; Duzaar, Frank; Marcellini, Paolo
2013-10-01
We consider the evolution problem associated with a convex integrand $f \colon \mathbb{R}^{Nn} \to [0,\infty)$ satisfying a non-standard $p,q$-growth assumption. To establish the existence of solutions we introduce the concept of variational solutions. In contrast to weak solutions, that is, mappings $u \colon \Omega_T \to \mathbb{R}^N$ which solve $\partial_t u - \operatorname{div} Df(Du) = 0$ weakly in $\Omega_T$, variational solutions exist under a much weaker assumption on the gap $q - p$. Here, we prove the existence of variational solutions provided the integrand $f$ is strictly convex and $\frac{2n}{n+2} < p \le q < p+1$. These variational solutions turn out to be unique under certain mild additional assumptions on the data. Moreover, if the gap satisfies the natural stronger assumption $2 \le p \le q < p + \min\{1, \frac{4}{n}\}$, we show that variational solutions are actually weak solutions. This means that solutions $u$ admit the necessary higher integrability of the spatial derivative $Du$ to satisfy the parabolic system in the weak sense, that is, we prove that $u \in L^q_{\mathrm{loc}}\big(0,T; W^{1,q}_{\mathrm{loc}}(\Omega,\mathbb{R}^N)\big)$.
A general method for handling missing binary outcome data in randomized controlled trials
Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen
2014-01-01
Aims: The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. Design: We propose a sensitivity analysis where standard analyses, which could include ‘missing = smoking’ and ‘last observation carried forward’, are embedded in a wider class of models. Setting: We apply our general method to data from two smoking cessation trials. Participants: A total of 489 and 1758 participants from two smoking cessation trials. Measurements: The abstinence outcomes were obtained using telephone interviews. Findings: The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. Conclusions: A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions. PMID:25171441
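A minimal sketch of the kind of sensitivity analysis described, in which the 'missing = smoking' rule sits at one extreme of a sensitivity parameter; the parameterization and numbers are illustrative, not the authors' model:

```python
import numpy as np

def arm_abstinence_rate(abstinent_observed, n_observed, n_missing, delta):
    """Estimated abstinence rate in one trial arm when delta is the assumed
    log odds ratio of abstinence among participants with missing outcomes
    relative to those observed.  delta -> -inf recovers 'missing = smoking';
    delta = 0 corresponds to missingness at random."""
    p_obs = abstinent_observed / n_observed
    logit_missing = np.log(p_obs / (1 - p_obs)) + delta
    p_missing = 1.0 / (1.0 + np.exp(-logit_missing))
    return (abstinent_observed + n_missing * p_missing) / (n_observed + n_missing)

# Hypothetical arm: 60/200 abstinent among observed outcomes, 50 missing.
for delta in (-np.inf, -2.0, 0.0):
    print(delta, round(arm_abstinence_rate(60, 200, 50, delta), 3))
```

Comparing the treatment contrast across a grid of delta values per arm shows how strongly the conclusions depend on the missing-data assumption.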
Framework for assessing key variable dependencies in loose-abrasive grinding and polishing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.S.; Aikens, D.M.; Brown, N.J.
1995-12-01
This memo describes a framework for identifying all key variables that determine the figuring performance of loose-abrasive lapping and polishing machines. This framework is intended as a tool for prioritizing R&D issues, assessing the completeness of process models and experimental data, and for providing a mechanism to identify any assumptions in analytical models or experimental procedures. Future plans for preparing analytical models or performing experiments can refer to this framework in establishing the context of the work.
Peak provoked craving: an alternative to smoking cue-reactivity.
Sayette, Michael A; Tiffany, Stephen T
2013-06-01
Smoking cue-exposure research has provided a powerful tool for examining cravings in the laboratory. A key attraction of this method is that tightly controlled experimental procedures can model craving experiences that are presumed to relate to addiction. Despite its appeal, key assumptions underlying the clinical relevance of smoking cue-reactivity studies have been questioned recently. For both conceptual and methodological reasons it may be difficult to tease apart cue-based and abstinence-based cravings. Moreover, conventional cue-reactivity procedures typically generate levels of craving with only minimal clinical relevance. We argue here that sometimes it is unfeasible - and in some instances conceptually misguided - to disentangle abstinence-based and cued components of cigarette cravings. In light of the challenges associated with cue-reactivity research, we offer an alternative approach to smoking cue-exposure experimental research focusing on peak provoked craving (PPC) states. The PPC approach uses nicotine-deprived smokers and focuses on urges during smoking cue-exposure without subtracting out urge ratings during control cue or baseline assessments. This design relies on two factors found in many cue-exposure studies - nicotine deprivation and exposure to explicit smoking cues - which, when combined, can create powerful craving states. The PPC approach retains key aspects of the cue-exposure method, and in many circumstances may be a viable design for studies examining robust laboratory-induced cravings. © 2012 The Authors, Addiction © 2012 Society for the Study of Addiction.
Causal analysis of ordinal treatments and binary outcomes under truncation by death.
Wang, Linbo; Richardson, Thomas S; Zhou, Xiao-Hua
2017-06-01
It is common that in multi-arm randomized trials, the outcome of interest is "truncated by death," meaning that it is only observed or well-defined conditioning on an intermediate outcome. In this case, in addition to pairwise contrasts, the joint inference for all treatment arms is also of interest. Under a monotonicity assumption we present methods for both pairwise and joint causal analyses of ordinal treatments and binary outcomes in presence of truncation by death. We illustrate via examples the appropriateness of our assumptions in different scientific contexts.
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
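The generic Wald test skeleton referred to is sketched below; the paper's contribution is the especially simple per-update likelihood ratio for conjunction data, which is not reproduced here:

```python
import numpy as np

def wald_sprt(likelihood_ratios, alpha=0.01, beta=0.01):
    """Generic Wald SPRT: accumulate the likelihood ratio of H1 versus H0 and
    stop once it crosses either boundary.  alpha and beta are the allowed
    false-alarm and missed-detection probabilities."""
    lower, upper = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    log_lambda = 0.0
    for lr in likelihood_ratios:          # one likelihood ratio per new observation
        log_lambda += np.log(lr)
        if log_lambda <= lower:
            return "accept H0", log_lambda
        if log_lambda >= upper:
            return "accept H1", log_lambda
    return "continue sampling", log_lambda
```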
Robustness of location estimators under t-distributions: a literature review
NASA Astrophysics Data System (ADS)
Sumarni, C.; Sadik, K.; Notodiputro, K. A.; Sartono, B.
2017-03-01
The assumption of normality is commonly used in estimation of parameters in statistical modelling, but this assumption is very sensitive to outliers. The t-distribution is more robust than the normal distribution since t-distributions have heavier tails. The robustness measures of location estimators under t-distributions are reviewed and discussed in this paper. For the purpose of illustration we use onion yield data, which include outliers, as a case study and show that the t model produces a better fit than the normal model.
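A minimal illustration of the point being reviewed, using a generic maximum-likelihood fit of a t location-scale model (scipy); the data here are simulated, not the onion yield data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(loc=10.0, scale=1.0, size=95),
                       np.full(5, 40.0)])          # five gross outliers

mean_est = data.mean()                             # normal-theory location estimate
df_est, loc_t, scale_t = stats.t.fit(data)         # location under a t model
print(round(mean_est, 2), round(loc_t, 2))         # the mean is pulled toward the
                                                   # outliers; the t location is not
```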
Validity in work-based assessment: expanding our horizons.
Govaerts, Marjan; van der Vleuten, Cees P M
2013-12-01
Although work-based assessments (WBA) may come closest to assessing habitual performance, their use for summative purposes is not undisputed. Most criticism of WBA stems from approaches to validity consistent with the quantitative psychometric framework. However, there is increasing research evidence that indicates that the assumptions underlying the predictive, deterministic framework of psychometrics may no longer hold. In this discussion paper we argue that meaningfulness and appropriateness of current validity evidence can be called into question and that we need alternative strategies to assessment and validity inquiry that build on current theories of learning and performance in complex and dynamic workplace settings. Drawing from research in various professional fields we outline key issues within the mechanisms of learning, competence and performance in the context of complex social environments and illustrate their relevance to WBA. In reviewing recent socio-cultural learning theory and research on performance and performance interpretations in work settings, we demonstrate that learning, competence (as inferred from performance) as well as performance interpretations are to be seen as inherently contextualised, and can only be understood 'in situ'. Assessment in the context of work settings may, therefore, be more usefully viewed as a socially situated interpretive act. We propose constructivist-interpretivist approaches towards WBA in order to capture and understand contextualised learning and performance in work settings. Theoretical assumptions underlying interpretivist assessment approaches call for a validity theory that provides the theoretical framework and conceptual tools to guide the validation process in the qualitative assessment inquiry. Basic principles of rigour specific to qualitative research have been established, and they can and should be used to determine validity in interpretivist assessment approaches. If used properly, these strategies generate trustworthy evidence that is needed to develop the validity argument in WBA, allowing for in-depth and meaningful information about professional competence. © 2013 John Wiley & Sons Ltd.
Application of Key Events and Analysis to Chemical Carcinogens and Noncarcinogens
The existence of thresholds for toxicants is a matter of debate in chemical risk assessment and regulation. Current risk assessment methods are based on the assumption that, in the absence of sufficient data, carcinogenesis does not have a threshold, while non-carcinogenic endpoi...
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
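Under binormality the closed form is $c = \Phi\big((\mu_1-\mu_0)/\sqrt{\sigma_1^{2}+\sigma_0^{2}}\big)$, which can be checked against an empirical estimate; the snippet below is a sketch of that comparison, not the authors' simulation code:

```python
import numpy as np
from scipy.stats import norm

def c_statistic_binormal(mu1, sigma1, mu0, sigma0):
    """c-statistic when the marker is N(mu1, sigma1^2) in those with the
    condition and N(mu0, sigma0^2) in those without: P(X1 > X0)."""
    return norm.cdf((mu1 - mu0) / np.sqrt(sigma1 ** 2 + sigma0 ** 2))

rng = np.random.default_rng(0)
x1 = rng.normal(1.0, 1.0, 2000)     # marker values, with the condition
x0 = rng.normal(0.0, 1.0, 2000)     # marker values, without the condition
empirical = (x1[:, None] > x0[None, :]).mean()   # proportion of concordant pairs
print(c_statistic_binormal(1.0, 1.0, 0.0, 1.0), round(empirical, 3))
```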
Rigidity of quantum steering and one-sided device-independent verifiable quantum computation
NASA Astrophysics Data System (ADS)
Gheorghiu, Alexandru; Wallden, Petros; Kashefi, Elham
2017-02-01
The relationship between correlations and entanglement has played a major role in understanding quantum theory since the work of Einstein et al (1935 Phys. Rev. 47 777-80). Tsirelson proved that Bell states, shared among two parties, when measured suitably, achieve the maximum non-local correlations allowed by quantum mechanics (Cirel’son 1980 Lett. Math. Phys. 4 93-100). Conversely, Reichardt et al showed that observing the maximal correlation value over a sequence of repeated measurements, implies that the underlying quantum state is close to a tensor product of maximally entangled states and, moreover, that it is measured according to an ideal strategy (Reichardt et al 2013 Nature 496 456-60). However, this strong rigidity result comes at a high price, requiring a large number of entangled pairs to be tested. In this paper, we present a significant improvement in terms of the overhead by instead considering quantum steering where the device of the one side is trusted. We first demonstrate a robust one-sided device-independent version of self-testing, which characterises the shared state and measurement operators of two parties up to a certain bound. We show that this bound is optimal up to constant factors and we generalise the results for the most general attacks. This leads us to a rigidity theorem for maximal steering correlations. As a key application we give a one-sided device-independent protocol for verifiable delegated quantum computation, and compare it to other existing protocols, to highlight the cost of trust assumptions. Finally, we show that under reasonable assumptions, the states shared in order to run a certain type of verification protocol must be unitarily equivalent to perfect Bell states.
Bemrah, Nawel; Leblanc, Jean-Charles; Volatier, Jean-Luc
2008-01-01
The results of French intake estimates for 13 food additives prioritized by the methods proposed in the 2001 Report from the European Commission on Dietary Food Additive Intake in the European Union are reported. These 13 additives were selected using the first and second tiers of the three-tier approach. The first tier was based on theoretical food consumption data and the maximum permitted level of additives. The second tier used real individual food consumption data and the maximum permitted level of additives for the substances which exceeded the acceptable daily intakes (ADI) in the first tier. In the third tier reported in this study, intake estimates were calculated for the 13 additives (colours, preservatives, antioxidants, stabilizers, emulsifiers and sweeteners) according to two modelling assumptions corresponding to two different food habit scenarios (assumption 1: consumers consume foods that may or may not contain food additives, and assumption 2: consumers always consume foods that contain additives) when possible. In this approach, real individual food consumption data and the occurrence/use-level of food additives reported by the food industry were used. Overall, the results of the intake estimates are reassuring for the majority of additives studied since the risk of exceeding the ADI was low, except for nitrites, sulfites and annatto, whose ADIs were exceeded by either children or adult consumers or by both populations under one and/or two modelling assumptions. Under the first assumption, the ADI is exceeded for high consumers among adults for nitrites and sulfites (155 and 118.4%, respectively) and among children for nitrites (275%). Under the second assumption, the average nitrites dietary exposure in children exceeds the ADI (146.7%). For high consumers, adults exceed the nitrite and sulfite ADIs (223 and 156.4%, respectively) and children exceed the nitrite, annatto and sulfite ADIs (416.7, 124.6 and 130.6%, respectively).
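In generic form, the tier-3 estimate for individual $j$ compares additive intake with the ADI (the symbols are illustrative of the approach, not the report's notation):

\[
\text{Intake}_{j} \;=\; \frac{\sum_{f} C_{j,f}\, L_{f}}{bw_{j}},
\qquad
\%\text{ADI}_{j} \;=\; 100 \times \frac{\text{Intake}_{j}}{\text{ADI}},
\]

where $C_{j,f}$ is the individual's consumption of food $f$, $L_{f}$ the reported occurrence/use level of the additive in that food (the maximum permitted level in the lower tiers), and $bw_{j}$ body weight. The two modelling assumptions differ in which foods enter the sum: under assumption 2 every eligible food is taken to contain the additive, whereas under assumption 1 it may or may not.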
Increased core body temperature in astronauts during long-duration space missions.
Stahn, Alexander C; Werner, Andreas; Opatz, Oliver; Maggioni, Martina A; Steinach, Mathias; von Ahlefeld, Victoria Weller; Moore, Alan; Crucian, Brian E; Smith, Scott M; Zwart, Sara R; Schlabs, Thomas; Mendt, Stefan; Trippel, Tobias; Koralewski, Eberhard; Koch, Jochim; Choukèr, Alexander; Reitz, Günther; Shang, Peng; Röcker, Lothar; Kirsch, Karl A; Gunga, Hanns-Christian
2017-11-23
Humans' core body temperature (CBT) is strictly controlled within a narrow range. Various studies dealt with the impact of physical activity, clothing, and environmental factors on CBT regulation under terrestrial conditions. However, the effects of weightlessness on human thermoregulation are not well understood. Specifically, studies, investigating the effects of long-duration spaceflight on CBT at rest and during exercise are clearly lacking. We here show that during exercise CBT rises higher and faster in space than on Earth. Moreover, we observed for the first time a sustained increased astronauts' CBT also under resting conditions. This increase of about 1 °C developed gradually over 2.5 months and was associated with augmented concentrations of interleukin-1 receptor antagonist, a key anti-inflammatory protein. Since even minor increases in CBT can impair physical and cognitive performance, both findings have a considerable impact on astronauts' health and well-being during future long-term spaceflights. Moreover, our findings also pinpoint crucial physiological challenges for spacefaring civilizations, and raise questions about the assumption of a thermoregulatory set point in humans, and our evolutionary ability to adapt to climate changes on Earth.
A process economic assessment of hydrocarbon biofuels production using chemoautotrophic organisms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, NE; Myers, JA; Tuerk, AL
Economic analysis of an ARPA-e Electrofuels (http://arpa-e.energy.gov/?q=arpa-e-programs/electrofuels) process is presented, utilizing metabolically engineered Rhodobacter capsulatus or Ralstonia eutropha to produce the C30+ hydrocarbon fuel, botryococcene, from hydrogen, carbon dioxide, and oxygen. The analysis is based on an Aspen plus (R) bioreactor model taking into account experimentally determined Rba. capsulatus and Rls. eutropha growth and maintenance requirements, reactor residence time, correlations for gas-liquid mass-transfer coefficient, gas composition, and specific cellular fuel productivity. Based on reactor simulation results encompassing technically relevant parameter ranges, the capital and operating costs of the process were estimated for a 5000 bbl-fuel/day plant and used to predict fuel cost. Under the assumptions used in this analysis and crude oil prices, the Levelized Cost of Electricity (LCOE) required for economic feasibility must be less than 2¢/kWh. While not feasible under current market prices and costs, this work identifies key variables impacting process cost and discusses potential alternative paths toward economic feasibility. (C) 2014 Elsevier Ltd. All rights reserved.
Finite Element-Based Mechanical Assessment of Bone Quality on the Basis of In Vivo Images.
Pahr, Dieter H; Zysset, Philippe K
2016-12-01
Beyond bone mineral density (BMD), bone quality designates the mechanical integrity of bone tissue. In vivo images based on X-ray attenuation, such as CT reconstructions, provide size, shape, and local BMD distribution and may be exploited as input for finite element analysis (FEA) to assess bone fragility. Further key input parameters of FEA are the material properties of bone tissue. This review discusses the main determinants of bone mechanical properties and emphasizes the added value of finite element analysis, as well as the important assumptions underlying it. Bone tissue is a sophisticated, multiscale composite material that undergoes remodeling but exhibits a rather narrow band of tissue mineralization. Mechanically, bone tissue behaves elastically under physiologic loads and yields by cracking beyond critical strain levels. Through adequate cell-orchestrated modeling, trabecular bone tunes its mechanical properties by volume fraction and fabric. With proper calibration, these mechanical properties may be incorporated in quantitative CT-based finite element analysis that has been validated extensively with ex vivo experiments and has been applied increasingly in clinical trials to assess treatment efficacy against osteoporosis.
Breakfast Skipping, Extreme Commutes, and the Sex Composition at Birth.
Mazumder, Bhashkar; Seeskin, Zachary
2015-01-01
A growing body of literature has shown that environmental exposures in the period around conception can affect the sex ratio at birth through selective attrition that favors the survival of female conceptuses. Glucose availability is considered a key indicator of the fetal environment, and its absence as a result of meal skipping may inhibit male survival. We hypothesize that breakfast skipping during pregnancy may lead to a reduction in the fraction of male births. Using time use data from the United States we show that women with commute times of 90 minutes or longer are 20 percentage points more likely to skip breakfast. Using U.S. census data we show that women with commute times of 90 minutes or longer are 1.2 percentage points less likely to have a male child under the age of 2. Under some assumptions, this implies that routinely skipping breakfast around the time of conception leads to a 6 percentage point reduction in the probability of a male child. Skipping breakfast during pregnancy may therefore constitute a poor environment for fetal health more generally.
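The implied scaling in the abstract is a Wald-type ratio of the two commute associations, valid only under the stated assumptions (for example, that a long commute affects the sex ratio only through breakfast skipping):

\[
\underbrace{\Delta \Pr(\text{male})}_{\text{per skipped breakfast}}
\;\approx\;
\frac{\overbrace{-0.012}^{\text{commute association with } \Pr(\text{male})}}{\underbrace{0.20}_{\text{commute association with skipping}}}
\;=\; -0.06,
\]

i.e. roughly the 6 percentage point reduction reported.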
Random matrix models, double-time Painlevé equations, and wireless relaying
NASA Astrophysics Data System (ADS)
Chen, Yang; Haq, Nazmus S.; McKay, Matthew R.
2013-06-01
This paper gives an in-depth study of a multiple-antenna wireless communication scenario in which a weak signal received at an intermediate relay station is amplified and then forwarded to the final destination. The key quantity determining system performance is the statistical properties of the signal-to-noise ratio (SNR) γ at the destination. Under certain assumptions on the encoding structure, recent work has characterized the SNR distribution through its moment generating function, in terms of a certain Hankel determinant generated via a deformed Laguerre weight. Here, we employ two different methods to describe the Hankel determinant. First, we make use of ladder operators satisfied by orthogonal polynomials to give an exact characterization in terms of a "double-time" Painlevé differential equation, which reduces to Painlevé V under certain limits. Second, we employ Dyson's Coulomb fluid method to derive a closed form approximation for the Hankel determinant. The two characterizations are used to derive closed-form expressions for the cumulants of γ, and to compute performance quantities of engineering interest.
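For orientation, a Hankel determinant generated by a weight w(x) on the positive half-line has the general form below; in this setting w is a Laguerre weight x^α e^{-x} deformed by factors carrying the two "time" parameters of the SNR problem, and the precise deformation studied by the authors is not reproduced here.

```latex
D_n[w] \;=\; \det\bigl[\mu_{i+j}\bigr]_{i,j=0}^{n-1},
\qquad
\mu_k \;=\; \int_{0}^{\infty} x^{k}\, w(x)\,\mathrm{d}x .
```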
Principles for circadian orchestration of metabolic pathways.
Thurley, Kevin; Herbst, Christopher; Wesener, Felix; Koller, Barbara; Wallach, Thomas; Maier, Bert; Kramer, Achim; Westermark, Pål O
2017-02-14
Circadian rhythms govern multiple aspects of animal metabolism. Transcriptome-, proteome- and metabolome-wide measurements have revealed widespread circadian rhythms in metabolism governed by a cellular genetic oscillator, the circadian core clock. However, it remains unclear if and under which conditions transcriptional rhythms cause rhythms in particular metabolites and metabolic fluxes. Here, we analyzed the circadian orchestration of metabolic pathways by direct measurement of enzyme activities, analysis of transcriptome data, and developing a theoretical method called circadian response analysis. Contrary to a common assumption, we found that pronounced rhythms in metabolic pathways are often favored by separation rather than alignment in the times of peak activity of key enzymes. This property holds true for a set of metabolic pathway motifs (e.g., linear chains and branching points) and also under the conditions of fast kinetics typical for metabolic reactions. By circadian response analysis of pathway motifs, we determined exact timing separation constraints on rhythmic enzyme activities that allow for substantial rhythms in pathway flux and metabolite concentrations. Direct measurements of circadian enzyme activities in mouse skeletal muscle confirmed that such timing separation occurs in vivo.
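The abstract does not reproduce the circadian response analysis itself, but the kind of question it addresses can be illustrated with a minimal Python sketch; the toy kinetics, rate constants, and amplitudes below are assumptions for illustration, not the authors' model. It asks how the timing separation between two rhythmic enzyme activities changes the amplitude of the flux through a two-step linear chain.

```python
import numpy as np

def flux_amplitude(phase_shift_h, days=10, dt=0.01, period=24.0):
    """Toy two-step chain S -> M -> P with sinusoidal enzyme activities k1(t), k2(t).
    Returns the relative peak-to-trough amplitude of the output flux k2(t)*M(t)."""
    t = np.arange(0.0, days * period, dt)
    k1 = 1.0 + 0.5 * np.sin(2 * np.pi * t / period)                    # enzyme 1 activity
    k2 = 1.0 + 0.5 * np.sin(2 * np.pi * (t - phase_shift_h) / period)  # enzyme 2 activity
    m = np.zeros_like(t)                        # intermediate metabolite concentration
    for i in range(1, len(t)):                  # forward Euler integration
        m[i] = m[i - 1] + dt * (k1[i - 1] - k2[i - 1] * m[i - 1])
    flux = k2 * m
    f = flux[t > (days - 1) * period]           # keep only the last day (discard transient)
    return (f.max() - f.min()) / f.mean()

for shift in (0, 6, 12):                        # hours between the two activity peaks
    print(f"peak separation {shift:2d} h -> relative flux amplitude {flux_amplitude(shift):.3f}")
```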
NASA Astrophysics Data System (ADS)
Milan, David; Heritage, George; Entwistle, Neil; Tooth, Stephen
2018-04-01
Some mixed bedrock-alluvial dryland rivers are known to undergo cycles of alluvial building during low flow periods, punctuated by stripping events during rare high magnitude flows. We focus on the Olifants River, Kruger National Park, South Africa, and present 2-D morphodynamic simulations of hydraulics and sediment deposition patterns over an exposed bedrock anastomosed pavement. We examine the assumptions underlying a previous conceptual model, namely that sedimentation occurs preferentially on bedrock highs. Our modelling results and local field observations in fact show that sediment thicknesses are greater over bedrock lows, suggesting these are the key loci for deposition, barform initiation and island building. During peak flows, velocities in the topographic lows tend to be lower than in intermediate topographic areas. It is likely that intermediate topographic areas supply sediment to the topographic lows at this flow stage, which is then deposited in the lows on the falling limb of the hydrograph as velocities reduce. Subsequent vegetation establishment on deposits in the topographic lows is likely to play a key role in additional sedimentation and vegetation succession, both through increasing the cohesive strength of alluvial units and by capturing new sediments and propagules.
Rosinska, Magdalena; Pantazis, Nikos; Janiec, Janusz; Pharris, Anastasia; Amato-Gauci, Andrew J; Quinten, Chantal; Ecdc Hiv/Aids Surveillance Network
2018-06-01
Accurate case-based surveillance data remain the key data source for estimating HIV burden and monitoring prevention efforts in Europe. We carried out a literature review and exploratory analysis of surveillance data regarding two crucial issues affecting European surveillance for HIV: missing data and reporting delay. Initial screening showed substantial variability of these data issues, both in time and across countries. In terms of missing data, the CD4+ cell count is the most problematic variable because of the high proportion of missing values. In 20 of 31 countries of the European Union/European Economic Area (EU/EEA), CD4+ counts are systematically missing for all or some years. One of the key challenges related to reporting delays is that countries undertake specific one-off actions in an effort to capture previously unreported cases, and that these cases are subsequently reported with excessive delays. Slightly different underlying assumptions and effectively different models may be required for individual countries to adjust for missing data and reporting delays. However, using a similar methodology is recommended to foster harmonisation and to improve the accuracy and usability of HIV surveillance data at national and EU/EEA levels.
There is no clam with coats in the calm coast: delimiting the transposed-letter priming effect.
Duñabeitia, Jon Andoni; Perea, Manuel; Carreiras, Manuel
2009-10-01
In this article, we explore the transposed-letter priming effect (e.g., jugde-JUDGE vs. jupte-JUDGE), a phenomenon that taps into some key issues on how the brain encodes letter positions and has favoured the creation of new input coding schemes. However, almost all the empirical evidence from transposed-letter priming experiments comes from nonword primes (e.g., jugde-JUDGE). Indeed, previous evidence when using word-word pairs (e.g., causal-CASUAL) is not conclusive. Here, we conducted five masked priming lexical decision experiments that examined the relationship between pairs of real words that differed only in the transposition of two of their letters (e.g., CASUAL vs. CAUSAL). Results showed that, unlike transposed-letter nonwords, transposed-letter words do not seem to affect the identification time of their transposed-letter mates. Thus, prime lexicality is a key factor that modulates the magnitude of transposed-letter priming effects. These results are interpreted under the assumption of the existence of lateral inhibition processes occurring within the lexical level, which cancel out any orthographic facilitation due to the overlapping letters. We examine the implications of these findings for models of visual-word recognition.
Boerebach, Benjamin C. M.; Lombarts, Kiki M. J. M. H.; Scherpbier, Albert J. J.; Arah, Onyebuchi A.
2013-01-01
Background In fledgling areas of research, evidence supporting causal assumptions is often scarce due to the small number of empirical studies conducted. In many studies it remains unclear what impact explicit and implicit causal assumptions have on the research findings; only the primary assumptions of the researchers are often presented. This is particularly true for research on the effect of faculty’s teaching performance on their role modeling. Therefore, there is a need for robust frameworks and methods for transparent formal presentation of the underlying causal assumptions used in assessing the causal effects of teaching performance on role modeling. This study explores the effects of different (plausible) causal assumptions on research outcomes. Methods This study revisits a previously published study about the influence of faculty’s teaching performance on their role modeling (as teacher-supervisor, physician and person). We drew eight directed acyclic graphs (DAGs) to visually represent different plausible causal relationships between the variables under study. These DAGs were subsequently translated into corresponding statistical models, and regression analyses were performed to estimate the associations between teaching performance and role modeling. Results The different causal models were compatible with major differences in the magnitude of the relationship between faculty’s teaching performance and their role modeling. Odds ratios for the associations between teaching performance and the three role model types ranged from 31.1 to 73.6 for the teacher-supervisor role, from 3.7 to 15.5 for the physician role, and from 2.8 to 13.8 for the person role. Conclusions Different sets of assumptions about causal relationships in role modeling research can be visually depicted using DAGs, which are then used to guide both statistical analysis and interpretation of results. Since study conclusions can be sensitive to different causal assumptions, results should be interpreted in the light of causal assumptions made in each study. PMID:23936020
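The workflow described here (draw a DAG, read off the corresponding adjustment set, fit the matching regression) can be illustrated with a small synthetic Python example. The variable names and data-generating process below are invented for illustration, and linear regression is used for simplicity instead of the logistic models reported in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data-generating process: C confounds both X (teaching performance)
# and Y (role-model rating); M mediates part of X's effect on Y.
c = rng.normal(size=n)
x = 0.8 * c + rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.6 * x + 0.4 * m + 0.7 * c + rng.normal(size=n)

def coef_on_x(covariates):
    """Least-squares coefficient of Y on X, adjusting for the given covariates."""
    design = np.column_stack([np.ones(n), x] + covariates)
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

# Different DAGs imply different adjustment sets, and the estimate changes accordingly:
print("no adjustment      :", round(coef_on_x([]), 2))        # confounded estimate
print("adjust for C       :", round(coef_on_x([c]), 2))       # total effect of X (~0.8 here)
print("adjust for C and M :", round(coef_on_x([c, m]), 2))    # direct effect only (~0.6 here)
```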
Shriver, K A
1986-01-01
Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that rates of decline in economic depreciation may be reasonably stable over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.
Exploring super-Gaussianity toward robust information-theoretical time delay estimation.
Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee
2013-03-01
Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method assuming gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competitive background noise. This paper investigates the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of Gaussian distributed source has been replaced by that of generalized Gaussian distribution that allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
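For reference, under one standard parametrization of the zero-mean univariate generalized Gaussian density, the differential entropy has the closed form below; this is a textbook result and not necessarily the exact expression derived in the paper.

```latex
p(x) \;=\; \frac{\beta}{2\alpha\,\Gamma(1/\beta)}\,
\exp\!\left[-\left(\frac{|x|}{\alpha}\right)^{\!\beta}\right],
\qquad
h(X) \;=\; \frac{1}{\beta} \;+\; \ln\!\left[\frac{2\alpha\,\Gamma(1/\beta)}{\beta}\right].
```

Here β = 2 recovers the Gaussian and β = 1 the Laplacian, while smaller β values give the heavier-tailed, more speech-like densities considered in the paper.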
Information processing in dendrites I. Input pattern generalisation.
Gurney, K N
2001-10-01
In this paper and its companion, we address the question as to whether there are any general principles underlying information processing in the dendritic trees of biological neurons. In order to address this question, we make two assumptions. First, the key architectural feature of dendrites responsible for many of their information processing abilities is the existence of independent sub-units performing local non-linear processing. Second, any general functional principles operate at a level of abstraction in which neurons are modelled by Boolean functions. To accommodate these assumptions, we therefore define a Boolean model neuron-the multi-cube unit (MCU)-which instantiates the notion of the discrete functional sub-unit. We then use this model unit to explore two aspects of neural functionality: generalisation (in this paper) and processing complexity (in its companion). Generalisation is dealt with from a geometric viewpoint and is quantified using a new metric-the set of order parameters. These parameters are computed for threshold logic units (TLUs), a class of random Boolean functions, and MCUs. Our interpretation of the order parameters is consistent with our knowledge of generalisation in TLUs and with the lack of generalisation in randomly chosen functions. Crucially, the order parameters for MCUs imply that these functions possess a range of generalisation behaviour. We argue that this supports the general thesis that dendrites facilitate input pattern generalisation despite any local non-linear processing within functionally isolated sub-units.
Henzlova, Daniela; Menlove, Howard Olsen; Croft, Stephen; ...
2015-06-15
In the field of nuclear safeguards, passive neutron multiplicity counting (PNMC) is a method typically employed in non-destructive assay (NDA) of special nuclear material (SNM) for nonproliferation, verification and accountability purposes. PNMC is generally performed using a well-type thermal neutron counter and relies on the detection of correlated pairs or higher order multiplets of neutrons emitted by an assayed item. To assay SNM, a set of parameters for a given well-counter is required to link the measured multiplicity rates to the assayed item properties. Detection efficiency, die-away time, gate utilization factors (tightly connected to die-away time) as well as optimum gate width setting are among the key parameters. These parameters along with the underlying model assumptions directly affect the accuracy of the SNM assay. In this paper we examine the role of gate utilization factors and the single exponential die-away time assumption and their impact on the measurements for a range of plutonium materials. In addition, we examine the importance of item-optimized coincidence gate width setting as opposed to using a universal gate width value. Finally, the traditional PNMC based on multiplicity shift register electronics is extended to Feynman-type analysis and application of this approach to Pu mass assay is demonstrated.
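For context, under the single-exponential die-away assumption examined in the paper, the doubles gate utilization factor takes the familiar textbook form below (P is the predelay, G the coincidence gate width, τ the die-away time); the paper studies what happens when the detector response is not well described by a single τ.

```latex
f_d \;=\; e^{-P/\tau}\left(1 - e^{-G/\tau}\right).
```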
Gomez, Gabriela B.; Borquez, Annick; Case, Kelsey K.; Wheelock, Ana; Vassall, Anna; Hankins, Catherine
2013-01-01
Background Cost-effectiveness studies inform resource allocation, strategy, and policy development. However, due to their complexity, dependence on assumptions made, and inherent uncertainty, synthesising, and generalising the results can be difficult. We assess cost-effectiveness models evaluating expected health gains and costs of HIV pre-exposure prophylaxis (PrEP) interventions. Methods and Findings We conducted a systematic review comparing epidemiological and economic assumptions of cost-effectiveness studies using various modelling approaches. The following databases were searched (until January 2013): PubMed/Medline, ISI Web of Knowledge, Centre for Reviews and Dissemination databases, EconLIT, and region-specific databases. We included modelling studies reporting both cost and expected impact of a PrEP roll-out. We explored five issues: prioritisation strategies, adherence, behaviour change, toxicity, and resistance. Of 961 studies retrieved, 13 were included. Studies modelled populations (heterosexual couples, men who have sex with men, people who inject drugs) in generalised and concentrated epidemics from Southern Africa (including South Africa), Ukraine, USA, and Peru. PrEP was found to have the potential to be a cost-effective addition to HIV prevention programmes in specific settings. The extent of the impact of PrEP depended upon assumptions made concerning cost, epidemic context, programme coverage, prioritisation strategies, and individual-level adherence. Delivery of PrEP to key populations at highest risk of HIV exposure appears the most cost-effective strategy. Limitations of this review include the partial geographical coverage, our inability to perform a meta-analysis, and the paucity of information available exploring trade-offs between early treatment and PrEP. Conclusions Our review identifies the main considerations to address in assessing cost-effectiveness analyses of a PrEP intervention—cost, epidemic context, individual adherence level, PrEP programme coverage, and prioritisation strategy. Cost-effectiveness studies indicating where resources can be applied for greatest impact are essential to guide resource allocation decisions; however, the results of such analyses must be considered within the context of the underlying assumptions made. Please see later in the article for the Editors' Summary PMID:23554579
NASA Astrophysics Data System (ADS)
Korucu, Ayse; Miller, Richard
2016-11-01
Direct numerical simulations (DNS) of temporally developing shear flames are used to investigate both equation of state (EOS) and unity-Lewis (Le) number assumption effects in hydrocarbon flames at elevated pressure. A reduced kerosene/air mechanism including a semi-global soot formation/oxidation model is used to study soot formation/oxidation processes in a temporally developing hydrocarbon shear flame operating at both atmospheric and elevated pressures for the cubic Peng-Robinson real fluid EOS. Results are compared to simulations using the ideal gas law (IGL). The results show that while the unity-Le number assumption with the IGL EOS under-predicts the flame temperature for all pressures, with the real fluid EOS it under-predicts the flame temperature for 1 and 35 atm and over-predicts the rest. The soot mass fraction, Ys, is only under-predicted for the 1 atm flame for both the IGL and real fluid EOS models. While Ys is over-predicted for elevated pressures with the IGL EOS, for the real gas EOS the Ys predictions are similar to results using a non-unity Le model derived from non-equilibrium thermodynamics and real diffusivities. Adopting the unity Le assumption is shown to cause misprediction of Ys, the flame temperature, and the mass fractions of CO, H and OH.
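For reference, the cubic Peng-Robinson EOS used here has the standard form below (written for molar volume V_m, critical constants T_c and p_c, and acentric factor ω); the ideal gas law p = RT/V_m is the comparison case in the simulations.

```latex
p \;=\; \frac{R T}{V_m - b} \;-\; \frac{a\,\alpha(T)}{V_m^{2} + 2 b V_m - b^{2}},
\qquad
a = 0.45724\,\frac{R^{2} T_c^{2}}{p_c},\quad
b = 0.07780\,\frac{R T_c}{p_c},\quad
\alpha(T) = \Bigl[1 + \kappa\bigl(1 - \sqrt{T/T_c}\bigr)\Bigr]^{2},
```

with κ a standard polynomial in the acentric factor ω.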
Quantum Attack-Resistent Certificateless Multi-Receiver Signcryption Scheme
Li, Huixian; Chen, Xubao; Pang, Liaojun; Shi, Weisong
2013-01-01
Existing certificateless signcryption schemes were designed mainly on the basis of traditional public key cryptography, in which security relies on hard problems such as integer factorization and the discrete logarithm. However, these problems can be solved efficiently by quantum computing, so existing certificateless signcryption schemes are vulnerable to quantum attacks. Multivariate public key cryptography (MPKC), which can resist quantum attacks, is one of the alternative solutions for securing communications in the post-quantum age. Motivated by these concerns, we propose a new construction of a certificateless multi-receiver signcryption scheme (CLMSC) based on MPKC. The new scheme inherits the security of MPKC and can therefore withstand quantum attacks. Multivariate quadratic polynomial operations, which have lower computational complexity than bilinear pairing operations, are employed in signcrypting a message for a certain number of receivers in our scheme. Security analysis shows that our scheme is a secure MPKC-based scheme. We prove its security under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption in the random oracle model. The analysis also shows that our scheme has the security properties of non-repudiation, perfect forward secrecy, perfect backward secrecy and public verifiability. Compared with existing schemes in terms of computational complexity and ciphertext length, our scheme is more efficient, which makes it suitable for terminals with low computation capacity like smart cards. PMID:23967037
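For orientation, the two hardness assumptions invoked here can be stated informally as follows; these are the standard definitions, not quotations from the paper.

```latex
\textbf{MQ:}\ \text{given quadratic polynomials } p_1,\dots,p_m \in \mathbb{F}_q[x_1,\dots,x_n],
\ \text{find } \mathbf{x}\in\mathbb{F}_q^{\,n} \text{ with } p_1(\mathbf{x})=\dots=p_m(\mathbf{x})=0. \\
\textbf{IP:}\ \text{given polynomial maps } P \text{ and } Q,
\ \text{find invertible affine maps } S, T \text{ such that } Q = T \circ P \circ S .
```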
Consequences of Violated Equating Assumptions under the Equivalent Groups Design
ERIC Educational Resources Information Center
Lyren, Per-Erik; Hambleton, Ronald K.
2011-01-01
The equal ability distribution assumption associated with the equivalent groups equating design was investigated in the context of a selection test for admission to higher education. The purpose was to assess the consequences for the test-takers in terms of receiving improperly high or low scores compared to their peers, and to find strong…
Accountability Policies and Teacher Decision Making: Barriers to the Use of Data to Improve Practice
ERIC Educational Resources Information Center
Ingram, Debra; Louis, Karen Seashore; Schroeder, Roger G.
2004-01-01
One assumption underlying accountability policies is that results from standardized tests and other sources will be used to make decisions about school and classroom practice. We explore this assumption using data from a longitudinal study of nine high schools nominated as leading practitioners of Continuous Improvement (CI) practices. We use the…
Examining Assumptions in Second Language Research: A Postmodern View. CLCS Occasional Paper No. 45.
ERIC Educational Resources Information Center
Masny, Diana
In a review of literature on second language learning, an opinion is put forth that certain assumptions underlying the theory and the research have influenced researchers' attitudes about second language development and diminished the objectivity of the research. Furthermore the content of the research must then be examined within its…
Reliability of Children's Testimony in the Era of Developmental Reversals
ERIC Educational Resources Information Center
Brainerd, C. J.; Reyna, V. F.
2012-01-01
A hoary assumption of the law is that children are more prone to false-memory reports than adults, and hence, their testimony is less reliable than adults'. Since the 1980s, that assumption has been buttressed by numerous studies that detected declines in false memory between early childhood and young adulthood under controlled conditions.…
Two-Stage Modeling of Formaldehyde-Induced Tumor Incidence in the Rat—analysis of Uncertainties
This work extends the 2-stage cancer modeling of tumor incidence in formaldehyde-exposed rats carried out at the CIIT Centers for Health Research. We modify key assumptions, evaluate the effect of selected uncertainties, and develop confidence bounds on parameter estimates. Th...
Learning in Equity-Oriented Scale-Making Projects
ERIC Educational Resources Information Center
Jurow, A. Susan; Shea, Molly
2015-01-01
This article examines how new forms of learning and expertise are made to become consequential in changing communities of practice. We build on notions of scale making to understand how particular relations between practices, technologies, and people become meaningful across spatial and temporal trajectories of social action. A key assumption of…
Choice in School: Its Importance and Scope
ERIC Educational Resources Information Center
Yagnamurthy, Sreekanth
2013-01-01
The author reviews various factors defining school choice, and school curricula. A key assumption behind the rhetoric logic of school choice is the notion that parents actually choose from schools of varying quality. If parents choose from high-quality schools, choice policy will enhance educational opportunities. If, however, the considered…
Competing Conceptions of Delinquent Peer Relations.
ERIC Educational Resources Information Center
Hansell, Stephen; Wiatrowski, Michael D.
Both social ability and social disability models of delinquent peer relations have been developed to explain the social relations of delinquents. A key difference between these models is the assumption of normal social relations among delinquents in the social ability model, contrasted to the social ineptitude and lack of social skills attributed…
Others and the Problem of Community
ERIC Educational Resources Information Center
Fendler, Lynn
2006-01-01
Community building has been a key concern for a wide array of educational projects. Recently, educational theories concerned about social justice have begun to challenge assumptions about community in U.S. education by criticizing its tendencies toward assimilation and homogeneity. Such theories point out that a communitarian agenda excludes the…
School Psychology Services: Community-Based, First-Order Crisis Intervention during the Gulf War.
ERIC Educational Resources Information Center
Klingman, Avigdor
1992-01-01
Examines the community-based mental health preventive measures undertaken by the school psychology services in response to the missile attacks on Israel during the Gulf War. Attempts to report and delineate the major assumptions and components of some of the key interventions. (Author/NB)
Scripture, Sin and Salvation: Theological Conservatism Reconsidered
ERIC Educational Resources Information Center
Hempel, Lynn M.; Bartkowski, John P.
2008-01-01
Using insights from ethnographic studies of conservative Protestant congregations, the authors propose and test a refined conceptual model of theological conservatism that accounts for three key components of a theologically conservative worldview: (1) epistemology, a belief in the Bible as the inspired word of God, (2) ontology, assumptions about…
Relative dispersal ability of a key agricultural pest and its predators in an annual agroecosystem
USDA-ARS?s Scientific Manuscript database
In annual agroecosystems staggered planting dates, pesticide treatments, and harvesting events create a “shifting mosaic” of habitats that leads to frequent recolonization by herbivores and natural enemies. In these systems, an untested assumption is that herbivores have higher rates of dispersal re...
Towards Socially Just Pedagogies: Deleuzoguattarian Critical Disability Studies
ERIC Educational Resources Information Center
Goodley, Dan
2007-01-01
Socially just pedagogies call for sensitivity to politics and culture. In this paper I will uncover some key challenges in relation to working pedagogically with disabled people through the exploration of a critical disability studies perspective. First, I will unpack some of the assumptions that underpin educational understandings of…
Inertial Fusion Energy reactor design studies: Prometheus-L, Prometheus-H. Volume 2, Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waganer, L.M.; Driemeyer, D.E.; Lee, V.D.
1992-03-01
This report contains a review of design studies for an inertial confinement fusion reactor. This second of three volumes discusses in some detail the following: objectives, requirements, and assumptions; rationale for design option selection; key technical issues and R&D requirements; and conceptual design selection and description.
EVALUATION OF HOST SPECIFIC PCR-BASED METHODS FOR THE IDENTIFICATION OF FECAL POLLUTION
Microbial Source Tracking (MST) is an approach to determine the origin of fecal pollution impacting a body of water. MST is based on the assumption that, given the appropriate method and indicator, the source of microbial pollution can be identified. One of the key elements of...
ERIC Educational Resources Information Center
Bolkan, San; Goodboy, Alan K.; Myers, Scott A.
2017-01-01
This study examined two effective teaching behaviors traditionally considered by instructional communication scholars to associate positively with students' academic experiences: instructor clarity and immediacy. Our study situated these teaching behaviors in a conditional process model that integrated two key assumptions about student learning:…
A Critical Realist Orientation to Learner Needs
ERIC Educational Resources Information Center
Ayers, David F.
2011-01-01
The objective of this essay is to propose critical realism as a philosophical middle way between two sets of ontological, epistemological, and methodological assumptions regarding learner needs. Key concepts of critical realism, a tradition in the philosophy of science, are introduced and applied toward an analysis of learner needs, resulting in…
Apprenticeship in Canada: Where's the Crisis?
ERIC Educational Resources Information Center
Meredith, John
2011-01-01
Unique features of the 2006 census, as well as 33 employer interviews, provide an opportunity to test key assumptions in Canadian apprenticeship policy. Apprenticeship is widely understood to be a crucial contributor to the national skills supply, but also an institution critically vulnerable to market failures, mainly due to the insecurity of…
Improved key-rate bounds for practical decoy-state quantum-key-distribution systems
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Zhao, Qi; Razavi, Mohsen; Ma, Xiongfeng
2017-01-01
The decoy-state scheme is the most widely implemented quantum-key-distribution protocol in practice. In order to account for the finite-size key effects on the achievable secret key generation rate, a rigorous statistical fluctuation analysis is required. Originally, a heuristic Gaussian-approximation technique was used for this purpose, which, despite its analytical convenience, was not sufficiently rigorous. The fluctuation analysis has recently been made rigorous by using the Chernoff bound. There is a considerable gap, however, between the key-rate bounds obtained from these techniques and that obtained from the Gaussian assumption. Here we develop a tighter bound for the decoy-state method, which yields a smaller failure probability. This improvement results in a higher key rate and increases the maximum distance over which secure key exchange is possible. By optimizing the system parameters, our simulation results show that our method almost closes the gap between the two previously proposed techniques and achieves a performance similar to that of conventional Gaussian approximations.
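The following Python sketch is illustrative only (it is not the authors' improved bound): it contrasts the width of a finite-size deviation term obtained from a Gaussian approximation with one obtained from a standard multiplicative Chernoff bound, for an expected count mu and failure probability eps.

```python
import math
from statistics import NormalDist

def gaussian_deviation(mu, eps):
    """Gaussian-approximation upper deviation: z_(1-eps) * sqrt(mu),
    treating the count as approximately Poissonian (an assumption for illustration)."""
    return NormalDist().inv_cdf(1.0 - eps) * math.sqrt(mu)

def chernoff_deviation(mu, eps):
    """Smallest d with exp(-d^2 / (2*mu + d)) <= eps, i.e. the upper deviation
    guaranteed by the multiplicative Chernoff bound P[X >= mu + d] <= exp(-d^2/(2*mu + d))."""
    L = math.log(1.0 / eps)
    return 0.5 * (L + math.sqrt(L * L + 8.0 * mu * L))

eps = 1e-10                       # failure probability per bound
for mu in (1e2, 1e4, 1e6):        # expected number of detection events
    print(int(mu), round(gaussian_deviation(mu, eps), 1), round(chernoff_deviation(mu, eps), 1))
```

The gap between the two columns shrinks as mu grows, which mirrors why the Chernoff-based analysis is most punishing (and a tighter bound most valuable) in the small-count, long-distance regime.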
Chapman, Rachel; Smith, Lisa L; Bond, John W
2012-07-01
Car key burglary has recently become the focus of empirical investigation as offenders, no longer able to steal vehicles without first obtaining their keys, resort to "burgling" target properties. Research surrounding the modus operandi of these offenses is beginning to emerge; however, little attention has been paid to investigating the characteristics of car key burglary offenders. Challenging the assumption that car key burglary offenses are perpetrated by regular burglars, this study aims to differentiate between offenders. Logistic regression analysis of 110 car key and 110 regular burglary offenders revealed that car key burglars are more likely to have previous vehicle theft convictions and are also more likely to be detected on information supplied to the police than regular burglars. Regular burglars are more likely to have previous shoplifting convictions. It was concluded that car key burglars are a distinct sample of offenders and the implications of these findings are discussed. © 2012 American Academy of Forensic Sciences.
Change-in-ratio methods for estimating population size
Udevitz, Mark S.; Pollock, Kenneth H.; McCullough, Dale R.; Barrett, Reginald H.
2002-01-01
Change-in-ratio (CIR) methods can provide an effective, low cost approach for estimating the size of wildlife populations. They rely on being able to observe changes in proportions of population subclasses that result from the removal of a known number of individuals from the population. These methods were first introduced in the 1940’s to estimate the size of populations with 2 subclasses under the assumption of equal subclass encounter probabilities. Over the next 40 years, closed population CIR models were developed to consider additional subclasses and use additional sampling periods. Models with assumptions about how encounter probabilities vary over time, rather than between subclasses, also received some attention. Recently, all of these CIR models have been shown to be special cases of a more general model. Under the general model, information from additional samples can be used to test assumptions about the encounter probabilities and to provide estimates of subclass sizes under relaxations of these assumptions. These developments have greatly extended the applicability of the methods. CIR methods are attractive because they do not require the marking of individuals, and subclass proportions often can be estimated with relatively simple sampling procedures. However, CIR methods require a carefully monitored removal of individuals from the population, and the estimates will be of poor quality unless the removals induce substantial changes in subclass proportions. In this paper, we review the state of the art for closed population estimation with CIR methods. Our emphasis is on the assumptions of CIR methods and on identifying situations where these methods are likely to be effective. We also identify some important areas for future CIR research.
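For concreteness, the classical two-subclass CIR estimator (assuming equal encounter probabilities for the two subclasses, as in the original formulation) follows directly from the removal bookkeeping; the notation here is generic rather than taken from the paper. With p1 and p2 the subclass-x proportions before and after removing R individuals, Rx of them of subclass x, and N the pre-removal population size:

```latex
p_2 \;=\; \frac{p_1 N - R_x}{N - R}
\quad\Longrightarrow\quad
\hat{N} \;=\; \frac{R_x - \hat{p}_2 R}{\hat{p}_1 - \hat{p}_2},
```

which is only well behaved when the removal changes the subclass proportions appreciably (so that the denominator is not close to zero), matching the caveat in the abstract.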
A Corticothalamic Circuit Model for Sound Identification in Complex Scenes
Otazu, Gonzalo H.; Leibold, Christian
2011-01-01
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
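The key assumption (some cortical units encode the difference between the observed signal and an internal estimate) can be sketched with a toy iterative estimator in Python; the dictionary, learning rate, and dimensions below are illustrative assumptions, not the published circuit model.

```python
import numpy as np

rng = np.random.default_rng(1)

n_features, n_objects = 64, 20
D = rng.normal(size=(n_features, n_objects))          # dictionary of possible auditory objects
true_a = np.zeros(n_objects)
true_a[[3, 11]] = [1.0, 0.6]                          # two concurrently active sources
x = D @ true_a + 0.01 * rng.normal(size=n_features)   # observed superimposed scene

a = np.zeros(n_objects)                               # internal estimate of object activations
for _ in range(2000):
    error = x - D @ a          # "error units": observed signal minus internal prediction
    a += 0.005 * (D.T @ error) # drive the estimate to explain the residual

print(np.argsort(a)[-2:])      # indices of the two most active dictionary elements -> [11, 3]
```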
Using directed information for influence discovery in interconnected dynamical systems
NASA Astrophysics Data System (ADS)
Rao, Arvind; Hero, Alfred O.; States, David J.; Engel, James Douglas
2008-08-01
Structure discovery in non-linear dynamical systems is an important and challenging problem that arises in various applications such as computational neuroscience, econometrics, and biological network discovery. Each of these systems has multiple interacting variables, and the key problem is the inference of the underlying structure of the system (which variables are connected to which others) based on the output observations (such as multiple time trajectories of the variables). Since such applications demand the inference of directed relationships among variables in these non-linear systems, current methods that have a linear assumption on structure or yield undirected variable dependencies are insufficient. Hence, in this work, we present a methodology for structure discovery using an information-theoretic metric called directed time information (DTI). Using both synthetic dynamical systems as well as true biological datasets (kidney development and T-cell data), we demonstrate the utility of DTI in such problems.
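For reference, directed information in Massey's sense is defined as below; the directed time information (DTI) used by the authors is a related directional measure whose exact definition may differ.

```latex
I\!\left(X^{N} \to Y^{N}\right) \;=\; \sum_{t=1}^{N} I\!\left(X^{t};\, Y_{t} \,\middle|\, Y^{t-1}\right),
\qquad X^{t} = (X_1,\dots,X_t).
```

Unlike mutual information, this quantity is asymmetric in X and Y, which is what makes it suitable for inferring directed rather than undirected dependencies.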
Take a Closer Look:Biofuels Can Support Environmental, Economic and Social Goals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dale, Bruce E.; Anderson, James; Brown, Dr. Robert C.
The US Congress passed the Renewable Fuels Standard (RFS) seven years ago. Since then, biofuels have gone from darling to scapegoat for many environmentalists, policy makers, and the general public. The reasons for this shift are complex and include concerns about environmental degradation, uncertainties about impact on food security, new access to fossil fuels, and overly optimistic timetables. As a result, many people have written off biofuels. However, numerous studies indicate that biofuels, if managed sustainably, can help solve pressing environmental, social and economic problems (Figure 1). The scientific and policy communities should take a closer look by reviewing the key assumptions underlying opposition to biofuels and carefully consider the probable alternatives. Liquid fuels based on fossil raw materials are likely to come at increasing environmental cost. Sustainable futures require energy conservation, increased efficiency, and alternatives to fossil fuels, including biofuels.
Grounded theory in medical education research: AMEE Guide No. 70.
Watling, Christopher J; Lingard, Lorelei
2012-01-01
Qualitative research in general and the grounded theory approach in particular, have become increasingly prominent in medical education research in recent years. In this Guide, we first provide a historical perspective on the origin and evolution of grounded theory. We then outline the principles underlying the grounded theory approach and the procedures for doing a grounded theory study, illustrating these elements with real examples. Next, we address key critiques of grounded theory, which continue to shape how the method is perceived and used. Finally, pitfalls and controversies in grounded theory research are examined to provide a balanced view of both the potential and the challenges of this approach. This Guide aims to assist researchers new to grounded theory to approach their studies in a disciplined and rigorous fashion, to challenge experienced researchers to reflect on their assumptions, and to arm readers of medical education research with an approach to critically appraising the quality of grounded theory studies.
An Enhanced Privacy-Preserving Authentication Scheme for Vehicle Sensor Networks.
Zhou, Yousheng; Zhao, Xiaofeng; Jiang, Yi; Shang, Fengjun; Deng, Shaojiang; Wang, Xiaojun
2017-12-08
Vehicle sensor networks (VSNs) are ushering in a promising future by enabling more intelligent transportation systems and providing a more efficient driving experience. However, because of their inherent openness, VSNs are subject to a large number of potential security threats. Although various authentication schemes have been proposed for addressing security problems, they are not suitable for VSN applications because of their high computation and communication costs. Chuang and Lee have developed a trust-extended authentication mechanism (TEAM) for vehicle-to-vehicle communication using a transitive trust relationship, which they claim can resist various attacks. However, it fails to counter internal attacks because of the utilization of a shared secret key. In this paper, to eliminate the vulnerability of TEAM, an enhanced privacy-preserving authentication scheme for VSNs is constructed. The security of our proposed scheme is proven under the random oracle model based on the assumption of the computational Diffie-Hellman problem.
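For reference, the computational Diffie-Hellman (CDH) assumption underlying the security proof is the standard one:

```latex
\text{given } \bigl(g,\; g^{a},\; g^{b}\bigr) \text{ in a cyclic group } G = \langle g \rangle
\text{ of prime order } q,\ \text{with } a,b \text{ chosen uniformly from } \mathbb{Z}_q,\ \text{compute } g^{ab}.
```

The security argument in the random oracle model reduces a successful attack on the scheme to an efficient solver for this problem in the chosen group.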
Geometric method for forming periodic orbits in the Lorenz system
NASA Astrophysics Data System (ADS)
Nicholson, S. B.; Kim, Eun-jin
2016-04-01
Many systems in nature are out of equilibrium and irreversible. The non-detailed balance observable representation (NOR) provides a useful methodology for understanding the evolution of such non-equilibrium complex systems, by mapping out the correlation between two states to a metric space where a small distance represents a strong correlation [1]. In this paper, we present the first application of the NOR to a continuous system and demonstrate its utility in controlling chaos. Specifically, we consider the evolution of a continuous system governed by the Lorenz equation and calculate the NOR by following a sufficient number of trajectories. We then show how to control chaos by converting chaotic orbits to periodic orbits by utilizing the NOR. We further discuss the implications of our method for potential applications given the key advantage that this method makes no assumptions of the underlying equations of motion and is thus extremely general.
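The continuous system referred to here is the classical Lorenz system; the Python sketch below integrates it and coarse-grains the trajectory into discrete states, the kind of raw material from which state-to-state correlations (and hence something like the NOR) could be estimated. The NOR construction itself is not reproduced here, and the octant coarse-graining is an assumption made for illustration.

```python
import numpy as np

def lorenz_trajectory(n_steps=100_000, dt=1e-3,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0, s0=(1.0, 1.0, 1.0)):
    """Integrate the Lorenz equations with a 4th-order Runge-Kutta scheme."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    traj = np.empty((n_steps, 3))
    s = np.array(s0, dtype=float)
    for i in range(n_steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        traj[i] = s
    return traj

traj = lorenz_trajectory()
# Coarse-grain the attractor into 8 discrete states (octants of state space),
# a typical first step before estimating correlations between states.
states = (traj > 0).astype(int) @ np.array([4, 2, 1])
print(np.bincount(states, minlength=8) / len(states))
```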
Beyond the conventional understanding of water-rock reactivity
NASA Astrophysics Data System (ADS)
Fischer, Cornelius; Luttge, Andreas
2017-01-01
A common assumption is that water-rock reaction rates should converge to a mean value. There is, however, an emerging consensus on the genuine nature of reaction rate variations under identical chemical conditions. Thus, the further use of mean reaction rates for the prediction of material fluxes is environmentally and economically risky, manifest for example in the management of nuclear waste or the evolution of reservoir rocks. Surface-sensitive methods and resulting information about heterogeneous surface reactivity illustrate the inherent rate variability. Consequently, a statistical analysis was developed in order to quantify the heterogeneity of surface rates. We show how key components of the rate combine to give an overall rate and how the identification of those individual rate contributors provide mechanistic insight into complex heterogeneous reactions. This generates a paradigm change by proposing a new pathway to reaction model parameterization and for the prediction of reaction rates.
Fairness emergence from zero-intelligence agents
NASA Astrophysics Data System (ADS)
Duan, Wen-Qi; Stanley, H. Eugene
2010-02-01
Fairness plays a key role in explaining the emergence and maintenance of cooperation. Opponent-oriented social utility models have often been proposed to explain the origins of fairness preferences, in which agents take into account not only their own outcomes but are also concerned with the outcomes of their opponents. Here, we propose a payoff-oriented mechanism in which agents update their beliefs based only on the payoff signals of the previous ultimatum game, regardless of the behaviors and outcomes of the opponents themselves. Employing an adaptive ultimatum game, we show that (1) fairness behaviors can emerge even under such minimalist assumptions, provided that agents are capable of responding to their payoff signals, (2) the average game payoff per agent per round decreases with the increasing discrepancy between the average giving rate and the average asking rate, and (3) the belief update process will lead to a 50%-50% fair split provided that there is no mutation in the evolutionary dynamics.
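A minimal Python sketch of a payoff-oriented mechanism of this kind (agents adjust offer and acceptance beliefs using only their own payoff signal from the last round) might look as follows; the update rule and parameters are assumptions for illustration, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, rounds, step = 200, 20_000, 0.01

offer = rng.uniform(0, 1, n_agents)    # fraction of the pie each agent offers as proposer
demand = rng.uniform(0, 1, n_agents)   # minimum fraction each agent accepts as responder

for _ in range(rounds):
    i, j = rng.choice(n_agents, size=2, replace=False)   # i proposes, j responds
    accepted = offer[i] >= demand[j]
    # Payoff-based updates: no information about the opponent's strategy is used.
    if accepted:
        offer[i] -= step * rng.random()    # proposer probes a slightly greedier offer
        demand[j] += step * rng.random()   # responder probes a slightly higher demand
    else:
        offer[i] += step * rng.random()    # rejected: concede a little next time
        demand[j] -= step * rng.random()   # got nothing: lower the demand
    offer[i] = np.clip(offer[i], 0, 1)
    demand[j] = np.clip(demand[j], 0, 1)

print(round(offer.mean(), 2), round(demand.mean(), 2))
```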
How a new 'public plan' could affect hospitals' finances and private insurance premiums.
Dobson, Allen; DaVanzo, Joan E; El-Gamil, Audrey M; Berger, Gregory
2009-01-01
Two key health reform bills in the House of Representatives and Senate include the option of a "public plan" as an additional source of health coverage. At least initially, the plan would primarily be structured to cover many of the uninsured and those who now have individual coverage. Because it is possible, and perhaps even likely, that this new public payer would pay less than private payers for the same services, such a plan could negatively affect hospital margins. Hospitals may attempt to recoup losses by shifting costs to private payers. We outline the financial pressures that hospitals and private payers could experience under various assumptions. High uninsured enrollment in a public plan would bolster hospital margins; however, this effect is reversed if the privately insured enter a public plan in large proportions, potentially stressing the hospital industry and increasing private insurance premiums.
Hutson, Alan D
2018-01-01
In this note, we develop a new and novel semi-parametric estimator of the survival curve that is comparable to the product-limit estimator under very relaxed assumptions. The estimator is based on a beta parametrization that warps the empirical distribution of the observed censored and uncensored data. The parameters are obtained using a pseudo-maximum likelihood approach adjusting the survival curve accounting for the censored observations. In the univariate setting, the new estimator tends to better extend the range of the survival estimation given a high degree of censoring. However, the key feature of this paper is that we develop a new two-group semi-parametric exact permutation test for comparing survival curves that is generally superior to the classic log-rank and Wilcoxon tests and provides the best global power across a variety of alternatives. The new test is readily extended to the k group setting. PMID:26988931
Automated Environment Generation for Software Model Checking
NASA Technical Reports Server (NTRS)
Tkachuk, Oksana; Dwyer, Matthew B.; Pasareanu, Corina S.
2003-01-01
A key problem in model checking open systems is environment modeling (i.e., representing the behavior of the execution context of the system under analysis). Software systems are fundamentally open since their behavior is dependent on patterns of invocation of system components and values defined outside the system but referenced within the system. Whether reasoning about the behavior of whole programs or about program components, an abstract model of the environment can be essential in enabling sufficiently precise yet tractable verification. In this paper, we describe an approach to generating environments of Java program fragments. This approach integrates formally specified assumptions about environment behavior with sound abstractions of environment implementations to form a model of the environment. The approach is implemented in the Bandera Environment Generator (BEG) which we describe along with our experience using BEG to reason about properties of several non-trivial concurrent Java programs.
Urban land teleconnections and sustainability
Seto, Karen C.; Reenberg, Anette; Boone, Christopher G.; Fragkias, Michail; Haase, Dagmar; Langanke, Tobias; Marcotullio, Peter; Munroe, Darla K.; Olah, Branislav; Simon, David
2012-01-01
This paper introduces urban land teleconnections as a conceptual framework that explicitly links land changes to underlying urbanization dynamics. We illustrate how three key themes that are currently addressed separately in the urban sustainability and land change literatures can lead to incorrect conclusions and misleading results when they are not examined jointly: the traditional system of land classification that is based on discrete categories and reinforces the false idea of a rural–urban dichotomy; the spatial quantification of land change that is based on place-based relationships, ignoring the connections between distant places, especially between urban functions and rural land uses; and the implicit assumptions about path dependency and sequential land changes that underlie current conceptualizations of land transitions. We then examine several environmental “grand challenges” and discuss how urban land teleconnections could help research communities frame scientific inquiries. Finally, we point to existing analytical approaches that can be used to advance development and application of the concept. PMID:22550174
Neural correlates of fixation duration in natural reading: Evidence from fixation-related fMRI.
Henderson, John M; Choi, Wonil; Luke, Steven G; Desai, Rutvik H
2015-10-01
A key assumption of current theories of natural reading is that fixation duration reflects underlying attentional, language, and cognitive processes associated with text comprehension. The neurocognitive correlates of this relationship are currently unknown. To investigate this relationship, we compared neural activation associated with fixation duration in passage reading and a pseudo-reading control condition. The results showed that fixation duration was associated with activation in oculomotor and language areas during text reading. Fixation duration during pseudo-reading, on the other hand, showed greater involvement of frontal control regions, suggesting flexibility and task dependency of the eye movement network. Consistent with current models, these results provide support for the hypothesis that fixation duration in reading reflects attentional engagement and language processing. The results also demonstrate that fixation-related fMRI provides a method for investigating the neurocognitive bases of natural reading. Copyright © 2015 Elsevier Inc. All rights reserved.
Cost-effectiveness of human papillomavirus vaccination in the United States.
Chesson, Harrell W; Ekwueme, Donatus U; Saraiya, Mona; Markowitz, Lauri E
2008-02-01
We describe a simplified model, based on the current economic and health effects of human papillomavirus (HPV), to estimate the cost-effectiveness of HPV vaccination of 12-year-old girls in the United States. Under base-case parameter values, the estimated cost per quality-adjusted life year gained by vaccination in the context of current cervical cancer screening practices in the United States ranged from $3,906 to $14,723 (2005 US dollars), depending on factors such as whether herd immunity effects were assumed; the types of HPV targeted by the vaccine; and whether the benefits of preventing anal, vaginal, vulvar, and oropharyngeal cancers were included. The results of our simplified model were consistent with published studies based on more complex models when key assumptions were similar. This consistency is reassuring because models of varying complexity will be essential tools for policy makers in the development of optimal HPV vaccination strategies.
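The headline quantity in such analyses is the incremental cost-effectiveness ratio; the minimal Python sketch below shows the arithmetic with purely hypothetical per-person numbers, not values from the study.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical discounted lifetime costs and QALYs per person,
# with and without vaccination (illustrative numbers only).
print(round(icer(cost_new=1200.0, qaly_new=23.510,
                 cost_old=800.0,  qaly_old=23.480)))   # ~13333 $/QALY for these inputs
```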
Solomon, M Z; DeJong, W
1986-01-01
In the absence of a cure or vaccine for acquired immune deficiency syndrome (AIDS) educational and social marketing efforts to reduce the transmission of Human T-lymphotropic type III/lymphadenopathy-associated virus (HTLV-III/LAV) are currently our best hope for controlling the disease. Since 1983, the Centers for Disease Control (CDC) has funded a series of research studies to determine whether education efforts can successfully motivate the adoption of key behaviors relevant to the control of a variety of sexually transmitted diseases (STDs). Analysis of the first two studies which are now completed, and preliminary data from a third study, have documented dramatic changes in behavior, knowledge, and attitudes among clients in inner-city public health clinics. The authors describe the principles and underlying assumptions that have guided the design of their STD initiatives, drawing special attention to the implications for AIDS health education efforts.
Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.
Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M
2017-02-27
RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
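As one concrete example of an assumption-laden normalization method, a median-of-ratios size-factor calculation (similar in spirit to the DESeq approach, which rests on the assumption that most genes are not differentially expressed) can be sketched in Python as follows.

```python
import numpy as np

def size_factors(counts):
    """Median-of-ratios size factors for a genes x samples count matrix.
    Rests on the assumption that most genes are not differentially expressed."""
    counts = np.asarray(counts, dtype=float)
    with np.errstate(divide="ignore"):
        log_counts = np.log(counts)
    log_geo_mean = log_counts.mean(axis=1)             # per-gene geometric mean (log scale)
    usable = np.isfinite(log_geo_mean)                 # drop genes with a zero in any sample
    log_ratios = log_counts[usable] - log_geo_mean[usable, None]
    return np.exp(np.median(log_ratios, axis=0))       # one factor per sample

toy = np.array([[100, 200, 50],
                [ 30,  60, 15],
                [ 10,  21,  5],
                [500, 995, 251]])
print(size_factors(toy))   # roughly [1.0, 2.0, 0.5] for this toy matrix
```

When the no-differential-expression assumption is badly violated (e.g., global shifts in expression), these factors are biased, which is exactly the failure mode the article emphasizes.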
Ozone chemical equilibrium in the extended mesopause under the nighttime conditions
NASA Astrophysics Data System (ADS)
Belikovich, M. V.; Kulikov, M. Yu.; Grygalashvyly, M.; Sonnemann, G. R.; Ermakova, T. S.; Nechaev, A. A.; Feigin, A. M.
2018-01-01
For retrieval of atomic oxygen and atomic hydrogen via ozone observations in the extended mesopause region (∼70-100 km) under nighttime conditions, the assumption of photochemical equilibrium of ozone is often used. In this work, the assumption of nighttime chemical equilibrium of ozone near the mesopause is tested. We examine annual calculations from a 3D chemistry-transport model (CTM) and determine the ratio between the correct (modeled) distributions of the O3 density and its equilibrium values as a function of altitude, latitude, and season. The results show that retrieving atomic oxygen and atomic hydrogen distributions under the assumption of ozone chemical equilibrium may lead to large errors below ∼81-87 km. We give a simple and clear semi-empirical criterion for locating, in practice, the lower boundary of the region near the mesopause in which ozone is in chemical equilibrium.
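For orientation, the equilibrium condition at issue can be written in the standard simplified form used in mesospheric chemistry; the reaction set and notation below are my simplification, not taken from the paper.

```latex
% Sketch (simplified): nighttime O3 production by three-body recombination
% balancing loss through reactions with atomic hydrogen and atomic oxygen gives
\[
  [\mathrm{O_3}]_{\mathrm{eq}} \;\approx\;
  \frac{k_{\mathrm{O+O_2+M}}\,[\mathrm{O}][\mathrm{O_2}][\mathrm{M}]}
       {k_{\mathrm{H+O_3}}[\mathrm{H}] \;+\; k_{\mathrm{O+O_3}}[\mathrm{O}]}\,,
\]
% so measured O3 together with one of [O] or [H] constrains the other -- but only
% where the equilibrium assumption actually holds, which is the question studied above.
```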
Stress testing hydrologic models using bottom-up climate change assessment
NASA Astrophysics Data System (ADS)
Stephens, C.; Johnson, F.; Marshall, L. A.
2017-12-01
Bottom-up climate change assessment is a promising approach for understanding the vulnerability of a system to potential future changes. The technique has been utilised successfully in risk-based assessments of future flood severity and infrastructure vulnerability. We find that it is also an ideal tool for assessing hydrologic model performance in a changing climate. In this study, we applied bottom-up climate change assessment to compare the performance of two different hydrologic models (an event-based and a continuous model) under increasingly severe climate change scenarios. This allowed us to diagnose likely sources of future prediction error in the two models. The climate change scenarios were based on projections for southern Australia, which indicate drier average conditions with increased extreme rainfall intensities. We found that the key weakness in using the event-based model to simulate drier future scenarios was the model's inability to dynamically account for changing antecedent conditions. This led to increased variability in model performance relative to the continuous model, which automatically accounts for the wetness of a catchment through dynamic simulation of water storages. When considering more intense future rainfall events, representation of antecedent conditions became less important than assumptions around (non)linearity in catchment response. The linear continuous model we applied may underestimate flood risk in a future climate with greater extreme rainfall intensity. In contrast with the recommendations of previous studies, this indicates that continuous simulation is not necessarily the key to robust flood modelling under climate change. By applying bottom-up climate change assessment, we were able to understand systematic changes in relative model performance under changing conditions and deduce likely sources of prediction error in the two models.
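Computationally, a bottom-up stress test of this kind reduces to sweeping a grid of climate perturbations and re-running the hydrologic model under each one. The sketch below is a hypothetical illustration; the perturbation ranges, the placeholder run_model function, and the flood-peak metric are mine, not the study's.

```python
import numpy as np

def run_model(rainfall, pet):
    """Placeholder for a calibrated rainfall-runoff model; returns simulated flow.
    In a real study this would be the event-based or continuous model."""
    return np.maximum(rainfall - 0.5 * pet, 0.0)

rng = np.random.default_rng(1)
rainfall = rng.gamma(2.0, 5.0, size=365)   # synthetic daily rainfall (mm)
pet = np.full(365, 3.0)                    # synthetic daily potential evapotranspiration (mm)

# Bottom-up stress test: sweep a grid of plausible climate perturbations
# rather than relying on a single downscaled projection.
results = {}
for dp in (-0.2, -0.1, 0.0):               # change in mean rainfall (drier futures)
    for di in (1.0, 1.1, 1.2):             # scaling of extreme-rainfall intensity
        perturbed = rainfall * (1.0 + dp)
        extremes = perturbed > np.percentile(perturbed, 95)
        perturbed[extremes] *= di
        flow = run_model(perturbed, pet)
        results[(dp, di)] = flow.max()     # e.g. flood peak under this scenario

for scenario, peak in sorted(results.items()):
    print(scenario, round(peak, 1))
```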
Exploring harmonization between integrated assessment and capacity expansion models
NASA Astrophysics Data System (ADS)
Iyer, G.; Brown, M.; Cohen, S.; Macknick, J.; Patel, P.; Wise, M. A.; Horing, J.
2017-12-01
Forward-looking quantitative models of the electric sector are extensively used to provide science-based strategic decision support to national, international, and private-sector entities. Given that these models are used to inform a wide range of stakeholders and influence policy decisions, it is vital to examine how the models' underlying data and structure influence their outcomes. We conduct several experiments harmonizing key model characteristics between ReEDS, an electric-sector-only model, and GCAM, an integrated assessment model, to understand how different degrees of harmonization affect model outcomes. ReEDS has high spatial, temporal, and process detail but lacks electricity demand elasticity and endogenous representations of other economic sectors, while GCAM has internally consistent representations of energy (including the electric sector), agriculture, and land-use systems but relatively aggregate representations of the factors influencing electric sector investments. We vary the degree of harmonization in electricity demand, fuel prices, technology costs and performance, and variable renewable energy resource characteristics. We then identify the prominent sources of divergence in key outputs (electricity capacity, generation, and price) across the models and study how the convergence between models can be improved with permutations of harmonized characteristics. The remaining inconsistencies help to establish how differences in the models' underlying data, construction, perspective, and methodology play into each model's outcome. There are three broad contributions of this work. First, our study provides a framework to link models with similar scope but different resolutions. Second, our work provides insight into how the harmonization of assumptions contributes to a unified and robust portrayal of the US electricity sector under various potential futures. Finally, our study enhances the understanding of the influence of structural uncertainty on consistency of outcomes.
Existence of Torsional Solitons in a Beam Model of Suspension Bridge
NASA Astrophysics Data System (ADS)
Benci, Vieri; Fortunato, Donato; Gazzola, Filippo
2017-11-01
This paper studies the existence of solitons, namely stable solitary waves, in an idealized suspension bridge. The bridge is modeled as an unbounded degenerate plate, that is, a central beam with cross sections, and displays two degrees of freedom: the vertical displacement of the beam and the torsional angles of the cross sections. Under fairly general assumptions, we prove the existence of solitons. Under the additional assumption of large tension in the sustaining cables, we prove that these solitons have a nontrivial torsional component. This appears relevant for security since several suspension bridges collapsed due to torsional oscillations.
Mohiuddin, Syed; Busby, John; Savović, Jelena; Richards, Alison; Northstone, Kate; Hollingworth, William; Donovan, Jenny L; Vasilakis, Christos
2017-01-01
Objectives Overcrowding in the emergency department (ED) is common in the UK as in other countries worldwide. Computer simulation is one approach used for understanding the causes of ED overcrowding and assessing the likely impact of changes to the delivery of emergency care. However, little is known about the usefulness of computer simulation for analysis of ED patient flow. We undertook a systematic review to investigate the different computer simulation methods and their contribution for analysis of patient flow within EDs in the UK. Methods We searched eight bibliographic databases (MEDLINE, EMBASE, COCHRANE, WEB OF SCIENCE, CINAHL, INSPEC, MATHSCINET and ACM DIGITAL LIBRARY) from date of inception until 31 March 2016. Studies were included if they used a computer simulation method to capture patient progression within the ED of an established UK National Health Service hospital. Studies were summarised in terms of simulation method, key assumptions, input and output data, conclusions drawn and implementation of results. Results Twenty-one studies met the inclusion criteria. Of these, 19 used discrete event simulation and 2 used system dynamics models. The purpose of many of these studies (n=16; 76%) centred on service redesign. Seven studies (33%) provided no details about the ED being investigated. Most studies (n=18; 86%) used specific hospital models of ED patient flow. Overall, the reporting of underlying modelling assumptions was poor. Nineteen studies (90%) considered patient waiting or throughput times as the key outcome measure. Twelve studies (57%) reported some involvement of stakeholders in the simulation study. However, only three studies (14%) reported on the implementation of changes supported by the simulation. Conclusions We found that computer simulation can provide a means to pretest changes to ED care delivery before implementation in a safe and efficient manner. However, the evidence base is small and poorly developed. There are some methodological, data, stakeholder, implementation and reporting issues, which must be addressed by future studies. PMID:28487459
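For readers unfamiliar with the technique, a discrete event simulation of ED patient flow can be sketched in a few lines with a general-purpose simulation library. Everything below (arrival rate, staffing, service times) is invented for illustration and does not correspond to any of the reviewed models.

```python
import random
import simpy  # a widely used Python discrete-event simulation library

WAITS = []

def patient(env, clinicians):
    arrival = env.now
    with clinicians.request() as req:                    # queue for a free clinician
        yield req
        WAITS.append(env.now - arrival)                  # time spent waiting to be seen
        yield env.timeout(random.expovariate(1 / 20.0))  # ~20 min treatment time

def arrivals(env, clinicians):
    while True:
        yield env.timeout(random.expovariate(1 / 8.0))   # ~one arrival every 8 minutes
        env.process(patient(env, clinicians))

random.seed(0)
env = simpy.Environment()
clinicians = simpy.Resource(env, capacity=3)
env.process(arrivals(env, clinicians))
env.run(until=8 * 60)                                    # simulate one 8-hour shift (minutes)
print(f"mean wait: {sum(WAITS) / len(WAITS):.1f} min over {len(WAITS)} patients")
```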
ERIC Educational Resources Information Center
Finch, Holmes; Stage, Alan Kirk; Monahan, Patrick
2008-01-01
A primary assumption underlying several of the common methods for modeling item response data is unidimensionality, that is, that test items tap into only one latent trait. This assumption can be assessed in several ways, including nonlinear factor analysis and DETECT, a method based on the item conditional covariances. When multidimensionality is identified,…
Idiographic versus Nomothetic Approaches to Research in Organizations.
1981-07-01
alternative methodologic assumption based on intensive examination of one or a few cases under the theoretic assumption of dynamic interactionism is, with... phenomenological studies the researcher may not enter the actual setting but instead examines symbolic meanings as they constitute themselves in... B. Interactionism in personality from a historical perspective. Psychological Bulletin, 1974, 81, 1026-1048. Elashoff, J. D., & Thoresen, C. E.
ERIC Educational Resources Information Center
Jang, Hyesuk
2014-01-01
This study aims to evaluate a multidimensional latent trait model to determine how well the model works in various empirical contexts. Contrary to the assumption of these latent trait models that the traits are normally distributed, situations may occur in which the latent trait does not follow a normal distribution (Sass et al., 2008; Woods…
ERIC Educational Resources Information Center
Lix, Lisa M.; And Others
1996-01-01
Meta-analytic techniques were used to summarize the statistical robustness literature on Type I error properties of alternatives to the one-way analysis of variance "F" test. The James (1951) and Welch (1951) tests performed best under violations of the variance homogeneity assumption, although their use is not always appropriate. (SLD)
A general method for handling missing binary outcome data in randomized controlled trials.
Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen
2014-12-01
The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. We propose a sensitivity analysis where standard analyses, which could include 'missing = smoking' and 'last observation carried forward', are embedded in a wider class of models. We apply our general method to data from two smoking cessation trials, with 489 and 1758 participants respectively; the abstinence outcomes were obtained using telephone interviews. The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions.
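The general idea of embedding 'missing = smoking' within a wider class of assumptions can be illustrated with a single sensitivity parameter that scales the assumed abstinence probability among non-responders. The sketch below uses my own simplified parameterization, not the authors' model, and the trial-arm counts are invented.

```python
import numpy as np

def arm_abstinence(n, observed_abstinent, missing, delta):
    """Estimated abstinence proportion in one trial arm under a sensitivity parameter.

    delta scales the assumed abstinence probability among participants with missing
    outcomes relative to observed participants:
      delta = 0 -> 'missing = smoking'
      delta = 1 -> missing participants behave like observed ones
    Illustrative parameterization only, not the authors' exact model.
    """
    observed = n - missing
    p_obs = observed_abstinent / observed
    return (observed_abstinent + missing * delta * p_obs) / n

# Invented trial-arm counts, roughly on the scale of a small cessation trial.
for delta in (0.0, 0.25, 0.5, 1.0):
    p_int = arm_abstinence(n=250, observed_abstinent=60, missing=50, delta=delta)
    p_ctl = arm_abstinence(n=239, observed_abstinent=40, missing=60, delta=delta)
    print(f"delta={delta:.2f}  intervention-control difference: {p_int - p_ctl:+.3f}")
```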
Order information and free recall: evaluating the item-order hypothesis.
Mulligan, Neil W; Lozito, Jeffrey P
2007-05-01
The item-order hypothesis proposes that order information plays an important role in recall from long-term memory, and it is commonly used to account for the moderating effects of experimental design in memory research. Recent research (Engelkamp, Jahn, & Seiler, 2003; McDaniel, DeLosh, & Merritt, 2000) raises questions about the assumptions underlying the item-order hypothesis. Four experiments tested these assumptions by examining the relationship between free recall and order memory for lists of varying length (8, 16, or 24 unrelated words or pictures). Some groups were given standard free-recall instructions, other groups were explicitly instructed to use order information in free recall, and other groups were given free-recall tests intermixed with tests of order memory (order reconstruction). The results for short lists were consistent with the assumptions of the item-order account. For intermediate-length lists, explicit order instructions and intermixed order tests made recall more reliant on order information, but under standard conditions, order information played little role in recall. For long lists, there was little evidence that order information contributed to recall. In sum, the assumptions of the item-order account held for short lists, received mixed support with intermediate lists, and received no support for longer lists.
A guide to Social Security money's worth issues.
Leimer, D R
1995-01-01
This article discusses some of the major issues associated with the question of whether workers receive their money's worth from the Social Security program. An effort is made to keep the discussion as nontechnical as possible, with explanations provided for many of the technical terms and concepts found in the money's worth literature. Major assumptions, key analytical methods, and money's worth measures used in the literature are also discussed. Finally, the key findings of money's worth studies are summarized, with some cautions concerning the limitations and appropriate usage of money's worth analyses.
Twenty-five years of HIV: lessons for low prevalence scenarios.
Sawires, Sharif; Birnbaum, Nina; Abu-Raddad, Laith; Szekeres, Greg; Gayle, Jacob
2009-07-01
During the initial quarter century since the discovery of HIV, the international response has focused on high prevalence scenarios and concentrated epidemics. Until recently, the theoretical underpinnings of HIV prevention were largely based on these responses: the assumption that inadequate responses to concentrated epidemics within low prevalence populations could rapidly lead to generalized epidemics. The limits of these assumptions for HIV prevention in low prevalence scenarios have become evident. While examples of rapid HIV diffusion in once low prevalence scenarios exist, the emergence of generalized epidemics is less likely for much of the world. This paper reviews several key issues and advances in biomedical and behavioural HIV prevention to date and highlights relevance to low prevalence scenarios.
Data Transmission Signal Design and Analysis
NASA Technical Reports Server (NTRS)
Moore, J. D.
1972-01-01
The error performances of several digital signaling methods are determined as a function of a specified signal-to-noise ratio. Results are obtained for Gaussian noise and impulse noise. Performance of a receiver for differentially encoded biphase signaling is obtained by extending the results for differential phase shift keying. The analysis presented obtains a closed-form answer through the use of some simplifying assumptions. The results give insight into the analysis problem; however, the actual error performance may be degraded because of the assumptions made in the analysis. Bipolar signaling decision-threshold selection is investigated. The optimum threshold depends on the signal-to-noise ratio and requires the use of an adaptive receiver.
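For orientation, the standard additive-white-Gaussian-noise reference results for coherent binary PSK and for DPSK are the textbook expressions below; the report's own curves may differ because of its simplifying assumptions and its treatment of impulse noise.

```latex
\[
  P_{b,\,\mathrm{coherent\ BPSK}} \;=\; Q\!\left(\sqrt{2E_b/N_0}\right),
  \qquad
  P_{b,\,\mathrm{DPSK}} \;=\; \tfrac{1}{2}\,e^{-E_b/N_0},
\]
% where E_b/N_0 is the per-bit signal-to-noise ratio and
% Q(x) = (1/\sqrt{2\pi}) \int_x^{\infty} e^{-t^2/2}\, dt .
```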
NASA Astrophysics Data System (ADS)
Cosgrove, R. B.; Schultz, A.; Imamura, N.
2016-12-01
Although electrostatic equilibrium is always assumed in the ionosphere, there is no good theoretical or experimental justification for the assumption. In fact, recent theoretical investigations suggest that the electrostatic assumption may be grossly in error. If true, many commonly used modeling methods are placed in doubt. For example, the accepted method for calculating ionospheric conductance, field-line integration, may be invalid. In this talk we briefly outline the theoretical research that places the electrostatic assumption in doubt, and then describe how comparison of ground magnetic field data with incoherent scatter radar (ISR) data can be used to test the electrostatic assumption in the ionosphere. We describe a recent experiment conducted for this purpose, in which an array of magnetometers was temporarily installed under the Poker Flat AMISR.
Cracking the Code: Synchronizing Policy and Practice for Performance-Based Learning
ERIC Educational Resources Information Center
Patrick, Susan; Sturgis, Chris
2011-01-01
Performance-based learning is one of the keys to cracking open the assumptions that undergird the current educational codes, structures, and practices. By finally moving beyond the traditions of a time-based system, greater customized educational services can flourish, preparing more and more students for college and careers. This proposed policy…
The Curriculum Debate: Why It Is Important Today
ERIC Educational Resources Information Center
Tedesco, Juan Carlos; Opertti, Renato; Amadio, Massimo
2014-01-01
This article highlights some of the key issues in current discussions around curriculum, such as values education, inclusive education, competency-based approaches, soft and hard skills, and scientific and digital culture. It starts with the assumption that quality education for all is necessary to achieve social justice, and it looks at curricula…
ERIC Educational Resources Information Center
Fretwell, David H.; Lewis, Morgan V.; Deij, Arjen
The key issues, alternatives, and implications for developing countries to consider when designing systems to define occupational standards, related training standards, and assessments were analyzed. The analysis focused on the following issues: the rationale for development of standards; clarification of definitions, terminology, and assumptions;…
In an adverse outcome pathway (AOP), the target site dose participates in a molecular initiating event (MIE), which in turn triggers a sequence of key events leading to an adverse outcome (AO). Quantitative AOPs (QAOP) are needed if AOP characterization is to address risk as well...
The "Uncanny" Character of Race: An Exploration of UK Preparedness through Youth Performance
ERIC Educational Resources Information Center
Chakrabarty, Namita
2011-01-01
Performance is a key tool in emergency preparedness and the rehearsal of professional response, simultaneously raising questions about the practice of cultural assumptions in this context. Usually the actors in preparedness exercises are civil servants who perform the work of the nihilistic imagination in often-apocalyptic fictional scenarios,…
Revisiting Key Assumptions of the Reading Level Framework
ERIC Educational Resources Information Center
Halladay, Juliet L.
2012-01-01
Since Emmett Betts first devised a framework of independent, instructional, and frustration reading levels in the 1940s, these levels have played a large role in classroom assessment and instruction. It is important for teachers to have a deep understanding of the research that supports the reading level framework. This article identifies four key…
State Budgetary Assumptions. State Fiscal Brief No. 36.
ERIC Educational Resources Information Center
Boyd, Donald J.; Davis, Elizabeth I.
When states prepare their budgets, they usually base revenue and expenditure projections upon forecasts of national and state economic and demographic trends. This brief presents findings of a Center for the Study of the States survey that asked state budget offices what they were assuming for many key variables. The survey obtained 41 state…
The Links between Handwriting and Composing for Y6 Children
ERIC Educational Resources Information Center
Medwell, Jane; Strand, Steve; Wray, David
2009-01-01
Although handwriting is often considered a matter of presentation, a substantial body of international research suggests that the role of handwriting in children's composing has been neglected. Automaticity in handwriting is now seen as of key importance in composing but this proposition is relatively untested in the UK and the assumption has been…
ERIC Educational Resources Information Center
Bullough, Robert V., Jr.
2014-01-01
Drawing on insights from literary critic and theorist Kenneth Burke, this rhetorical analysis of "Preparing Teachers" (2010), a publication of the National Research Council, reveals and then critiques key assumptions that are shaping policies and current reform efforts in teacher education, including changes in U.S. teacher…
Including Overweight or Obese Students in Physical Education: A Social Ecological Constraint Model
ERIC Educational Resources Information Center
Li, Weidong; Rukavina, Paul
2012-01-01
In this review, we propose a social ecological constraint model to study inclusion of overweight or obese students in physical education by integrating key concepts and assumptions from ecological constraint theory in motor development and social ecological models in health promotion and behavior. The social ecological constraint model proposes…
Children and ICT European Initiatives and Policies on Protecting Children Online
ERIC Educational Resources Information Center
Wojniak, Justyna; Majorek, Marta
2016-01-01
The paper concerns opportunities to use information and communication technologies for educational purposes. It presents key assumptions of European Union policy concerning innovative methods of training and the prospects for their further development. As nowadays one can observe increasing activity of children and young people in…
78 FR 26269 - Connect America Fund; High-Cost Universal Service Support
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-06
... the model platform, which is the basic framework for the model consisting of key assumptions about the... combination of competitive bidding and a new forward-looking model of the cost of constructing modern multi-purpose networks.'' Using the cost model to ``estimate the support necessary to serve areas where costs...
The Nature of Discourse in Transformative Learning: The Experience of Coming Out
ERIC Educational Resources Information Center
Kincaid, Timothy S.
2010-01-01
Mezirow's theory of transformative learning (1978, 1991, 1995, 1996, 1998, 2000, 2009) posits "perspective transformation" as a central learning process. Key to this transformation is the critical examination of the individual's deeply held assumptions and beliefs through discourse to examine new perspectives and test new ideas. The availability…
Enterprise Education Needs Enterprising Educators: A Case Study on Teacher Training Provision
ERIC Educational Resources Information Center
Penaluna, Kathryn; Penaluna, Andy; Usei, Caroline; Griffiths, Dinah
2015-01-01
Purpose: The purpose of this paper is to reflect upon the process that underpinned and informed the development and delivery of a "creativity-led" credit-bearing teacher training provision and to illuminate key factors of influence for the approaches to teaching and learning. Design/methodology/approach: Based on the assumption that…
ERIC Educational Resources Information Center
Burnett, I. Emett, Jr.; Pankake, Anita M.
Although much of the current school reform movement relies on the basic assumption of effective elementary school administration, insufficient effort has been made to synthesize key concepts found in organizational theory and management studies with relevant effective schools research findings. This paper attempts such a synthesis to help develop…
"Hard to Place": Multilingual Immigrant-Origin Students in Community Colleges
ERIC Educational Resources Information Center
Darbes, Tasha
2014-01-01
Assessment and placement practices at community colleges that are used to divide students into college ready, ESL and English remedial tracks play a key role in shaping the academic pathways of students (Hughes & Scott-Clayton, 2011). These assessments are based on assumptions of the nature of bilingualism and student needs and thus have…
ERIC Educational Resources Information Center
Coombs, W. Timothy; Holladay, Sherry J.
2002-01-01
Explains a comprehensive, prescriptive, situational approach for responding to crises and protecting organizational reputation: the situational crisis communication theory (SCCT). Notes undergraduate students read two crisis case studies from a set of 13 cases and responded to questions following the case. Validates a key assumption in SCCT and…
Multi-Object Filtering for Space Situational Awareness
2014-06-01
labelling such as the labelled multi-Bernoulli filter [27]. 3.2 Filter derivation: key modelling assumptions. Out of the general filtering framework [14... radiation pressure in the cannonball model has been taken into account, leading to the following acceleration: a_rad = −F_p C (A/m) (1/c) (A_Earth/|r − r_Sun|)^2 e_sat→Sun
Aggregation Bias and the Analysis of Necessary and Sufficient Conditions in fsQCA
ERIC Educational Resources Information Center
Braumoeller, Bear F.
2017-01-01
Fuzzy-set qualitative comparative analysis (fsQCA) has become one of the most prominent methods in the social sciences for capturing causal complexity, especially for scholars with small- and medium-"N" data sets. This research note explores two key assumptions in fsQCA's methodology for testing for necessary and sufficient…
Mental Retardation: Definition, Classification, and Systems of Supports. 10th Edition.
ERIC Educational Resources Information Center
Luckasson, Ruth; Borthwick-Duffy, Sharon; Buntinx, Wil H. E.; Coulter, David L.; Craig, Ellis M.; Reeve, Alya; Schalock, Robert L.; Snell, Martha E.; Spitalnik, Deborah M.; Spreat, Scott; Tasse, Marc J.
This manual, the 10th edition of a regularly published definition and classification work on mental retardation, presents five key assumptions upon which the definition of mental retardation is based and a theoretical model of five essential dimensions that explain mental retardation and how to use the companion system. These dimensions include…
Minimizing bias in biomass allometry: Model selection and log transformation of data
Joseph Mascaro; Flint Hughes; Amanda Uowolo; Stefan A. Schnitzer
2011-01-01
Nonlinear regression is increasingly used to develop allometric equations for forest biomass estimation (i.e., as opposed to the traditional approach of log-transformation followed by linear regression). Most statistical software packages, however, assume additive errors by default, violating a key assumption of allometric theory and possibly producing spurious models....
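The contrast between the two fitting approaches can be made concrete with a short sketch: fit the same power-law allometry B = aD^b once by log-transformation plus ordinary least squares (multiplicative error) and once by nonlinear least squares on the original scale (additive error). The data and parameter values below are synthetic and purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)
diameter = rng.uniform(5, 60, size=80)                                  # stem diameters (cm)
true_a, true_b = 0.1, 2.4
biomass = true_a * diameter**true_b * np.exp(rng.normal(0, 0.3, 80))    # multiplicative error

# (1) Traditional approach: log-transform, then ordinary least squares.
slope, intercept = np.polyfit(np.log(diameter), np.log(biomass), 1)
a_log, b_log = np.exp(intercept), slope

# (2) Nonlinear regression on the original scale: assumes *additive* errors,
#     which allometric theory (multiplicative error) generally does not support.
power = lambda d, a, b: a * d**b
(a_nls, b_nls), _ = curve_fit(power, diameter, biomass, p0=(1.0, 2.0))

print(f"log-OLS: a={a_log:.3f}, b={b_log:.3f}")
print(f"NLS:     a={a_nls:.3f}, b={b_nls:.3f}")
```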
Tending to Change: Toward a Situated Model of Affinity Spaces
ERIC Educational Resources Information Center
Bommarito, Dan
2014-01-01
The concept of affinity spaces, a theoretical construct used to analyze literate activity from a spatial perspective, has gained popularity among scholars of literacy studies and, particularly, video-game studies. This article seeks to expand current notions of affinity spaces by identifying key assumptions that have limited researchers'…
Deconstructing Technological Literacy: Opening a Window to Problem Solving
ERIC Educational Resources Information Center
Ward, Brandt
2015-01-01
In an environment of rapid and unpredictable change determined and directed by technologies that are constantly changing, the assumption that being technologically literate is the key to being a sustained, contributing life-long learner is well founded. However, technological literacy is seldom referred to or considered in academic arguments as a…
Performance of Healthy Participants on the Iowa Gambling Task
ERIC Educational Resources Information Center
Steingroever, Helen; Wetzels, Ruud; Horstmann, Annette; Neumann, Jane; Wagenmakers, Eric-Jan
2013-01-01
The Iowa Gambling Task (IGT; Bechara, Damasio, Damasio, & Anderson, 1994) is often used to assess decision-making deficits in clinical populations. The interpretation of the results hinges on 3 key assumptions: (a) healthy participants learn to prefer the good options over the bad options; (b) healthy participants show homogeneous choice behavior;…
Logic Brightens My Day: Evidence for Implicit Sensitivity to Logical Validity
ERIC Educational Resources Information Center
Trippas, Dries; Handley, Simon J.; Verde, Michael F.; Morsanyi, Kinga
2016-01-01
A key assumption of dual process theory is that reasoning is an explicit, effortful, deliberative process. The present study offers evidence for an implicit, possibly intuitive component of reasoning. Participants were shown sentences embedded in logically valid or invalid arguments. Participants were not asked to reason but instead rated the…
ERIC Educational Resources Information Center
Nager, Nancy, Ed.; Shapiro, Edna K., Ed.
This book reviews the history of the developmental-interactive approach, a formulation rooted in developmental psychology and educational practice, progressively informing educational thinking since the early 20th century. The book describes and analyzes key assumptions and assesses the compatibility of new theoretical approaches, focuses on…
A Formal Model of Capacity Limits in Working Memory
ERIC Educational Resources Information Center
Oberauer, Klaus; Kliegl, Reinhold
2006-01-01
A mathematical model of working-memory capacity limits is proposed on the key assumption of mutual interference between items in working memory. Interference is assumed to arise from overwriting of features shared by these items. The model was fit to time-accuracy data of memory-updating tasks from four experiments using nonlinear mixed effect…
1993-06-01
(eligibles plus non-eligibles equals total active employees) Several key concepts provide the foundation for SFAS 106 PRB accrual costs: Expected... will lump together dependents and survivors along with current retirees in a category called simply "retirees". EXHIBIT 8: PRB Actuarial Assumptions
76 FR 10810 - Public Workshop to Discuss Low-Level Radioactive Waste Management
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-28
... the environment. Development of the part 61 regulation in the early 1980s was based on several... there have been a number of developments that have called into question some of the key assumptions...-level radioactive wastes that did not exist at the time part 61 was promulgated. The developments...
77 FR 10401 - Low-Level Radioactive Waste Management Issues
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-22
... rather than active systems to limit and retard releases to the environment. Development of the 10 CFR... have been a number of developments that have called into question some of the key assumptions made in... radioactive wastes that did not exist at the time 10 CFR Part 61 was promulgated. The developments previously...
A Descriptive Analysis of Instructional Coaches' Data Use in Science
ERIC Educational Resources Information Center
Snodgrass Rangel, Virginia; Bell, Elizabeth R.; Monroy, Carlos
2017-01-01
A key assumption of accountability policies is that educators will use data to improve their instruction. In practice, however, data use is quite hard, and more districts are looking to instructional coaches to support their teachers. The purpose of this descriptive analysis is to examine how instructional coaches in elementary and middle school…
ERIC Educational Resources Information Center
Engstrom, Cathy McHugh
2008-01-01
The pedagogical assumptions and teaching practices of learning community models reflect exemplary conditions for learning, so using these models with unprepared students seems desirable and worthy of investigation. This chapter describes the key role of faculty in creating active, integrative learning experiences for students in basic skills…
Making the Most of Presidential Transitions
ERIC Educational Resources Information Center
Marchese, Theodore J.
2012-01-01
The time between a president's resignation and the next president's assumption of office--often a 12-to-18 month period--can be crucial for an institution. Between the winding down of an existing presidency and the successful launch of the next, there are all too many opportunities for lost momentum, frayed relationships, key departures, and…
ERIC Educational Resources Information Center
Yoshizawa, Go; Iwase, Mineyo; Okumoto, Motoko; Tahara, Keiichiro; Takahashi, Shingo
2016-01-01
A value-centered approach to science, technology and society (STS) education illuminates the need of reflexive and relational learning through communication and public engagement. Visualization is a key to represent and compare mental models such as assumptions, background theories and value systems that tacitly shape our own understanding,…
A Model for Effective Implementation of Flexible Programme Delivery
ERIC Educational Resources Information Center
Normand, Carey; Littlejohn, Allison; Falconer, Isobel
2008-01-01
The model developed here is the outcome of a project funded by the Quality Assurance Agency Scotland to support implementation of flexible programme delivery (FPD) in post-compulsory education. We highlight key features of FPD, including explicit and implicit assumptions about why flexibility is needed and the perceived barriers and solutions to…
Direct observation limits on antimatter gravitation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischler, Mark; Lykken, Joe; Roberts, Tom
2008-06-01
The proposed Antihydrogen Gravity experiment at Fermilab (P981) will directly measure the gravitational attraction g between antihydrogen and the Earth, with an accuracy of 1% or better. The following key question has been asked by the PAC: Is a possible 1% difference between the gravitational acceleration of antihydrogen and that of ordinary matter already ruled out by other evidence? This memo presents the key points of existing evidence, to answer whether such a difference is ruled out (a) on the basis of direct observational evidence; and/or (b) on the basis of indirect evidence, combined with reasoning based on strongly held theoretical assumptions. The bottom line is that there are no direct observations or measurements of gravitational asymmetry which address the antimatter sector. There is evidence which by indirect reasoning can be taken to rule out such a difference, but the analysis needed to draw that conclusion rests on models and assumptions which are in question for other reasons and are thus worth testing. There is no compelling evidence or theoretical reason to rule out such a difference at the 1% level.
Maclaren, Oliver J; Parker, Aimée; Pin, Carmen; Carding, Simon R; Watson, Alastair J M; Fletcher, Alexander G; Byrne, Helen M; Maini, Philip K
2017-07-01
Our work addresses two key challenges, one biological and one methodological. First, we aim to understand how proliferation and cell migration rates in the intestinal epithelium are related under healthy, damaged (Ara-C treated) and recovering conditions, and how these relations can be used to identify mechanisms of repair and regeneration. We analyse new data, presented in more detail in a companion paper, in which BrdU/IdU cell-labelling experiments were performed under these respective conditions. Second, in considering how to more rigorously process these data and interpret them using mathematical models, we use a probabilistic, hierarchical approach. This provides a best-practice approach for systematically modelling and understanding the uncertainties that can otherwise undermine the generation of reliable conclusions-uncertainties in experimental measurement and treatment, difficult-to-compare mathematical models of underlying mechanisms, and unknown or unobserved parameters. Both spatially discrete and continuous mechanistic models are considered and related via hierarchical conditional probability assumptions. We perform model checks on both in-sample and out-of-sample datasets and use them to show how to test possible model improvements and assess the robustness of our conclusions. We conclude, for the present set of experiments, that a primarily proliferation-driven model suffices to predict labelled cell dynamics over most time-scales.
Alternatives for discounting in the analysis of noninferiority trials.
Snapinn, Steven M
2004-05-01
Determining the efficacy of an experimental therapy relative to placebo on the basis of an active-control noninferiority trial requires reference to historical placebo-controlled trials. The validity of the resulting comparison depends on two key assumptions: assay sensitivity and constancy. Since the truth of these assumptions cannot be verified, it seems logical to raise the standard of evidence required to declare efficacy; this concept is referred to as discounting. It is not often recognized that two common design and analysis approaches, setting a noninferiority margin and requiring preservation of a fraction of the standard therapy's effect, are forms of discounting. The noninferiority margin is a particularly poor approach, since its degree of discounting depends on an irrelevant factor. Preservation of effect is more reasonable, but it addresses only the constancy assumption, not the issue of assay sensitivity. Gaining consensus on the most appropriate approach to the design and analysis of noninferiority trials will require a common understanding of the concept of discounting.
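Schematically, and in my own notation (not the article's), the two design approaches mentioned can be contrasted as follows, with T the test treatment, C the active control, P placebo, and effects on the log hazard-ratio scale.

```latex
% Schematic only; actual procedures also account for the variance of the historical
% estimate. UCL = upper confidence limit, M = prespecified margin,
% f = fraction of the control effect required to be preserved.
\[
  \text{fixed margin:}\qquad
  \mathrm{UCL}\{\log \mathrm{HR}_{T/C}\} \;<\; \log M ,
\]
\[
  \text{retention of effect:}\qquad
  \mathrm{UCL}\{\log \mathrm{HR}_{T/C}\} \;<\; (1-f)\,\widehat{\log \mathrm{HR}}_{P/C} ,
\]
% where the historical estimate of the placebo-vs-control effect carries both the
% assay-sensitivity and constancy assumptions discussed in the article.
```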
Spatial filtering, color constancy, and the color-changing dress.
Dixon, Erica L; Shapiro, Arthur G
2017-03-01
The color-changing dress is a 2015 Internet phenomenon in which the colors in a picture of a dress are reported as blue-black by some observers and white-gold by others. The standard explanation is that observers make different inferences about the lighting (is the dress in shadow or bright yellow light?); based on these inferences, observers make a best guess about the reflectance of the dress. The assumption underlying this explanation is that reflectance is the key to color constancy because reflectance alone remains invariant under changes in lighting conditions. Here, we demonstrate an alternative type of invariance across illumination conditions: An object that appears to vary in color under blue, white, or yellow illumination does not change color in the high spatial frequency region. A first approximation to color constancy can therefore be accomplished by a high-pass filter that retains enough low spatial frequency content so as to not to completely desaturate the object. We demonstrate the implications of this idea on the Rubik's cube illusion; on a shirt placed under white, yellow, and blue illuminants; and on spatially filtered images of the dress. We hypothesize that observer perceptions of the dress's color vary because of individual differences in how the visual system extracts high and low spatial frequency color content from the environment, and we demonstrate cross-group differences in average sensitivity to low spatial frequency patterns.
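A minimal sketch of the high-pass-filtering idea described above is given below; the filter scale, the residual low-frequency weight, and the synthetic image are arbitrary choices of mine, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_color(image, sigma=15.0, low_freq_weight=0.2):
    """Crude sketch of the idea described above: keep the high spatial frequency
    color content, plus a small amount of low-frequency content so objects are not
    completely desaturated. 'image' is an H x W x 3 float array; sigma and
    low_freq_weight are arbitrary illustration values."""
    low = np.stack([gaussian_filter(image[..., c], sigma) for c in range(3)], axis=-1)
    high = image - low
    return high + low_freq_weight * low

# Usage sketch: apply the same filter to renderings of a scene under bluish,
# neutral, and yellowish illuminants and compare the resulting object colors.
rng = np.random.default_rng(0)
scene = rng.random((64, 64, 3))
filtered = highpass_color(scene)
```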
A more powerful test based on ratio distribution for retention noninferiority hypothesis.
Deng, Ling; Chen, Gang
2013-03-11
Rothmann et al. (2003) proposed a method for the statistical inference of the fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined as a ratio of the new treatment effect to the control effect in the context of a time-to-event endpoint. One of the major concerns in using this method in the design of an NI trial is that with a limited sample size, the power of the study is usually very low. This makes an NI trial inapplicable, particularly when using a time-to-event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under the null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption used in Rothmann's test, that the observed control effect is always positive, that is, that the observed hazard ratio for placebo over the control is greater than 1, is no longer necessary. Without assuming equal variance under the null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced by using the proposed ratio test for a fraction retention NI hypothesis.
Avoided economic impacts of energy demand changes by 1.5 and 2 °C climate stabilization
NASA Astrophysics Data System (ADS)
Park, Chan; Fujimori, Shinichiro; Hasegawa, Tomoko; Takakura, Jun’ya; Takahashi, Kiyoshi; Hijioka, Yasuaki
2018-04-01
Energy demand associated with space heating and cooling is expected to be affected by climate change. There are several global projections of space heating and cooling use that take climate change into consideration, but a comprehensive uncertainty assessment across socioeconomic and climate conditions, including a 1.5 °C global mean temperature change, has never been carried out. This paper shows the economic impact of changes in energy demand for space heating and cooling under multiple socioeconomic and climatic conditions. We use three shared socioeconomic pathways as socioeconomic conditions. For climate conditions, we use two representative concentration pathways that correspond to 4.0 °C and 2.0 °C scenarios, and a 1.5 °C scenario derived from the 2.0 °C scenario under additional assumptions, in conjunction with five general circulation models. We find that the economic impacts of climate change are largely affected by socioeconomic assumptions, and global GDP change rates range from +0.21% to ‑2.01% in 2100 under the 4.0 °C scenario, depending on the socioeconomic condition. Sensitivity analysis that differentiates the thresholds of heating and cooling degree days clarifies that the threshold is a strong factor behind these differences. Meanwhile, the impact of the 1.5 °C scenario is small regardless of socioeconomic assumptions (‑0.02% to ‑0.06%). The economic loss caused by differences in socioeconomic assumptions under the 1.5 °C scenario is much smaller than that under the 2 °C scenario, which implies that stringent climate mitigation can work as a risk hedge against the diversity of socioeconomic development.
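The heating and cooling degree-day thresholds that drive the sensitivity analysis are simple to compute; the sketch below uses a conventional 18 °C base and synthetic temperatures purely for illustration.

```python
import numpy as np

def degree_days(daily_mean_temp, base=18.0):
    """Heating and cooling degree days from daily mean temperatures (deg C).
    The 18 C base is a common convention; the study's sensitivity analysis
    varies exactly this kind of threshold."""
    t = np.asarray(daily_mean_temp, float)
    hdd = np.sum(np.maximum(base - t, 0.0))
    cdd = np.sum(np.maximum(t - base, 0.0))
    return hdd, cdd

# Toy example: one year of synthetic daily mean temperatures.
days = np.arange(365)
temps = 12.0 + 10.0 * np.sin(2 * np.pi * (days - 80) / 365)
print(degree_days(temps, base=18.0))
print(degree_days(temps, base=20.0))   # shifting the threshold changes both totals
```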
Macquarrie, K T B; Mayer, K U; Jin, B; Spiessl, S M
2010-03-01
Redox evolution in sparsely fractured crystalline rocks is a key, and largely unresolved, issue when assessing the geochemical suitability of deep geological repositories for nuclear waste. Redox zonation created by the influx of oxygenated waters has previously been simulated using reactive transport models that have incorporated a variety of processes, resulting in predictions for the depth of oxygen penetration that may vary greatly. An assessment and direct comparison of the various underlying conceptual models are therefore needed. In this work a reactive transport model that considers multiple processes in an integrated manner is used to investigate the ingress of oxygen for both single fracture and fracture zone scenarios. It is shown that the depth of dissolved oxygen migration is greatly influenced by the a priori assumptions that are made in the conceptual models. For example, the ability of oxygen to access and react with minerals in the rock matrix may be of paramount importance for single fracture conceptual models. For fracture zone systems, the abundance and reactivity of minerals within the fractures and thin matrix slabs between the fractures appear to provide key controls on O2 attenuation. The findings point to the need for improved understanding of the coupling between the key transport-reaction feedbacks to determine which conceptual models are most suitable and to provide guidance for which parameters should be targeted in field and laboratory investigations.
NASA Astrophysics Data System (ADS)
Perkis, David F.
Three published articles are presented which focus on enhancing various aspects of the energy supply chain. While each paper adopts a different methodology, all three combine engineering data and/or techniques with economic analysis to improve efficiency or policy design within energy markets. The first paper combines a chemical engineering plant design model with an economic assessment of product enhancements within an ethanol production facility. While a new chemical process is shown to achieve greater ethanol yields, the animal feed by-products are denatured and decrease in value due to the degradation of a key nutritional amino acid. Overall, yield increases outweigh any costs, providing additional value to firms adopting this process. The second paper uses a mixed integer linear model to assess the optimal location of cellulosic ethanol production facilities within the state of Indiana. Desired locations with low costs are linked to regions with high yield corn growth, as these areas provide an abundance of corn stover, a by-product of corn and a cellulosic source of ethanol. The third paper implements experimental economic methods to assess the effectiveness of policies intended to control prices in emissions permit markets. When utilizing reserve permit auctions as an alternative to setting explicit maximum prices, prices are elevated beyond the theoretical predictions of the model within the conditions of the experiment. The most likely cause of higher prices is the negotiating power provided to sellers by grandfathering permits as evidenced by higher than expected welfare gains to sellers. Before presenting the articles, a discussion is introduced regarding the role of assumptions used by economists. For each article, a key assumption is highlighted and the consequences of making a different assumption are provided. Whether the consequences are large or small, the benefits of elucidating our models with assumptions based on real world behaviors are clearly demonstrated.
Moseson, Heidi; Gerdts, Caitlin; Dehlendorf, Christine; Hiatt, Robert A; Vittinghoff, Eric
2017-12-21
The list experiment is a promising measurement tool for eliciting truthful responses to stigmatized or sensitive health behaviors. However, investigators may be hesitant to adopt the method due to previously untestable assumptions and the perceived inability to conduct multivariable analysis. With a recently developed statistical test that can detect the presence of a design effect - the absence of which is a central assumption of the list experiment method - we sought to test the validity of a list experiment conducted on self-reported abortion in Liberia. We also aim to introduce recently developed multivariable regression estimators for the analysis of list experiment data, to explore relationships between respondent characteristics and having had an abortion - an important component of understanding the experiences of women who have abortions. To test the null hypothesis of no design effect in the Liberian list experiment data, we calculated the percentage of each respondent "type," characterized by response to the control items, and compared these percentages across treatment and control groups with a Bonferroni-adjusted alpha criterion. We then implemented two least squares and two maximum likelihood models (four total), each representing different bias-variance trade-offs, to estimate the association between respondent characteristics and abortion. We find no clear evidence of a design effect in list experiment data from Liberia (p = 0.18), affirming the first key assumption of the method. Multivariable analyses suggest a negative association between education and history of abortion. The retrospective nature of measuring lifetime experience of abortion, however, complicates interpretation of results, as the timing and safety of a respondent's abortion may have influenced her ability to pursue an education. Our work demonstrates that multivariable analyses, as well as statistical testing of a key design assumption, are possible with list experiment data, although with important limitations when considering lifetime measures. We outline how to implement this methodology with list experiment data in future research.
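The basic prevalence estimator that the design-effect assumption protects is the treatment-control difference in mean item counts. The sketch below shows that estimator on invented data; it does not implement the authors' design-effect test or their multivariable estimators.

```python
import numpy as np

def list_experiment_prevalence(y_treat, y_control):
    """Difference-in-means estimator of the sensitive-item prevalence.

    y_treat: item counts from respondents who saw J control items plus the
    sensitive item; y_control: counts from respondents who saw the J control
    items only. Valid only under the no-design-effect and no-liars assumptions
    discussed above.
    """
    y_treat, y_control = np.asarray(y_treat, float), np.asarray(y_control, float)
    est = y_treat.mean() - y_control.mean()
    se = np.sqrt(y_treat.var(ddof=1) / len(y_treat)
                 + y_control.var(ddof=1) / len(y_control))
    return est, se

# Toy data (invented): 4 control items; the treatment group also saw the sensitive item.
rng = np.random.default_rng(3)
control = rng.binomial(4, 0.5, size=300)
treat = rng.binomial(4, 0.5, size=300) + rng.binomial(1, 0.15, size=300)
print(list_experiment_prevalence(treat, control))
```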
2012-01-01
Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examine the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in combined sample of those with and without the condition. Results Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
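One way to write the relationships summarized above, in my own notation, is the following, with X normal in both groups with means μ0, μ1 and standard deviations σ0, σ1, β the log-odds ratio per unit of X, and Φ the standard normal cumulative distribution function.

```latex
\[
  c \;=\; \Phi\!\left(\frac{\mu_1-\mu_0}{\sqrt{\sigma_0^{2}+\sigma_1^{2}}}\right)
  \qquad\text{(binormal case in general),}
\]
\[
  c \;=\; \Phi\!\left(\frac{\sigma\,|\beta|}{\sqrt{2}}\right)
  \qquad\text{(equal variances, }\sigma_0=\sigma_1=\sigma\text{),}
\]
% so the same odds ratio can correspond to very different discriminative ability
% depending on the heterogeneity (sigma) of the predictor in the population.
```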
Nevo, Daniel; Nishihara, Reiko; Ogino, Shuji; Wang, Molin
2017-08-04
In the analysis of time-to-event data with multiple causes using a competing risks Cox model, often the cause of failure is unknown for some of the cases. The probability of a missing cause is typically assumed to be independent of the cause given the time of the event and covariates measured before the event occurred. In practice, however, the underlying missing-at-random assumption does not necessarily hold. Motivated by colorectal cancer molecular pathological epidemiology analysis, we develop a method to conduct valid analysis when additional auxiliary variables are available for cases only. We consider a weaker missing-at-random assumption, with missing pattern depending on the observed quantities, which include the auxiliary covariates. We use an informative likelihood approach that will yield consistent estimates even when the underlying model for missing cause of failure is misspecified. The superiority of our method over naive methods in finite samples is demonstrated by simulation study results. We illustrate the use of our method in an analysis of colorectal cancer data from the Nurses' Health Study cohort, where, apparently, the traditional missing-at-random assumption fails to hold.
Quasi-experimental study designs series-paper 7: assessing the assumptions.
Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian
2017-09-01
Quasi-experimental designs are gaining popularity in epidemiology and health systems research, in particular for the evaluation of health care practice, programs, and policy, because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions.
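As a concrete example of one of the five designs, the sketch below estimates a difference-in-differences effect from simulated data; the variable names, simulated effect size, and regression specification are mine, and the estimate is valid only under the parallel-trends assumption the paper discusses.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a two-group, two-period setting with a known policy effect.
rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # exposed group indicator
    "post": rng.integers(0, 2, n),      # pre/post policy period
})
effect = 1.5
df["y"] = (2.0 + 0.8 * df.treated + 0.5 * df.post
           + effect * df.treated * df.post + rng.normal(0, 1, n))

# The coefficient on the interaction term is the difference-in-differences estimate.
fit = smf.ols("y ~ treated * post", data=df).fit()
print(fit.params["treated:post"])
```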
d’Uva, Teresa Bago; Lindeboom, Maarten; O’Donnell, Owen; van Doorslaer, Eddy
2011-01-01
We propose tests of the two assumptions under which anchoring vignettes identify heterogeneity in reporting of categorical evaluations. Systematic variation in the perceived difference between any two vignette states is sufficient to reject vignette equivalence. Response consistency - the respondent uses the same response scale to evaluate the vignette and herself – is testable given sufficiently comprehensive objective indicators that independently identify response scales. Both assumptions are rejected for reporting of cognitive and physical functioning in a sample of older English individuals, although a weaker test resting on less stringent assumptions does not reject response consistency for cognition. PMID:22184479
Differential and integral Jones matrices for a cholesteric
NASA Astrophysics Data System (ADS)
Nastyshyn, S. Yu.; Bolesta, I. M.; Tsybulia, S. A.; Lychkovskyy, E.; Yakovlev, M. Yu.; Ryzhov, Ye.; Vankevych, P. I.; Nastishin, Yu. A.
2018-05-01
Previous attempts to derive the differential Jones matrix (DJM, N ) by Jones [Jones, J. Opt. Soc. Am. 38, 671 (1948), 10.1364/JOSA.38.000671] for a twisted crystal and the integral Jones matrix (IJM, J ) by Chandrasekhar and Rao [Chandrasekhar and Rao, Acta Crystallogr. A 24, 445 (1968), 10.1107/S0567739468000902] for a cholesteric liquid crystal resulted in Jones matrices, which are valid for the spectral range except the selective light reflection band. We argue that the limitation of their validity is rooted in two key assumptions used in both approaches, namely, (1) local (nonrotated) DJM N0 and the elementary IJM J0 (to which the cholesteric is split) are those of a uniform nematic and (2) under rotation of the coordinate system, N0 and J0 obey the similarity transformation rule, namely, N =R N0R-1 and J =R J0R-1 , where R is the rotation matrix. We show that both of these assumptions are of limited applicability for a cholesteric, being justified only for weak twist. In our approach, the DJM and IJM are derived for a cholesteric without these assumptions. To derive the cholesteric DJM, we have established the relation between the diagonal form N0 d of N0 and Mauguin solutions [Mauguin, Bull. Soc. Fr. Mineral. Crystallogr. N° 3, 71 (1911)] of Maxwell equations for eigenwaves propagating in the cholesteric. Namely, the eigenvalues of N0 appear to be the wave numbers for the two eigenwaves propagating in the sample. Then the form of N0 reconstructs from its diagonal form N0 d. Our DJM and IJM, derived for a general case of any ellipticity value of the eigenwaves, correspond to an optically anisotropic plate possessing gyrotropy, linear birefringence, and Jones dichroism. In the limiting approximations of circularly polarized eigenwaves and that corresponding to the Mauguin regime, the DJM and IJM reduce to those known from the literature. We found that the form of the transformation rule for the local DJM N0 under rotation of the coordinate system depends on the regime of light propagation, being different from the similarity transformation rule alluded to above, but reduces to it at weak twist corresponding to the Mauguin regime.
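For orientation, the standard Jones-calculus relation between the differential and integral matrices (textbook background, not specific to this paper) is given below.

```latex
\[
  \frac{d\mathbf{J}(z)}{dz} \;=\; \mathbf{N}(z)\,\mathbf{J}(z),
  \qquad
  \mathbf{J}(z) \;=\; e^{\mathbf{N}z}\ \text{ when } \mathbf{N} \text{ does not depend on } z,
\]
% for a twisted (z-dependent) medium the simple exponential no longer applies, which is
% why the rotation-transformation rule for the local matrix N_0 matters.
```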
Before Inflation and after Black Holes
NASA Astrophysics Data System (ADS)
Stoltenberg, Henry
This dissertation covers work from three research projects relating to the physics before the start of inflation and information after the decay of a black hole. For the first project, we analyze the cosmological role of terminal vacua in the string theory landscape, and point out that existing work on this topic makes very strong assumptions about the properties of the terminal vacua. We explore the implications of relaxing these assumptions (by including "arrival" as well as "departure" terminals) and demonstrate that the results in earlier work are highly sensitive to their assumption of no arrival terminals. We use our discussion to make some general points about tuning and initial conditions in cosmology. The second project is a discussion of the black hole information problem. Under certain conditions the black hole information puzzle and the (related) arguments that firewalls are a typical feature of black holes can break down. We first review the arguments of Almheiri, Marolf, Polchinski and Sully (AMPS) favoring firewalls, focusing on entanglements in a simple toy model for a black hole and the Hawking radiation. By introducing a large and inaccessible system entangled with the black hole (representing perhaps a de Sitter stretched horizon or an inaccessible part of a landscape) we show complementarity can be restored and firewalls can be avoided throughout the black hole's evolution. Under these conditions black holes do not have an "information problem". We point out flaws in some of our earlier arguments that such entanglement might be generically present in some cosmological scenarios, and note ways in which our picture may still be realized. The third project also examines the firewall argument. A fundamental limitation on the behavior of quantum entanglement known as "monogamy" plays a key role in the AMPS argument. Our goal is to study and apply many-body entanglement theory to consider the entanglement among different parts of the Hawking radiation and black holes. Using the multipartite entanglement measure called negativity, we identify an example which differs from the AMPS accounting of quantum entanglement and might eliminate the need for a firewall. Specifically, we construct a toy model for black hole decay whose entanglement behavior differs from that assumed by AMPS. We discuss the additional steps that would be needed to bring lessons from our toy model to our understanding of realistic black holes.
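As a side note on the negativity measure mentioned above, the sketch below computes it for a single pair of maximally entangled qubits via the partial transpose; it is a generic illustration of the measure, not a reconstruction of the dissertation's multi-party toy model.

```python
# Entanglement negativity of a two-qubit Bell state (generic illustration only).
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    dA, dB = dims
    rho4 = rho.reshape(dA, dB, dA, dB)
    # transpose the indices belonging to the second subsystem
    return rho4.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def negativity(rho, dims=(2, 2)):
    eigs = np.linalg.eigvalsh(partial_transpose(rho, dims))
    return -np.sum(eigs[eigs < 0])

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)            # (|00> + |11>) / sqrt(2)
rho = np.outer(bell, bell)
print(negativity(rho))                        # 0.5 for a maximally entangled qubit pair
```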
Karakaya, Jale; Karabulut, Erdem; Yucel, Recai M.
2015-01-01
Modern statistical methods for incomplete data have been increasingly applied to a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used to evaluate diagnostic tests or biomarkers in medical research, has become increasingly popular in both its development and its application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and of the assumptions underlying the missing data (e.g. missing at random) has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. In particular, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms. PMID:26379316
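A minimal sketch of the kind of analysis studied here (not the authors' code) is given below: a partially missing biomarker is imputed several times under a missing-at-random mechanism and the ROC AUC is computed on each completed data set; the sample size, effect sizes, and missingness model are illustrative assumptions.

```python
# Multiple imputation followed by ROC analysis on simulated data (illustrative only).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 500
disease = rng.integers(0, 2, n)
biomarker = disease + rng.normal(0, 1, n)
covariate = 0.5 * biomarker + rng.normal(0, 1, n)

# Missingness in the biomarker depends only on the observed covariate (MAR).
missing = rng.random(n) < 1 / (1 + np.exp(-covariate))
X = np.column_stack([np.where(missing, np.nan, biomarker), covariate, disease])

aucs = []
for m in range(5):                            # 5 imputed data sets
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imp.fit_transform(X)
    aucs.append(roc_auc_score(disease, completed[:, 0]))
print(np.mean(aucs))                          # pooled point estimate of the AUC
```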
Sabbah, Shai; Hawryshyn, Craig W
2013-07-04
Two competing theories have been advanced to explain the evolution of multiple cone classes in vertebrate eyes. These two theories have important, but different, implications for our understanding of the design and tuning of vertebrate visual systems. The 'contrast theory' proposes that multiple cone classes evolved in shallow-water fish to maximize the visual contrast of objects against diverse backgrounds. The competing 'flicker theory' states that multiple cone classes evolved to eliminate the light flicker inherent in shallow-water environments through antagonistic neural interactions, thereby enhancing object detection. However, the selective pressures that have driven the evolution of multiple cone classes remain largely obscure. We show that two critical assumptions of the flicker theory are violated. We found that the amplitude and temporal frequency of flicker vary over the visible spectrum, precluding its cancellation by simple antagonistic interactions between the output signals of cones. Moreover, we found that the temporal frequency of flicker matches the frequency where sensitivity is maximal in a wide range of fish taxa, suggesting that the flicker may actually enhance the detection of objects. Finally, using modeling of the chromatic contrast between fish pattern and background under flickering illumination, we found that the spectral sensitivity of cones in a cichlid focal species is optimally tuned to maximize the visual contrast between fish pattern and background, instead of to produce a flicker-free visual signal. The violation of its two critical assumptions substantially undermines support for the flicker theory as originally formulated. While this alone does not support the contrast theory, comparison of the contrast and flicker theories revealed that the visual system of our focal species was tuned as predicted by the contrast theory rather than by the flicker theory (or by some combination of the two). Thus, these findings challenge key assumptions of the flicker theory, leaving the contrast theory as the most parsimonious and tenable account of the evolution of multiple cone classes.
Mukumbang, Ferdinand C; van Belle, Sara; Marchal, Bruno; van Wyk, Brian
2016-01-01
The antiretroviral adherence club intervention was rolled out in primary health care facilities in the Western Cape province of South Africa to relieve clinic congestion and to improve retention in care and treatment adherence in the face of growing patient loads. We adopted the realist evaluation approach to evaluate which aspects of the antiretroviral club intervention work, for which sections of the patient population, and under which community and health system contexts, in order to inform guidelines for scaling up the intervention. In this article, we report on a step towards the development of a programme theory: the assumptions of programme designers and health service managers with regard to how and why the adherence club intervention is expected to achieve its goals, and their perceptions of how it has done so (or not). We adopted an exploratory qualitative research design. We conducted a document review of 12 documents on the design and implementation of the adherence club intervention, and key informant interviews with 12 purposively selected programme designers and managers. Thematic content analysis was used to identify themes attributed to the programme actors, context, mechanisms, and outcomes. Using the context-mechanism-outcome configurational tool, we provide an explanatory account, guided by the realist perspective, of how the adherence club intervention is rolled out and how it works. We classified the assumptions of the adherence club designers and managers into the rollout, implementation, and utilisation of the adherence club programme, constructed around the providers, management/operational staff, and patients, respectively. Two rival theories were identified at the patient-perspective level. We used these perspectives to develop an initial programme theory of the adherence club intervention, which will be tested in a later phase. The perspectives of the programme designers and managers provided an important step towards developing an initial programme theory, which will guide our realist evaluation of the adherence club programme in South Africa.
Speed-of-light limitations in passive linear media
NASA Astrophysics Data System (ADS)
Welters, Aaron; Avniel, Yehuda; Johnson, Steven G.
2014-08-01
We prove that well-known speed-of-light restrictions on electromagnetic energy velocity can be extended to a new level of generality, encompassing even nonlocal chiral media in periodic geometries, while at the same time weakening the underlying assumptions to only passivity and linearity of the medium (either with a transparency window or with dissipation). As was also shown by other authors under more limiting assumptions, passivity alone is sufficient to guarantee causality and positivity of the energy density (with no thermodynamic assumptions). Our proof is general enough to include a very broad range of material properties, including anisotropy, bianisotropy (chirality), nonlocality, dispersion, periodicity, and even delta functions or similar generalized functions. We also show that the "dynamical energy density" used by some previous authors in dissipative media reduces to the standard Brillouin formula for dispersive energy density in a transparency window. The results in this paper are proved by exploiting deep results from linear-response theory, harmonic analysis, and functional analysis that had previously not been brought together in the context of electrodynamics.
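For reference, the Brillouin formula mentioned above is commonly written as follows for the time-averaged energy density of a time-harmonic field in a transparency window (SI units and the Re[E0 e^{-iωt}] convention are assumed here; the paper itself works at a more general level):

```latex
% Brillouin formula for the time-averaged dispersive energy density in a
% lossless (transparency-window) medium; conventions assumed as stated above.
\langle u \rangle
  = \frac{1}{4}\,\frac{\mathrm{d}\bigl(\omega\,\varepsilon(\omega)\bigr)}{\mathrm{d}\omega}\,
    \lvert \mathbf{E}_{0}\rvert^{2}
  + \frac{1}{4}\,\frac{\mathrm{d}\bigl(\omega\,\mu(\omega)\bigr)}{\mathrm{d}\omega}\,
    \lvert \mathbf{H}_{0}\rvert^{2}
```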
Taylor, Michelle L.; Evans, Jonathan P.; Garcia-Gonzalez, Francisco
2013-01-01
A key assumption underpinning major models of sexual selection is the expectation that male sexual attractiveness is heritable. Surprisingly, however, empirical tests of this assumption are relatively scarce. Here we use a paternal full-sib/half-sib breeding design to examine genetic and environmental variation in male mating latency (a proxy for sexual attractiveness) and copulation duration in a natural population of Drosophila melanogaster. As our experimental design also involved the manipulation of the social environment within each full-sibling family, we were able to further test for the presence of genotype-by-environment interactions (GEIs) in these traits, which have the potential to compromise mate choice for genetic benefits. Our experimental manipulation of the social environment revealed plastic expression of both traits; males exposed to a rival male during the sensitive period of adult sexual maturation exhibited shorter mating latencies and longer copulation durations than those who matured in isolation. However, we found no evidence for GEIs, and no significant additive genetic variation underlying these traits in either environment. These results undermine the notion that the evolution of female choice rests on covariance between female preference and male displays, an expectation that underpins indirect benefit models such as the good genes and sexy sons hypotheses. However, our results may also indicate depletion of genetic variance in these traits in the natural population studied, thus supporting the expectation that traits closely aligned with reproductive fitness can exhibit low levels of additive genetic variance. PMID:24155948
Political Attitudes Develop Independently of Personality Traits
Hatemi, Peter K.; Verhulst, Brad
2015-01-01
The primary assumption within the recent personality and political orientations literature is that personality traits cause people to develop political attitudes. In contrast, research relying on traditional psychological and developmental theories suggests the relationship between most personality dimensions and political orientations is either weak or not significant. Research from behavioral genetics suggests the covariance between personality and political preferences is not causal, but due to a common, latent genetic factor that mutually influences both. The contradictory assumptions and findings from these research streams have yet to be resolved. This is in part due to the reliance on cross-sectional data and the lack of longitudinal genetically informative data. Here, using two independent longitudinal genetically informative samples, we examine the joint development of personality traits and attitude dimensions to explore the underlying causal mechanisms that drive the relationship between these features and provide a first step in resolving the causal question. We find that change in personality over a ten-year period does not predict change in political attitudes, which does not support a causal relationship between personality traits and political attitudes as is frequently assumed. Rather, political attitudes are often more stable than the key personality traits assumed to be predicting them. Finally, the results from our genetic models find that no additional variance is accounted for by the causal pathway from personality traits to political attitudes. Our findings remain consistent with the original construction of the five-factor model of personality and developmental theories on attitude formation, but challenge recent work in this area. PMID:25734580
Anderson, G F; Han, K C; Miller, R H; Johns, M E
1997-01-01
OBJECTIVE: To compare three methods of computing the national requirements for otolaryngologists in 1994 and 2010. DATA SOURCES: Three large HMOs, a Delphi panel, the Bureau of Health Professions (BHPr), and published sources. STUDY DESIGN: Three established methods of computing requirements for otolaryngologists were compared: managed care, demand-utilization, and adjusted needs assessment. Under the managed care model, a published method based on reviewing staffing patterns in HMOs was modified to estimate the number of otolaryngologists. We obtained from BHPr estimates of work force projections from their demand model. To estimate the adjusted needs model, we convened a Delphi panel of otolaryngologists using the methodology developed by the Graduate Medical Education National Advisory Committee (GMENAC). DATA COLLECTION/EXTRACTION METHODS: Not applicable. PRINCIPAL FINDINGS: Wide variation in the estimated number of otolaryngologists required occurred across the three methods. Within each model it was possible to alter the requirements for otolaryngologists significantly by changing one or more of the key assumptions. The managed care model has a potential to obtain the most reliable estimates because it reflects actual staffing patterns in institutions that are attempting to use physicians efficiently. CONCLUSIONS: Estimates of work force requirements can vary considerably if one or more assumptions are changed. In order for the managed care approach to be useful for actual decision making concerning the appropriate number of otolaryngologists required, additional research on the methodology used to extrapolate the results to the general population is necessary. PMID:9180613
Jackson, Charlotte; Mangtani, Punam; Hawker, Jeremy; Olowokure, Babatunde; Vynnycky, Emilia
2014-01-01
School closure is a potential intervention during an influenza pandemic and has been investigated in many modelling studies. Here we systematically review the effects of school closure on influenza outbreaks as predicted by simulation studies. We searched Medline and Embase for relevant modelling studies published by the end of October 2012, and handsearched key journals. We summarised the predicted effects of school closure on the peak and cumulative attack rates and the duration of the epidemic, and investigated how these predictions depended on the basic reproduction number, the timing and duration of closure, and the assumed effects of school closures on contact patterns. School closures were usually predicted to be most effective if they caused large reductions in contact, if transmissibility was low (e.g. a basic reproduction number <2), and if attack rates were higher in children than in adults. The cumulative attack rate was expected to change less than the peak, but quantitative predictions varied (e.g. reductions in the peak were frequently 20-60%, but some studies predicted >90% reductions or even increases under certain assumptions). This partly reflected differences in model assumptions, such as those regarding population contact patterns. Simulation studies suggest that school closure can be a useful control measure during an influenza pandemic, particularly for reducing peak demand on health services, but it is difficult to accurately quantify the likely benefits. Further studies of the effects of reactive school closures on contact patterns are needed to improve the accuracy of model predictions.
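The sketch below shows, in the simplest possible terms, the mechanism these models share: a school closure represented as a temporary cut in the contact rate lowers the epidemic peak in an SIR model. The basic reproduction number, closure window, and contact reduction are assumed values, not estimates from the reviewed studies.

```python
# Toy SIR model with a temporary contact reduction representing school closure
# (all parameter values are illustrative assumptions).
def run_sir(r0=1.8, infectious_days=4.0, closure=(40, 80), contact_reduction=0.0,
            days=200, i0=1e-4):
    gamma = 1.0 / infectious_days
    beta0 = r0 * gamma
    s, i, r = 1.0 - i0, i0, 0.0
    peak = 0.0
    for t in range(days):
        beta = beta0 * (1 - contact_reduction) if closure[0] <= t < closure[1] else beta0
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak, r                       # peak prevalence, cumulative attack rate

print(run_sir(contact_reduction=0.0))    # no closure
print(run_sir(contact_reduction=0.5))    # closure halves contacts during the window
```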
Improving Life-Cycle Cost Management of Spacecraft Missions
NASA Technical Reports Server (NTRS)
Clardy, Dennon
2010-01-01
This presentation will explore the results of a recent NASA Life-Cycle Cost study and how project managers can use the findings and recommendations to improve planning and coordination early in the formulation cycle and avoid common pitfalls resulting in cost overruns. The typical NASA space science mission will exceed both the initial estimated and the confirmed life-cycle costs by the end of the mission. In a fixed-budget environment, these overruns translate to delays in starting or launching future missions, or in the worst case can lead to cancelled missions. Some of these overruns are due to issues outside the control of the project; others are due to the unpredictable problems (unknown unknowns) that can affect any development project. However, a recent study of life-cycle cost growth by the Discovery and New Frontiers Program Office identified a number of areas that are within the scope of project management to address. The study also found that the majority of the underlying causes for cost overruns are embedded in the project approach during the formulation and early design phases, but the actual impacts typically are not experienced until late in the project life cycle. Thus, project management focus in key areas such as integrated schedule development, management structure and contractor communications processes, heritage and technology assumptions, and operations planning, can be used to validate initial cost assumptions and set in place management processes to avoid the common pitfalls resulting in cost overruns.
Metabolic theory and elevational diversity of vertebrate ectotherms.
McCain, Christy M; Sanders, Nathan J
2010-02-01
The Metabolic Theory of Ecology (MTE) posits that the temperature-dependent kinetics of metabolism shape broad-scale patterns of biodiversity. Here we test whether the MTE accounts for patterns of diversity using 102 elevational diversity gradients of reptiles and amphibians. In particular, we examined the support for the two key predictions of the MTE: that the reciprocal of absolute temperature (1/kT) and diversity are linearly related and that the slope of that relationship is -0.65. We also tested two underlying assumptions of the MTE in cases with appropriate data, namely, that abundance is invariant among samples, and that behavioral thermoregulation influences the MTE predictions. We found that few studies supported the predictions of the MTE for the relationship between environmental temperature and elevational diversity using previous methods on individual gradients and using meta-analysis. The predominant relationship was curvilinear, and the slopes were steeper than predicted. In analyses of individual gradients, only 6% followed the MTE predictions in the strictest application, and 25% in the broadest. We found violations of the assumption of invariant abundances in all five test cases. All four herpetofaunal groups, regardless of behavioral thermoregulatory abilities, demonstrated poor fits to the MTE predictions. Even when arid gradients are removed, ameliorating the potential effects of water limitation, the MTE did not account for herpetofaunal elevational diversity. We conclude that an interplay of factors shapes elevational diversity gradients rather than the simple kinetics of biochemical reactions.
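The core MTE test described above amounts to a simple regression; the sketch below fits ln(richness) against 1/kT and compares the slope with the predicted -0.65. The data here are simulated placeholders, not the 102 elevational gradients analysed in the paper.

```python
# Fit ln(species richness) against 1/kT and compare the slope with the MTE
# prediction of -0.65 (simulated placeholder data, illustrative only).
import numpy as np

k = 8.617e-5                               # Boltzmann constant, eV per K
temps_c = np.linspace(5, 30, 20)           # assumed mean temperatures along a gradient
inv_kt = 1.0 / (k * (temps_c + 273.15))
rng = np.random.default_rng(2)
ln_richness = -0.65 * inv_kt + 30 + rng.normal(0, 0.2, temps_c.size)

slope, intercept = np.polyfit(inv_kt, ln_richness, 1)
print(round(slope, 3))                     # compare against the predicted -0.65
```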
How "Boundaryless" Are the Careers of High Potentials, Key Experts and Average Performers?
ERIC Educational Resources Information Center
Dries, Nicky; Van Acker, Frederik; Verbruggen, Marijke
2012-01-01
The talent management literature declares talent management a prime concern for HRM professionals while the careers literature calls talent management archaic. Three sets of assumptions identified through comparative review of both streams of the literature were tested in a large-scale survey (n = 941). We found more support for the assumptions…
Shifting Landscapes, Changing Assumptions Reshape Higher Ed
ERIC Educational Resources Information Center
DiSalvio, Philip
2012-01-01
In 1852, Massachusetts became the first state to provide all its citizens access to a free public education. Over the next 66 years, every other state made the same guarantee. Massachusetts may again be a geographic hotspot that signals the displacement from the old to the new. Just as key sectors of the American economy have experienced huge and…
SP-100 lithium thaw design, analysis, and testing
NASA Astrophysics Data System (ADS)
Choe, Hwang; Schrag, Michael R.; Koonce, David R.; Gamble, Robert E.; Halfen, Frank J.; Kirpich, Aaron S.
1993-01-01
The thaw design has been established for the 100 kWe SP-100 Space Reactor Power System. System thaw/startup analysis has confirmed that all system thaw requirements are met, and that rethaw and restart can be easily accomplished with this design. In addition, a series of lithium thaw characterization tests has been performed, confirming key design assumptions.
ERIC Educational Resources Information Center
Wing, Coady; Cook, Thomas D.
2013-01-01
The sharp regression discontinuity design (RDD) has three key weaknesses compared to the randomized clinical trial (RCT). It has lower statistical power, it is more dependent on statistical modeling assumptions, and its treatment effect estimates are limited to the narrow subpopulation of cases immediately around the cutoff, which is rarely of…
A Practice-Based Critique of English as a Lingua Franca
ERIC Educational Resources Information Center
Park, Joseph Sung-Yul; Wee, Lionel
2011-01-01
This paper identifies several key issues that have emerged through the debate over English as a Lingua Franca (ELF), and suggests a practice-based perspective--which treats language not as a fixed system but as an emergent product of speakers' practices--as a guide for reconsidering some fundamental assumptions of the ELF research project. In…
Renewable Energy Technical Potential | Geospatial Data Science | NREL
Renewable energy potential is assessed at successive levels (resource, technical, economic, and market), with key assumptions applied at each step, as shown in the accompanying graphic; this page describes renewable energy technical potential.
Rethinking the Preparation of HPE Teachers: Ruminations on Knowledge, Identity, and Ways of Thinking
ERIC Educational Resources Information Center
Tinning, Richard
2004-01-01
This paper explores assumptions about essential knowledge in degree programs that have traditionally prepared teachers of physical education, and discusses the question of what sort of teacher education is necessary or desirable to prepare teachers for the new Health & Physical Education (HPE) key learning area. I argue that the curriculum of the…
Is the European (Active) Citizenship Ideal Fostering Inclusion within the Union? A Critical Review
ERIC Educational Resources Information Center
Milana, Marcella
2008-01-01
This article reviews: (1) the establishment and functioning of EU citizenship: (2) the resulting perception of education for European active citizenship; and (3) the question of its adequacy for enhancing democratic values and practices within the Union. Key policy documents produced by the EU help to unfold the basic assumptions on which…
Lenses on Reading: An Introduction to Theories and Models. Second Edition
ERIC Educational Resources Information Center
Tracey, Diane H.; Morrow, Lesley Mandel
2012-01-01
This widely adopted text explores key theories and models that frame reading instruction and research. Readers learn why theory matters in designing and implementing high-quality instruction and research; how to critically evaluate the assumptions and beliefs that guide their own work; and what can be gained by looking at reading through multiple…
ERIC Educational Resources Information Center
Dame, Frederick William
This book explores Jean-Jacques Rousseau's educational philosophy, as expressed in his key works, and applies that philosophy to adult education and revolution. The titles and topics of the book's seven chapters are as follows: (1) "L'Invitation: Raison d'Etre" (prelude, statement, significance, the process, assumptions and limitations);…
Facing the Challenges of Educational Reform in the Arab World
ERIC Educational Resources Information Center
Karami Akkary, Rima
2014-01-01
This paper pinpoints and discusses key aspects of the current approaches to school reform in the Arab world against the backdrop of what is accepted as the best practice in the international literature on effective school reform and educational change. The main goal of the paper is to highlight deeply ingrained assumptions and practices that are…
Rethinking Leadership Learning in Postgraduate Public Management Programmes
ERIC Educational Resources Information Center
Briggs, Ian; Raine, John
2013-01-01
Leadership forms a key component of the curriculum of most Master of Public Administration and other public management programmes, usually doing so on the basis of assumptions that leadership is (a) both a subject and a responsibility that all such students might expect to embrace in the course of their careers; and (b) in some respects at least,…
Long term load forecasting accuracy in electric utility integrated resource planning
Carvallo, Juan Pablo; Larsen, Peter H.; Sanstad, Alan H.; ...
2018-05-23
Forecasts of electricity consumption and peak demand over time horizons of one or two decades are a key element in electric utilities’ meeting their core objective and obligation to ensure reliable and affordable electricity supplies for their customers while complying with a range of energy and environmental regulations and policies. These forecasts are an important input to integrated resource planning (IRP) processes involving utilities, regulators, and other stakeholders. Despite their importance, however, there has been little analysis of long term utility load forecasting accuracy. We conduct a retrospective analysis of long term load forecasts made by twelve Western U.S. electric utilities in the mid-2000s and find that most overestimated both energy consumption and peak demand growth. A key reason for this was the use of assumptions that led to an overestimation of economic growth. We find that the complexity of forecast methods and the accuracy of these forecasts are mildly correlated. In addition, sensitivity and risk analysis of load growth and its implications for capacity expansion were not well integrated with subsequent implementation. Finally, we review changes in the utilities’ load forecasting methods over the subsequent decade, and discuss the policy implications of long term load forecast inaccuracy and its underlying causes.
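The retrospective accuracy check described above boils down to comparing forecast and observed trajectories; the sketch below computes the mean absolute percentage error and mean bias for one hypothetical utility (all figures are assumed placeholders, not values from the study).

```python
# Forecast-versus-actual accuracy check for one hypothetical utility
# (numbers are assumed placeholders).
forecast_mw = [5200, 5350, 5510, 5680, 5850]   # assumed IRP peak-demand forecast, MW
actual_mw   = [5150, 5180, 5230, 5260, 5300]   # assumed observed peaks, MW

errors = [(f - a) / a for f, a in zip(forecast_mw, actual_mw)]
mape = sum(abs(e) for e in errors) / len(errors)
bias = sum(errors) / len(errors)               # positive bias = systematic overestimation
print(f"MAPE = {mape:.1%}, mean bias = {bias:.1%}")
```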
A keyword searchable attribute-based encryption scheme with attribute update for cloud storage.
Wang, Shangping; Ye, Jian; Zhang, Yaling
2018-01-01
Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive that is well suited to cloud data storage because of its fine-grained access control. A keyword-based searchable encryption scheme enables users to quickly find data of interest stored on the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, which combines an attribute-based encryption scheme with a keyword searchable encryption scheme. The new scheme supports user attribute updates; in particular, when a user's attribute needs to be updated, only that user's secret key related to the attribute needs to be updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven semantically secure against chosen ciphertext-policy and chosen-plaintext attack in the general bilinear group model, and it is proven semantically secure against chosen-keyword attack under the bilinear Diffie-Hellman (BDH) assumption.
Statistical Issues for Calculating Reentry Hazards
NASA Technical Reports Server (NTRS)
Matney, Mark; Bacon, John
2016-01-01
A number of statistical tools have been developed over the years for assessing the risk that reentering objects pose to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris pieces might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and about how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine many of these theoretical assumptions, including the mathematical basis for the hazard calculations, and to outline the conditions under which the simplifying assumptions hold. This study also employs empirical and theoretical information to test these assumptions, and makes recommendations on how to improve the accuracy of these calculations in the future.
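A bare-bones version of the casualty-expectation bookkeeping that these statistical assumptions feed into is sketched below: expected casualties are accumulated over latitude bands as the product of the probability that the debris lands in a band, the band's mean population density, and the effective casualty area. All numbers are assumed placeholders.

```python
# Expected-casualty calculation over latitude bands (assumed placeholder values).
bands = [
    # (probability debris lands in this band, mean population density per m^2)
    (0.10, 1.0e-5),
    (0.25, 3.0e-5),
    (0.40, 5.0e-5),
    (0.25, 2.0e-5),
]
casualty_area_m2 = 8.0      # assumed total effective casualty area of surviving debris

expected_casualties = sum(p * rho * casualty_area_m2 for p, rho in bands)
print(f"Expected casualties per reentry: {expected_casualties:.2e}")
```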
Frameworks for evaluating health research capacity strengthening: a qualitative study
2013-01-01
Background Health research capacity strengthening (RCS) projects are often complex and hard to evaluate. In order to inform health RCS evaluation efforts, we aimed to describe and compare key characteristics of existing health RCS evaluation frameworks: their process of development, purpose, target users, structure, content and coverage of important evaluation issues. A secondary objective was to explore what use had been made of the ESSENCE framework, which attempts to address one such issue: harmonising the evaluation requirements of different funders. Methods We identified and analysed health RCS evaluation frameworks published by seven funding agencies between 2004 and 2012, using a mixed methods approach involving structured qualitative analyses of documents, a stakeholder survey and consultations with key contacts in health RCS funding agencies. Results The frameworks were intended for use predominantly by the organisations themselves, and most were oriented primarily towards funders’ internal organisational performance requirements. The frameworks made limited reference to theories that specifically concern RCS. Generic devices, such as logical frameworks, were typically used to document activities, outputs and outcomes, but with little emphasis on exploring underlying assumptions or contextual constraints. Usage of the ESSENCE framework appeared limited. Conclusions We believe that there is scope for improving frameworks through the incorporation of more accessible information about how to do evaluation in practice; greater involvement of stakeholders, following evaluation capacity building principles; greater emphasis on explaining underlying rationales of frameworks; and structuring frameworks so that they separate generic and project-specific aspects of health RCS evaluation. The third and fourth of these improvements might assist harmonisation. PMID:24330628
Assessing Omitted Confounder Bias in Multilevel Mediation Models.
Tofighi, Davood; Kelley, Ken
2016-01-01
To draw valid inference about an indirect effect in a mediation model, there must be no omitted confounders. No omitted confounders means that there are no common causes of hypothesized causal relationships. When the no-omitted-confounder assumption is violated, inference about indirect effects can be severely biased and the results potentially misleading. Despite the increasing attention to address confounder bias in single-level mediation, this topic has received little attention in the growing area of multilevel mediation analysis. A formidable challenge is that the no-omitted-confounder assumption is untestable. To address this challenge, we first analytically examined the biasing effects of potential violations of this critical assumption in a two-level mediation model with random intercepts and slopes, in which all the variables are measured at Level 1. Our analytic results show that omitting a Level 1 confounder can yield misleading results about key quantities of interest, such as Level 1 and Level 2 indirect effects. Second, we proposed a sensitivity analysis technique to assess the extent to which potential violation of the no-omitted-confounder assumption might invalidate or alter the conclusions about the indirect effects observed. We illustrated the methods using an empirical study and provided computer code so that researchers can implement the methods discussed.
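A single-level analogue of the problem analysed above can be shown in a few lines: when a common cause U of the mediator and the outcome is omitted, the product-of-coefficients estimate a*b of the indirect effect is biased away from its true value (here zero). The data-generating values are illustrative assumptions, and the sketch is not the multilevel sensitivity analysis proposed in the article.

```python
# Omitted-confounder bias in a simple (single-level) mediation model
# (illustrative simulation; the true indirect effect is zero).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)
u = rng.normal(size=n)                        # omitted common cause of M and Y
m = 0.5 * x + 0.7 * u + rng.normal(size=n)    # true a = 0.5
y = 0.0 * m + 0.7 * u + rng.normal(size=n)    # true b = 0

a_hat = sm.OLS(m, sm.add_constant(x)).fit().params[1]
b_hat = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
print(a_hat * b_hat)    # clearly non-zero despite a true indirect effect of zero
```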
A Study of Crowd Ability and its Influence on Crowdsourced Evaluation of Design Concepts
2014-05-01
identifies the experts from the crowd, under the assumptions that (1) experts do exist and (2) only experts have consistent evaluations. These assumptions... for design evaluation tasks. Keywords: crowdsourcing, design evaluation, sparse evaluation ability, machine learning. ..."intelligence" of a much larger crowd of people with diverse backgrounds [1]. Crowdsourced evaluation, or the delegation of an evaluation task to a
A utility-theoretic model for QALYs and willingness to pay.
Klose, Thomas
2003-01-01
Despite the widespread use of quality-adjusted life years (QALYs) in economic evaluation studies, their utility-theoretic foundation remains unclear. A model for preferences over health, money, and time is presented in this paper. Under the usual assumptions of the original QALY model, an additively separable representation of the utilities in different periods exists. In contrast to the usual assumption that QALY weights depend solely on aspects of health-related quality of life, wealth-standardized QALY weights might vary with the wealth level in the presented extension of the original QALY model, resulting in an inconsistent measurement of QALYs. Further assumptions are presented to make the measurement of QALYs consistent with lifetime preferences over health and money. Even under these strict assumptions, QALYs and WTP (which can also be defined in this utility-theoretic model) are not equivalent preference-based measures of the effects of health technologies at the individual level. The results suggest that the individual WTP per QALY can depend on the magnitude of the QALY gain as well as on the disease burden when health influences the marginal utility of wealth. Further research on this structural aspect of preferences over health and wealth, and on quantifying its impact, seems warranted. Copyright 2002 John Wiley & Sons, Ltd.
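As a purely numerical illustration of the QALY bookkeeping at stake (not the paper's utility model), the toy calculation below sums assumed per-period quality weights with and without treatment and derives an implied willingness to pay per QALY; all figures are invented for illustration.

```python
# Toy QALY-gain and WTP-per-QALY calculation (all values are invented).
baseline_weights  = [0.70, 0.70, 0.65, 0.60]   # assumed yearly quality weights, no treatment
treatment_weights = [0.85, 0.85, 0.80, 0.75]   # assumed yearly quality weights, with treatment

qaly_gain = sum(treatment_weights) - sum(baseline_weights)
willingness_to_pay = 9000.0                    # assumed WTP for the treatment
print(f"QALY gain: {qaly_gain:.2f}")
print(f"Implied WTP per QALY: {willingness_to_pay / qaly_gain:,.0f}")
```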
Unsaturation of vapour pressure inside leaves of two conifer species
Cernusak, Lucas A.; Ubierna, Nerea; Jenkins, Michael W.; ...
2018-05-16
Stomatal conductance (g_s) impacts both photosynthesis and transpiration, and is therefore fundamental to the global carbon and water cycles, food production, and ecosystem services. Mathematical models provide the primary means of analysing this important leaf gas exchange parameter. A nearly universal assumption in such models is that the vapour pressure inside leaves (e_i) remains saturated under all conditions. The validity of this assumption has not been well tested, because so far e_i cannot be measured directly. Here, we test this assumption using a novel technique based on coupled measurements of leaf gas exchange and the stable isotope compositions of CO_2 and water vapour passing over the leaf. We applied this technique to mature individuals of two semiarid conifer species. In both species, e_i routinely dropped below saturation when leaves were exposed to moderate to high air vapour pressure deficits. Typical values of relative humidity in the intercellular air spaces were as low as 0.9 in Juniperus monosperma and 0.8 in Pinus edulis. These departures of e_i from saturation caused significant biases in calculations of g_s and the intercellular CO_2 concentration. Thus, our results refute the longstanding assumption that vapour pressure in plant leaves is saturated under all conditions.
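The bias described above can be illustrated with the usual simplified relation g_s = E / ((e_i - e_a) / P): the sketch below recomputes stomatal conductance from a fixed transpiration rate while letting the intercellular relative humidity fall below 1. The transpiration rate, temperatures, and pressures are assumed values, and boundary-layer effects are ignored.

```python
# How the assumed intercellular humidity changes the inferred stomatal conductance
# (simplified relation, assumed values, boundary layer ignored).
import math

def sat_vapour_pressure_kpa(t_c):
    # Tetens approximation to saturation vapour pressure (kPa)
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

P = 101.3          # air pressure, kPa
E = 0.003          # measured transpiration, mol m-2 s-1 (assumed)
t_leaf = 30.0      # leaf temperature, deg C (assumed)
e_air = 1.5        # vapour pressure of the air, kPa (assumed)

e_sat = sat_vapour_pressure_kpa(t_leaf)
for rh_i in (1.0, 0.9, 0.8):                  # intercellular relative humidity
    e_i = rh_i * e_sat
    g_s = E / ((e_i - e_air) / P)             # mol m-2 s-1
    print(f"RH_i = {rh_i:.1f}: g_s = {g_s:.3f} mol m-2 s-1")
```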
The Embedding Problem for Markov Models of Nucleotide Substitution
Verbyla, Klara L.; Yap, Von Bing; Pahwa, Anuj; Shao, Yunli; Huttley, Gavin A.
2013-01-01
Continuous-time Markov processes are often used to model the complex natural phenomenon of sequence evolution. To make the process of sequence evolution tractable, simplifying assumptions are often made about the sequence properties and the underlying process. The validity of one such assumption, time-homogeneity, has never been explored. Violations of this assumption can be found by identifying non-embeddability: a process is non-embeddable if it cannot be embedded in a continuous-time, time-homogeneous Markov process. In this study, non-embeddability was demonstrated to exist when modelling sequence evolution with Markov models. Evidence of non-embeddability was found primarily at the third codon position, possibly resulting from changes in mutation rate over time. Outgroup edges and those with a deeper time depth were found to have an increased probability of the underlying process being non-embeddable. Overall, low levels of non-embeddability were detected when examining individual edges of triads across a diverse set of alignments. Subsequent phylogenetic reconstruction analyses demonstrated that non-embeddability could affect the correct prediction of phylogenies, but at extremely low levels. Despite the existence of non-embeddability, there is minimal evidence of violations of the local time homogeneity assumption, and consequently the impact is likely to be minor. PMID:23935949
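A minimal version of the embeddability check described above is sketched below: a discrete-time transition matrix is embeddable in a time-homogeneous continuous-time chain only if a real matrix logarithm exists that is a valid rate matrix (non-negative off-diagonals, rows summing to zero). The sketch tests only the principal logarithm, and the example matrices are illustrative.

```python
# Test whether a transition matrix P is embeddable via its principal matrix
# logarithm (illustrative check; other logarithm branches are not examined).
import numpy as np
from scipy.linalg import logm

def is_embeddable(P, tol=1e-8):
    Q = logm(P)
    if np.max(np.abs(np.imag(Q))) > tol:       # principal logarithm must be real
        return False
    Q = np.real(Q)
    off_diag_ok = all(Q[i, j] >= -tol
                      for i in range(Q.shape[0])
                      for j in range(Q.shape[1]) if i != j)
    rows_ok = np.allclose(Q.sum(axis=1), 0.0, atol=1e-6)
    return off_diag_ok and rows_ok

P_ok  = np.array([[0.9, 0.1], [0.2, 0.8]])     # embeddable
P_bad = np.array([[0.1, 0.9], [0.9, 0.1]])     # not embeddable (negative eigenvalue)
print(is_embeddable(P_ok), is_embeddable(P_bad))
```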
Maths for medications: an analytical exemplar of the social organization of nurses' knowledge.
Dyjur, Louise; Rankin, Janet; Lane, Annette
2011-07-01
Within the literature that circulates in the discourses organizing nursing education, there are embedded assumptions that link student performance on maths examinations to safe medication practices. These assumptions are rooted historically, and they fundamentally shape educational approaches assumed to support safe practice and protect patients from nursing error. Here, we apply an institutional ethnographic lens to the body of literature that both supports and critiques the emphasis on numeracy skills and medication safety. We use this form of inquiry to open an alternate interrogation of these practices. Our main argument posits that numeracy skills serve as a powerful distraction for both students and teachers, and we suggest that they operate under specious claims of safety and objectivity. As nurse educators, we are captured by taken-for-granted understandings of practices intended to produce safety. We contend that some of these practices are not congruent with how competency actually unfolds in the everyday world of nursing practice. Ontologically grounded in the materiality of work processes, we suggest there is a serious disjuncture where educators' assessment and evaluation work links into broader nursing assumptions about medication work. These underlying assumptions and work processes produce contradictory tensions for students, teachers, and nurses in direct practice. © 2011 Blackwell Publishing Ltd.