Sample records for conditional independence assumption

  1. Posterior Predictive Checks for Conditional Independence between Response Time and Accuracy

    ERIC Educational Resources Information Center

    Bolsinova, Maria; Tijmstra, Jesper

    2016-01-01

    Conditional independence (CI) between response time and response accuracy is a fundamental assumption of many joint models for time and accuracy used in educational measurement. In this study, posterior predictive checks (PPCs) are proposed for testing this assumption. These PPCs are based on three discrepancy measures reflecting different…

  2. Naïve and Robust: Class-Conditional Independence in Human Classification Learning

    ERIC Educational Resources Information Center

    Jarecki, Jana B.; Meder, Björn; Nelson, Jonathan D.

    2018-01-01

    Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference…
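
    For illustration (not the authors' code), a minimal numpy sketch of the class-conditional independence (naive Bayes) assumption discussed above: the likelihood of a feature vector given a class is taken to factor into a product of per-feature terms, and the posterior follows from Bayes' rule. All numbers are made up.

    ```python
    import numpy as np

    prior = np.array([0.5, 0.5])                 # P(class), illustrative
    # P(feature_j = 1 | class): rows = classes, columns = features (illustrative values)
    p_feat = np.array([[0.8, 0.7, 0.6],
                       [0.2, 0.3, 0.4]])

    x = np.array([1, 0, 1])                      # observed binary feature vector

    # Class-conditional independence: the likelihood is a product of per-feature terms
    lik = np.prod(np.where(x == 1, p_feat, 1.0 - p_feat), axis=1)
    posterior = prior * lik / np.sum(prior * lik)
    print(posterior)                             # P(class | x) under the assumption
    ```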

  3. A Conditional Joint Modeling Approach for Locally Dependent Item Responses and Response Times

    ERIC Educational Resources Information Center

    Meng, Xiang-Bin; Tao, Jian; Chang, Hua-Hua

    2015-01-01

    The assumption of conditional independence between the responses and the response times (RTs) for a given person is common in RT modeling. However, when the speed of a test taker is not constant, this assumption will be violated. In this article we propose a conditional joint model for item responses and RTs, which incorporates a covariance…

  4. The quest for conditional independence in prospectivity modeling: weights-of-evidence, boost weights-of-evidence, and logistic regression

    NASA Astrophysics Data System (ADS)

    Schaeben, Helmut; Semmler, Georg

    2016-09-01

    The objective of prospectivity modeling is prediction of the conditional probability of the presence T = 1 or absence T = 0 of a target T given favorable or prohibitive predictors B, or construction of a two-class {0, 1} classification of T. A special case of logistic regression called weights-of-evidence (WofE) is geologists' favorite method of prospectivity modeling due to its apparent simplicity. However, the numerical simplicity is deceptive, as it is implied by the severe mathematical modeling assumption of joint conditional independence of all predictors given the target. General weights of evidence are explicitly introduced which are as simple to estimate as conventional weights, i.e., by counting, but do not require conditional independence. Complementary to the regression view is the classification view on prospectivity modeling. Boosting is the construction of a strong classifier from a set of weak classifiers. From the regression point of view it is closely related to logistic regression. Boost weights-of-evidence (BoostWofE) was introduced into prospectivity modeling to counterbalance violations of the assumption of conditional independence even though relaxation of modeling assumptions with respect to weak classifiers was not the (initial) purpose of boosting. In the original publication of BoostWofE a fabricated dataset was used to "validate" this approach. Using the same fabricated dataset it is shown that BoostWofE cannot generally compensate for a lack of conditional independence, whatever the order in which the predictors are processed consecutively. Thus the alleged features of BoostWofE are disproved by way of counterexamples, while the theoretical finding is confirmed that logistic regression including interaction terms can exactly compensate for violations of joint conditional independence if the predictors are indicators.
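
    For reference, the weights-of-evidence update alluded to above can be written in one line; under joint conditional independence of the binary predictors given the target, the posterior log odds are the prior log odds plus a sum of weights (a standard textbook formulation, not quoted from the paper):

    ```latex
    \operatorname{logit} P(T=1 \mid B_1,\dots,B_n)
      = \operatorname{logit} P(T=1) + \sum_{i=1}^{n} W_i ,
    \qquad
    W_i^{+} = \ln\frac{P(B_i=1 \mid T=1)}{P(B_i=1 \mid T=0)} ,
    \quad
    W_i^{-} = \ln\frac{P(B_i=0 \mid T=1)}{P(B_i=0 \mid T=0)} ,
    ```

    where W_i equals W_i^+ or W_i^- according to whether predictor B_i is present or absent in a cell. This additive form is exactly a logistic regression without interaction terms, which is why violations of joint conditional independence bias WofE estimates.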

  5. Feature inference with uncertain categorization: Re-assessing Anderson's rational model.

    PubMed

    Konovalova, Elizaveta; Le Mens, Gaël

    2017-09-18

    A key function of categories is to support predictions about unobserved features of objects. At the same time, humans are often in situations where the categories of the objects they perceive are uncertain. In an influential paper, Anderson (Psychological Review, 98(3), 409-429, 1991) proposed a rational model for feature inferences with uncertain categorization. A crucial feature of this model is the conditional independence assumption: it assumes that the within-category feature correlation is zero. In prior research, this model has been found to provide a poor fit to participants' inferences. However, this evidence is restricted to task environments inconsistent with the conditional independence assumption. Currently available evidence thus provides little information about how this model would fit participants' inferences in a setting with conditional independence. In four experiments based on a novel paradigm and one experiment based on an existing paradigm, we assess the performance of Anderson's model under conditional independence. We find that this model predicts participants' inferences better than competing models. One model assumes that inferences are based on just the most likely category. The second model is insensitive to categories but sensitive to overall feature correlation. The performance of Anderson's model is evidence that inferences were influenced not only by the more likely category but also by the other candidate category. Our findings suggest that a version of Anderson's model that relaxes the conditional independence assumption will likely perform well in environments characterized by within-category feature correlation.
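
    As a sketch of the model comparison described above (notation mine, not the authors'), Anderson's rational model predicts an unobserved feature f from the observed features x of an object by averaging over all candidate categories, whereas the single-category model keeps only the most likely one:

    ```latex
    P(f \mid x) = \sum_{k} P(f \mid c_k)\, P(c_k \mid x)
    \qquad \text{vs.} \qquad
    P(f \mid x) \approx P(f \mid c_{k^{*}}),
    \quad k^{*} = \arg\max_{k} P(c_k \mid x),
    ```

    with the conditional independence assumption entering through P(f | c_k): within a category, features carry no information about one another.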

  6. The Comparative Performance of Conditional Independence Indices

    ERIC Educational Resources Information Center

    Kim, Doyoung; De Ayala, R. J.; Ferdous, Abdullah A.; Nering, Michael L.

    2011-01-01

    To realize the benefits of item response theory (IRT), one must have model-data fit. One facet of a model-data fit investigation involves assessing the tenability of the conditional item independence (CII) assumption. In this Monte Carlo study, the comparative performance of 10 indices for identifying conditional item dependence is assessed. The…

  7. Does McNemar's test compare the sensitivities and specificities of two diagnostic tests?

    PubMed

    Kim, Soeun; Lee, Woojoo

    2017-02-01

    McNemar's test is often used in practice to compare the sensitivities and specificities for the evaluation of two diagnostic tests. For correct evaluation of accuracy, an intuitive recommendation is to test the diseased and the non-diseased groups separately so that the sensitivities can be compared among the diseased, and specificities can be compared among the healthy group of people. This paper provides a rigorous theoretical framework for this argument and studies the validity of McNemar's test regardless of the conditional independence assumption. We derive McNemar's test statistic under the null hypothesis considering both assumptions of conditional independence and conditional dependence. We then perform power analyses to show how the result is affected by the amount of conditional dependence under the alternative hypothesis.
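
    For reference (not from the paper), McNemar's statistic for two tests applied to the same subjects depends only on the discordant cells of the paired 2 x 2 table; a minimal sketch of the comparison of sensitivities within the diseased group, with made-up counts:

    ```python
    import numpy as np
    from scipy.stats import chi2

    # Paired 2x2 table within the diseased group (illustrative counts):
    # rows = test 1 (+/-), columns = test 2 (+/-)
    table = np.array([[55, 12],
                      [ 5, 28]])

    b, c = table[0, 1], table[1, 0]      # discordant pairs
    stat = (b - c) ** 2 / (b + c)        # McNemar's chi-square, 1 df, no continuity correction
    p_value = chi2.sf(stat, df=1)
    print(stat, p_value)                 # H0: the two tests have equal sensitivity
    ```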

  8. Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions

    DOE PAGES

    Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.

    2017-01-09

    We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations w.r.t. probability laws associated with a discretization, for instance in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than that of independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state-spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61–123). The assumptions are verified for an example.

  9. Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.

    We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations w.r.t. probability laws associated with a discretization, for instance in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than that of independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state-spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61–123). The assumptions are verified for an example.

  10. Conditional Dependence between Response Time and Accuracy: An Overview of its Possible Sources and Directions for Distinguishing between Them

    PubMed Central

    Bolsinova, Maria; Tijmstra, Jesper; Molenaar, Dylan; De Boeck, Paul

    2017-01-01

    With the widespread use of computerized tests in educational measurement and cognitive psychology, registration of response times has become feasible in many applications. Considering these response times helps provide a more complete picture of the performance and characteristics of persons beyond what is available based on response accuracy alone. Statistical models such as the hierarchical model (van der Linden, 2007) have been proposed that jointly model response time and accuracy. However, these models make restrictive assumptions about the response processes (RPs) that may not be realistic in practice, such as the assumption that the association between response time and accuracy is fully explained by taking speed and ability into account (conditional independence). Assuming conditional independence forces one to ignore that many relevant individual differences may play a role in the RPs beyond overall speed and ability. In this paper, we critically consider the assumption of conditional independence and the important ways in which it may be violated in practice from a substantive perspective. We consider both conditional dependences that may arise when all persons attempt to solve the items in similar ways (homogeneous RPs) and those that may be due to persons differing in fundamental ways in how they deal with the items (heterogeneous processes). The paper provides an overview of what we can learn from observed conditional dependences. We argue that explaining and modeling these differences in the RPs is crucial to increase both the validity of measurement and our understanding of the relevant RPs. PMID:28261136
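
    The conditional independence assumption at issue here can be stated compactly; in the hierarchical framework, for a person with ability theta and speed tau, the response X_i and response time T_i on item i are assumed to factor as (standard notation, not quoted from the paper):

    ```latex
    f(x_i, t_i \mid \theta, \tau) \;=\; P(X_i = x_i \mid \theta)\; f(t_i \mid \tau),
    ```

    so that any residual association between accuracy and time that remains after conditioning on speed and ability is, by definition, a conditional dependence of the kind surveyed in this paper.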

  11. Bayes Nets and Babies: Infants' Developing Statistical Reasoning Abilities and Their Representation of Causal Knowledge

    ERIC Educational Resources Information Center

    Sobel, David M.; Kirkham, Natasha Z.

    2007-01-01

    A fundamental assumption of the causal graphical model framework is the Markov assumption, which posits that learners can discriminate between two events that are dependent because of a direct causal relation between them and two events that are independent conditional on the value of another event(s). Sobel and Kirkham (2006) demonstrated that…

  12. "A violation of the conditional independence assumption in the two-high-threshold Model of recognition memory": Correction to Chen, Starns, and Rotello (2015).

    PubMed

    2016-01-01

    Reports an error in "A violation of the conditional independence assumption in the two-high-threshold model of recognition memory" by Tina Chen, Jeffrey J. Starns and Caren M. Rotello (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2015[Jul], Vol 41[4], 1215-1222). In the article, Chen et al. compared three models: a continuous signal detection model (SDT), a standard two-high-threshold discrete-state model in which detect states always led to correct responses (2HT), and a full-mapping version of the 2HT model in which detect states could lead to either correct or incorrect responses. After publication, Rani Moran (personal communication, April 21, 2015) identified two errors that impact the reported fit statistics for the Bayesian information criterion (BIC) metric of all models as well as the Akaike information criterion (AIC) results for the full-mapping model. The errors are described in the erratum. (The following abstract of the original article appeared in record 2014-56216-001.) The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are independent of the probability that an item yields a particular state (e.g., both strong and weak items that are detected as old have the same probability of producing a highest-confidence "old" response). We tested this conditional independence assumption by presenting nouns 1, 2, or 4 times. To maximize the strength of some items, "superstrong" items were repeated 4 times and encoded in conjunction with pleasantness, imageability, anagram, and survival processing tasks. The 2HT model failed to simultaneously capture the response rate data for all item classes, demonstrating that the data violated the conditional independence assumption. In contrast, a Gaussian signal detection model, which posits that the level of confidence that an item is "old" or "new" is a function of its continuous strength value, provided a good account of the data. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
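
    A compact way to state the assumption being tested (my notation, a hedged sketch rather than the authors' exact parameterization): for a studied item of strength class s, the 2HT model writes the probability of responding at confidence level r as

    ```latex
    P(R = r \mid \text{old item of class } s) \;=\; d_s\, m_o(r) \;+\; (1 - d_s)\, g(r),
    ```

    where only the detection probability d_s may depend on strength, while the state-response mapping m_o(r) and the guessing distribution g(r) are shared across item classes; the superstrong-item manipulation tests exactly this invariance.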

  13. Exploring the Robustness of a Unidimensional Item Response Theory Model with Empirically Multidimensional Data

    ERIC Educational Resources Information Center

    Anderson, Daniel; Kahn, Joshua D.; Tindal, Gerald

    2017-01-01

    Unidimensionality and local independence are two common assumptions of item response theory. The former implies that all items measure a common latent trait, while the latter implies that responses are independent, conditional on respondents' location on the latent trait. Yet, few tests are truly unidimensional. Unmodeled dimensions may result in…

  14. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.

    PubMed

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-09-01

    We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.

  15. On the derivation of approximations to cellular automata models and the assumption of independence.

    PubMed

    Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V

    2014-07-01

    Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes; however, these models only hold under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model. These sources are the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model, and the assumption of independence between the states of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the discrepancy between our approximation and the cellular automata is completely based on the assumption of independence. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. (In)validity of the constant field and constant currents assumptions in theories of ion transport.

    PubMed Central

    Syganow, A; von Kitzing, E

    1999-01-01

    Constant electric fields and constant ion currents are often considered in theories of ion transport. Therefore, it is important to understand the validity of these helpful concepts. The constant field assumption requires that the charge density of permeant ions and flexible polar groups is virtually voltage independent. We present analytic relations that indicate the conditions under which the constant field approximation applies. Barrier models are frequently fitted to experimental current-voltage curves to describe ion transport. These models are based on three fundamental characteristics: a constant electric field, negligible concerted motions of ions inside the channel (an ion can enter only an empty site), and concentration-independent energy profiles. An analysis of those fundamental assumptions of barrier models shows that those approximations require large barriers because the electrostatic interaction is strong and has a long range. In the constant currents assumption, the current of each permeating ion species is considered to be constant throughout the channel; thus ion pairing is explicitly ignored. In inhomogeneous steady-state systems, the association rate constant determines the strength of ion pairing. Among permeable ions, however, the ion association rate constants are not small, according to modern diffusion-limited reaction rate theories. A mathematical formulation of a constant currents condition indicates that ion pairing very likely has an effect but does not dominate ion transport. PMID:9929480

  17. Multivariate Generalizations of Student's t-Distribution. ONR Technical Report. [Biometric Lab Report No. 90-3.]

    ERIC Educational Resources Information Center

    Gibbons, Robert D.; And Others

    In the process of developing a conditionally-dependent item response theory (IRT) model, the problem arose of modeling an underlying multivariate normal (MVN) response process with general correlation among the items. Without the assumption of conditional independence, for which the underlying MVN cdf takes on comparatively simple forms and can be…

  18. Testing the linearity and independence assumptions of the generalized matching law for reinforcer magnitude: a residual meta-analysis.

    PubMed

    Cording, Jacinta R; McLean, Anthony P; Grace, Randolph C

    2011-05-01

    We conducted a residual meta-analysis to test the assumptions of the generalized matching law that effects of relative reinforcer magnitude on response allocation in concurrent schedules can be described by a power function and are independent from the effects of relative reinforcer rate. We identified five studies which varied magnitude ratios over at least four levels and six studies in which reinforcer rate and magnitude ratios were varied factorially. The generalized matching law provided a reasonably good description of the data, accounting for 77.1% and 90.1% of the variance in the two sets of studies. Results of polynomial regressions showed that there were no systematic patterns in pooled residuals as a function of predicted log response ratios for data sets in which relative magnitude was varied. For data sets in which relative rate and magnitude were varied factorially, there was a significant negative cubic pattern in the pooled residuals, suggesting that obtained response allocation was less extreme than predicted for conditions with extreme predicted values. However, subsequent analyses showed that this result was associated with results from conditions in one study in which the product of the rate and magnitude ratios was 63:1, and in which response allocation may have been attenuated by a ceiling effect. When data from these conditions were omitted, there were no significant components in the residuals. Although the number of available studies was small, results provide tentative support for the assumptions of the generalized matching law that effects of reinforcer magnitude ratios on choice can be described by a power function and are independent from reinforcer rate ratios. Copyright © 2011 Elsevier B.V. All rights reserved.
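
    The two assumptions under test correspond to the standard form of the generalized matching law for reinforcer rate and magnitude (a textbook formulation, not quoted from the paper): power-function effects of each ratio, combined additively and independently on a log scale,

    ```latex
    \log\frac{B_1}{B_2} \;=\; a_r \log\frac{R_1}{R_2} \;+\; a_m \log\frac{M_1}{M_2} \;+\; \log c,
    ```

    where B, R, and M denote response rates, reinforcer rates, and reinforcer magnitudes, a_r and a_m are sensitivity parameters, and c is bias; systematic structure in the residuals of this regression is what the meta-analysis looks for.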

  19. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
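
    A minimal sketch (not the authors' code) of how an empirical likelihood ratio model combines evidence layers under conditional independence: the posterior odds of a landslide in a grid cell are the prior odds multiplied by the per-predictor likelihood ratios. All values are made up.

    ```python
    import numpy as np

    prior = 0.02                                  # prior probability of a landslide in a cell
    prior_odds = prior / (1.0 - prior)

    # Empirical likelihood ratios P(value | landslide) / P(value | no landslide) for the
    # predictor values observed in one cell (slope class, aspect class, lithology).
    likelihood_ratios = np.array([3.1, 1.4, 0.8])

    # Conditional independence of the predictors given the target lets the ratios multiply
    posterior_odds = prior_odds * np.prod(likelihood_ratios)
    posterior = posterior_odds / (1.0 + posterior_odds)
    print(posterior)                              # relative hazard estimate for the cell
    ```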

  20. Economic burden of chronic conditions among households in Myanmar: the case of angina and asthma.

    PubMed

    Htet, Soe; Alam, Khurshid; Mahal, Ajay

    2015-11-01

    Non-communicable diseases (NCDs) are becoming a major source of the national disease burden in Myanmar with potentially serious economic implications. Using data on 5484 households from the World Health Survey (WHS), this study assessed the household-level economic burden of two chronic conditions, angina and asthma, in Myanmar. Propensity score matching (PSM) and coarsened exact matching (CEM) methods were used to compare household out-of-pocket (OOP) spending, catastrophic and impoverishment effects, reliance on borrowing or asset sales to finance OOP healthcare payments, and employment among households reporting a member with angina (asthma) to matched households, with and without adjusting for comorbidities. Sensitivity analyses were carried out to assess the impacts of alternative assumptions on common support and potential violations of the assumption of independence between households being angina (asthma) affected and household economic outcomes, conditional on the variables used for matching (conditional independence). Households with angina (asthma) reported greater OOP spending (angina: range I$1.94-I$4.31; asthma: range I$1.53-I$2.01) (I$1 = 125.09 Myanmar Kyats; I$ = International Dollar), almost half of which was spending on medicines; higher rates of catastrophic spending based on a 20% threshold ratio of OOP to total household spending (angina: range 6-7%; asthma: range 3-5%); greater reliance on borrowing and sale of assets to finance healthcare (angina: range 12-14%; asthma: range 40-49%); and increased medical impoverishment and lower employment rates than matched controls. There were no statistically significant differences in OOP expenses for inpatient care between angina-affected (asthma-affected) households and matched controls. Our results were generally robust to multiple methods of matching. However, conclusions for medical impoverishment impacts were not robust to potential violations of the conditional independence assumption. Myanmar is expanding public spending on health and has recently launched an innovative programme for supporting hospital-based care for poor households. Our findings suggest the need for interventions to address OOP expenses associated with outpatient care (including drugs) for chronic conditions in Myanmar's population. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2014; all rights reserved.
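
    The matching step described above can be sketched as follows (hypothetical variable names and simulated data, not the authors' implementation): estimate a propensity score for reporting angina from household covariates, match each affected household to its nearest unaffected neighbour on that score, and compare out-of-pocket spending; the comparison is valid only under the conditional independence assumption discussed in the abstract.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000

    # Hypothetical household data
    df = pd.DataFrame({
        "age_head": rng.normal(45, 12, n),
        "hh_size": rng.integers(1, 8, n),
        "urban": rng.integers(0, 2, n),
    })
    df["angina"] = rng.binomial(1, 0.15, n)
    df["oop_spend"] = rng.gamma(2.0, 2.0, n) + 1.5 * df["angina"]

    # 1. Propensity score: P(angina = 1 | covariates)
    X = df[["age_head", "hh_size", "urban"]]
    df["ps"] = LogisticRegression(max_iter=1000).fit(X, df["angina"]).predict_proba(X)[:, 1]

    # 2. Nearest-neighbour matching on the propensity score (1:1, with replacement)
    treated = df[df["angina"] == 1]
    control = df[df["angina"] == 0]
    idx = np.abs(control["ps"].values[None, :] - treated["ps"].values[:, None]).argmin(axis=1)
    matched_controls = control.iloc[idx]

    # 3. Mean difference in OOP spending among matched pairs (no unobserved confounding assumed)
    print(treated["oop_spend"].mean() - matched_controls["oop_spend"].mean())
    ```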

  1. Assessing measurement error in surveys using latent class analysis: application to self-reported illicit drug use in data from the Iranian Mental Health Survey.

    PubMed

    Khalagi, Kazem; Mansournia, Mohammad Ali; Rahimi-Movaghar, Afarin; Nourijelyani, Keramat; Amin-Esmaeili, Masoumeh; Hajebi, Ahmad; Sharif, Vandad; Radgoodarzi, Reza; Hefazi, Mitra; Motevalian, Abbas

    2016-01-01

    Latent class analysis (LCA) is a method of assessing and correcting measurement error in surveys. The local independence assumption in LCA requires that the indicators are independent of each other conditional on the latent variable. Violation of this assumption leads to unreliable results. We explored this issue by using LCA to estimate the prevalence of illicit drug use in the Iranian Mental Health Survey. The following three indicators were included in the LCA models: five or more instances of using any illicit drug in the past 12 months (indicator A), any use of any illicit drug in the past 12 months (indicator B), and the self-perceived need of treatment services or having received treatment for a substance use disorder in the past 12 months (indicator C). Gender was also used in all LCA models as a grouping variable. One LCA model using indicators A and B, as well as 10 different LCA models using indicators A, B, and C, were fitted to the data. The three models that had the best fit to the data included the following correlations between indicators: (AC and AB), (AC), and (AC, BC, and AB). The estimated prevalence of illicit drug use based on these three models was 28.9%, 6.2% and 42.2%, respectively. None of these models completely controlled for violation of the local independence assumption. In order to perform unbiased estimations using the LCA approach, the factors violating the local independence assumption (behaviorally correlated error, bivocality, and latent heterogeneity) should be completely taken into account in all models using well-known methods.
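
    The local independence assumption referred to here can be written out explicitly (standard LCA notation, not quoted from the paper): given latent class membership, the three indicators are independent, so the observed joint distribution is a mixture of products,

    ```latex
    P(A=a, B=b, C=c) \;=\; \sum_{k} P(X=k)\, P(A=a \mid X=k)\, P(B=b \mid X=k)\, P(C=c \mid X=k).
    ```

    Allowing a residual association between, say, A and C replaces P(A | X) P(C | X) by a joint term P(A, C | X), which is what the better-fitting models above do.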

  2. Instrumental variables as bias amplifiers with general outcome and confounding.

    PubMed

    Ding, P; VanderWeele, T J; Robins, J M

    2017-06-01

    Drawing causal inference with observational studies is the central pillar of many disciplines. One sufficient condition for identifying the causal effect is that the treatment-outcome relationship is unconfounded conditional on the observed covariates. It is often believed that the more covariates we condition on, the more plausible this unconfoundedness assumption is. This belief has had a huge impact on practical causal inference, suggesting that we should adjust for all pretreatment covariates. However, when there is unmeasured confounding between the treatment and outcome, estimators adjusting for some pretreatment covariate might have greater bias than estimators without adjusting for this covariate. This kind of covariate is called a bias amplifier, and includes instrumental variables that are independent of the confounder, and affect the outcome only through the treatment. Previously, theoretical results for this phenomenon have been established only for linear models. We fill in this gap in the literature by providing a general theory, showing that this phenomenon happens under a wide class of models satisfying certain monotonicity assumptions. We further show that when the treatment follows an additive or multiplicative model conditional on the instrumental variable and the confounder, these monotonicity assumptions can be interpreted as the signs of the arrows of the causal diagrams.
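
    A small simulation (mine, for the linear case only, not the authors' general theory) illustrates the phenomenon: with an unmeasured confounder U and an instrument Z that affects only the treatment, adjusting for Z makes the bias of the estimated treatment effect larger, not smaller.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    tau = 1.0                                   # true causal effect of T on Y

    Z = rng.normal(size=n)                      # instrument: affects T only
    U = rng.normal(size=n)                      # unmeasured confounder
    T = 1.5 * Z + 1.0 * U + rng.normal(size=n)
    Y = tau * T + 2.0 * U + rng.normal(size=n)

    def ols_coef_on_T(covariates, y):
        X = np.column_stack([np.ones(n)] + covariates)
        return np.linalg.lstsq(X, y, rcond=None)[0][1]   # coefficient on the first covariate, T

    bias_unadjusted = ols_coef_on_T([T], Y) - tau
    bias_adjusted = ols_coef_on_T([T, Z], Y) - tau       # conditioning on the instrument
    print(bias_unadjusted, bias_adjusted)                # the adjusted bias is the larger one
    ```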

  3. On an additive partial correlation operator and nonparametric estimation of graphical models.

    PubMed

    Lee, Kuang-Yao; Li, Bing; Zhao, Hongyu

    2016-09-01

    We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance.

  4. On an additive partial correlation operator and nonparametric estimation of graphical models

    PubMed Central

    Li, Bing; Zhao, Hongyu

    2016-01-01

    We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance. PMID:29422689

  5. Conditional Covariance-Based Subtest Selection for DIMTEST

    ERIC Educational Resources Information Center

    Froelich, Amy G.; Habing, Brian

    2008-01-01

    DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam…

  6. Invited commentary: boundless science--putting natural direct and indirect effects in a clearer empirical context.

    PubMed

    Naimi, Ashley I

    2015-07-15

    Epidemiologists are increasingly using natural effects for applied mediation analyses, yet 1 key identifying assumption is unintuitive and subject to some controversy. In this issue of the Journal, Jiang and VanderWeele (Am J Epidemiol. 2015;182(2):105-108) formalize the conditions under which the difference method can be used to estimate natural indirect effects. In this commentary, I discuss implications of the controversial "cross-worlds" independence assumption needed to identify natural effects. I argue that with a binary mediator, a simple modification of the authors' approach will provide bounds for natural direct and indirect effect estimates that better reflect the capacity of the available data to support empirical statements on the presence of mediated effects. I discuss complications encountered when odds ratios are used to decompose effects, as well as the implications of incorrectly assuming the absence of exposure-induced mediator-outcome confounders. I note that the former problem can be entirely resolved using collapsible measures of effect, such as risk ratios. In the Appendix, I use previous derivations for natural direct effect bounds on the risk difference scale to provide bounds on the odds ratio scale that accommodate 1) uncertainty due to the cross-world independence assumption and 2) uncertainty due to the cross-world independence assumption and the presence of exposure-induced mediator-outcome confounders. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Sparse covariance estimation in heterogeneous samples*

    PubMed Central

    Rodríguez, Abel; Lenkoski, Alex; Dobra, Adrian

    2015-01-01

    Standard Gaussian graphical models implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where such an assumption is not satisfied, leading in turn to nonlinear relationships among variables. To address such situations we explore mixtures of Gaussian graphical models; in particular, we consider both infinite mixtures and infinite hidden Markov models where the emission distributions correspond to Gaussian graphical models. Such models allow us to divide a heterogeneous population into homogeneous groups, with each cluster having its own conditional independence structure. As an illustration, we study the trends in foreign exchange rate fluctuations in the pre-Euro era. PMID:26925189
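
    For context, a standard fact underlying this abstract (not code from the paper): in a Gaussian graphical model, conditional independence of two variables given the rest corresponds to a zero entry in the precision (inverse covariance) matrix, which is the structure each mixture component is allowed to have here.

    ```python
    import numpy as np

    # Precision matrix with Theta[0, 2] = 0: variables 0 and 2 are conditionally
    # independent given variable 1 in the corresponding Gaussian graphical model.
    Theta = np.array([[2.0, 0.6, 0.0],
                      [0.6, 2.0, 0.6],
                      [0.0, 0.6, 2.0]])
    Sigma = np.linalg.inv(Theta)

    # The marginal correlation between variables 0 and 2 is not zero ...
    print(Sigma[0, 2] / np.sqrt(Sigma[0, 0] * Sigma[2, 2]))

    # ... but their partial correlation, read off the precision matrix, is exactly zero.
    print(-Theta[0, 2] / np.sqrt(Theta[0, 0] * Theta[2, 2]))
    ```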

  8. Relative Reinforcer Rates and Magnitudes Do Not Control Concurrent Choice Independently

    ERIC Educational Resources Information Center

    Elliffe, Douglas; Davison, Michael; Landon, Jason

    2008-01-01

    One assumption of the matching approach to choice is that different independent variables control choice independently of each other. We tested this assumption for reinforcer rate and magnitude in an extensive parametric experiment. Five pigeons responded for food reinforcement on switching-key concurrent variable-interval variable-interval…

  9. The assessment of knowledge and learning in competence spaces: The gain-loss model for dependent skills.

    PubMed

    Anselmi, Pasquale; Stefanutti, Luca; de Chiusole, Debora; Robusto, Egidio

    2017-11-01

    The gain-loss model (GaLoM) is a formal model for assessing knowledge and learning. In its original formulation, the GaLoM assumes independence among the skills. Such an assumption is not reasonable in several domains, in which some preliminary knowledge is the foundation for other knowledge. This paper presents an extension of the GaLoM to the case in which the skills are not independent, and the dependence relation among them is described by a well-graded competence space. The probability of mastering skill s at the pretest is conditional on the presence of all skills on which s depends. The probabilities of gaining or losing skill s when moving from pretest to posttest are conditional on the mastery of s at the pretest, and on the presence at the posttest of all skills on which s depends. Two formulations of the model are presented, in which the learning path is allowed to change from pretest to posttest or not. A simulation study shows that models based on the true competence space obtain a better fit than models based on false competence spaces, and are also characterized by a higher assessment accuracy. An empirical application shows that models based on pedagogically sound assumptions about the dependencies among the skills obtain a better fit than models assuming independence among the skills. © 2017 The British Psychological Society.

  10. Modeling Local Item Dependence Due to Common Test Format with a Multidimensional Rasch Model

    ERIC Educational Resources Information Center

    Baghaei, Purya; Aryadoust, Vahid

    2015-01-01

    Research shows that test method can exert a significant impact on test takers' performance and thereby contaminate test scores. We argue that common test method can exert the same effect as common stimuli and violate the conditional independence assumption of item response theory models because, in general, subsets of items which have a shared…

  11. A Violation of the Conditional Independence Assumption in the Two-High-Threshold Model of Recognition Memory

    ERIC Educational Resources Information Center

    Chen, Tina; Starns, Jeffrey J.; Rotello, Caren M.

    2015-01-01

    The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are…

  12. The Robustness of LOGIST and BILOG IRT Estimation Programs to Violations of Local Independence.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    One of the important underlying assumptions of all item response theory (IRT) models is that of local independence. This assumption requires that the response to an item on a test not be influenced by the response to any other items. This assumption is often taken for granted, with little or no scrutiny of the response process required to answer…

  13. A Unimodal Model for Double Observer Distance Sampling Surveys.

    PubMed

    Becker, Earl F; Christ, Aaron M

    2015-01-01

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the 2 observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
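
    A minimal sketch (not the authors' parameterization) of a two-piece normal detection function: unimodal with a single apex at mu and possibly different spreads on either side, which is what keeps it compatible with the point independence assumption.

    ```python
    import numpy as np

    def two_piece_normal(x, mu, sigma_left, sigma_right, g0=1.0):
        """Detection probability with a single apex at mu (illustrative parameterization)."""
        x = np.asarray(x, dtype=float)
        sigma = np.where(x < mu, sigma_left, sigma_right)
        return g0 * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    distances = np.linspace(0, 600, 7)            # perpendicular distances in metres
    print(two_piece_normal(distances, mu=150, sigma_left=80, sigma_right=200))
    ```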

  14. Quantifying Wrinkle Features of Thin Membrane Structures

    NASA Technical Reports Server (NTRS)

    Jacobson, Mindy B.; Iwasa, Takashi; Naton, M. C.

    2004-01-01

    For future micro-systems utilizing membrane-based structures, quantified predictions of wrinkling behavior in terms of amplitude, angle and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made. This work demonstrates that critical assumptions include the effects of gravity, the assumed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 m x 0.2 m membrane is treated as a structural material with non-negligible bending stiffness. Finite element modeling is used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thicknesses in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that relationships between overall strain energy density and thickness for cases with differing initial conditions are independent of assumed initial conditions. In addition, analysis results indicate that the relationship between wrinkle amplitude scale (W/t) and structural scale (L/t) is independent of the nonlinear relationship between thickness and stiffness.

  15. Geodesic least squares regression on information manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verdoolaege, Geert, E-mail: geert.verdoolaege@ugent.be

    We present a novel regression method targeted at situations with significant uncertainty on both the dependent and independent variables or with non-Gaussian distribution models. Unlike the classic regression model, the conditional distribution of the response variable suggested by the data need not be the same as the modeled distribution. Instead, they are matched by minimizing the Rao geodesic distance between them. This yields a more flexible regression method that is less constrained by the assumptions imposed through the regression model. As an example, we demonstrate the improved resistance of our method against some flawed model assumptions and we apply this to scaling laws in magnetic confinement fusion.

  16. Evaluation of assumptions in soil moisture triple collocation analysis

    USDA-ARS's Scientific Manuscript database

    Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...

  17. Between-litter variation in developmental studies of hormones and behavior: Inflated false positives and diminished power.

    PubMed

    Williams, Donald R; Carlsson, Rickard; Bürkner, Paul-Christian

    2017-10-01

    Developmental studies of hormones and behavior often include littermates: rodent siblings that share early-life experiences and genes. Due to between-litter variation (i.e., litter effects), the statistical assumption of independent observations is untenable. In two literatures (natural variation in maternal care and prenatal stress), entire litters are categorized based on maternal behavior or experimental condition. Here, we (1) review both literatures; (2) simulate false positive rates for commonly used statistical methods in each literature; and (3) characterize the small-sample performance of multilevel models (MLM) and generalized estimating equations (GEE). We found that the assumption of independence was routinely violated (>85%), false positives (α=0.05) exceeded nominal levels (up to 0.70), and power (1-β) rarely surpassed 0.80 (even for optimistic sample and effect sizes). Additionally, we show that MLMs and GEEs have adequate performance for common research designs. We discuss implications for the extant literature and the field of behavioral neuroendocrinology, and provide recommendations. Copyright © 2017 Elsevier Inc. All rights reserved.
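
    A minimal sketch (hypothetical data and variable names, not the authors' code) of the multilevel approach recommended above: a random intercept per litter absorbs the shared within-litter variation that otherwise violates the independence assumption and inflates false positives.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)

    # Hypothetical design: 20 litters x 6 pups, whole litters assigned to prenatal stress vs control
    litter = np.repeat(np.arange(20), 6)
    condition = np.repeat(rng.integers(0, 2, 20), 6)
    litter_effect = np.repeat(rng.normal(0, 1.0, 20), 6)     # shared within-litter variation
    y = 0.3 * condition + litter_effect + rng.normal(0, 1.0, litter.size)

    df = pd.DataFrame({"y": y, "condition": condition, "litter": litter})

    # Multilevel model: fixed effect of condition, random intercept for litter
    fit = smf.mixedlm("y ~ condition", data=df, groups=df["litter"]).fit()
    print(fit.summary())
    ```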

  18. Model specification in oral health-related quality of life research.

    PubMed

    Kieffer, Jacobien M; Verrips, Erik; Hoogstraten, Johan

    2009-10-01

    The aim of this study was to analyze conventional wisdom regarding the construction and analysis of oral health-related quality of life (OHRQoL) questionnaires and to outline statistical complications. Most methods used for developing and analyzing questionnaires, such as factor analysis and Cronbach's alpha, presume psychological constructs to be latent, inferring a reflective measurement model with the underlying assumption of local independence. Local independence implies that the latent variable explains why the variables observed are related. Many OHRQoL questionnaires are analyzed as if they were based on a reflective measurement model; local independence is thus assumed. This assumption requires these questionnaires to consist solely of items that reflect, instead of determine, OHRQoL. The tenability of this assumption is the main topic of the present study. It is argued that OHRQoL questionnaires are a mix of both a formative measurement model and a reflective measurement model, thus violating the assumption of local independence. The implications are discussed.

  19. Knowledge Discovery from Relations

    ERIC Educational Resources Information Center

    Guo, Zhen

    2010-01-01

    A basic and classical assumption in the machine learning research area is the "randomness assumption" (also known as the i.i.d. assumption), which states that data are assumed to be independently and identically generated by some known or unknown distribution. This assumption, which is the foundation of most existing approaches in the literature, simplifies…

  20. Pilot scanning patterns while viewing cockpit displays of traffic information

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Stark, L.

    1981-01-01

    Scanning eye movements of airline pilots were recorded while they judged air traffic situations displayed on cockpit displays of traffic information (CDTI). The observed 1st order transition patterns between points of interest on the display showed reliable deviation from those patterns predicted by the assumption of statistical independence. However, both patterns of transitions correlated quite well with each other. Accordingly, the assumption of independence provided a surprisingly good model of the results. Nevertheless, the deviation between the observed patterns of transition and that based on the assumption of independence was for all subjects in the direction of increased determinism. Thus, the results provide objective evidence consistent with the existence of "scanpaths" in the data.
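
    A small numeric illustration (mine, not data from the study) of the comparison described: under statistical independence, the probability of fixating a point of interest next equals its overall fixation proportion regardless of the current fixation, so the expected transition matrix has identical rows.

    ```python
    import numpy as np

    # Observed first-order transition matrix between three points of interest
    # (rows = current fixation, columns = next fixation; illustrative values).
    observed = np.array([[0.10, 0.60, 0.30],
                         [0.50, 0.10, 0.40],
                         [0.35, 0.45, 0.20]])

    # Stationary fixation proportions implied by the observed transitions
    evals, evecs = np.linalg.eig(observed.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()

    # Under independence, every row of the transition matrix equals these proportions
    expected = np.tile(pi, (3, 1))
    print(np.round(pi, 3))
    print(np.round(observed - expected, 3))   # deviations from the independence model
    ```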

  1. The importance of measuring growth in response to intervention models: Testing a core assumption

    PubMed Central

    Schatschneider, Christopher; Wagner, Richard K.; Crawford, Elizabeth C.

    2011-01-01

    A core assumption of response to instruction or intervention (RTI) models is the importance of measuring growth in achievement over time in response to effective instruction or intervention. Many RTI models actively monitor growth for identifying individuals who need different levels of intervention. A large-scale (N=23,438), two-year longitudinal study of first grade children was carried out to compare the predictive validity of measures of achievement status, growth in achievement, and their combination for predicting future reading achievement. The results indicate that under typical conditions, measures of growth do not make a contribution to prediction that is independent of measures of achievement status. These results question the validity of a core assumption of RTI models. PMID:22224065

  2. Correlating regional natural hazards for global reinsurance risk assessment

    NASA Astrophysics Data System (ADS)

    Steptoe, Hamish; Maynard, Trevor; Economou, Theo; Fox, Helen; Wallace, Emily; Maisey, Paul

    2016-04-01

    Concurrent natural hazards represent an uncertainty in assessing exposure for the insurance industry. The recently implemented Solvency II Directive requires EU insurance companies to fully understand and justify their capital reserving and portfolio decisions. Lloyd's, the London insurance and reinsurance market, commissioned the Met Office to investigate the dependencies between different global extreme weather events (known to the industry as perils), and the mechanisms for these dependencies, with the aim of helping them assess their compound risk to the exposure of multiple simultaneous hazards. In this work, we base the analysis of hazard-to-hazard dependency on the interaction of different modes of global and regional climate variability. Lloyd's defined 16 key hazard regions, including Australian wildfires, flooding in China and EU windstorms, and we investigate the impact of 10 key climate modes on these areas. We develop a statistical model that facilitates rapid risk assessment whilst allowing for both temporal auto-correlation and, crucially, interdependencies between drivers. The simulator itself is built conditionally using autoregressive regression models for each driver conditional on the others. Whilst the baseline assumption within the (re)insurance industry is that different natural hazards are independent of each other, the assumption of independence of meteorological risks requires greater justification. Although our results suggest that most of the 120 hazard-hazard connections considered are likely to be independent of each other, 13 have significant dependence arising from one or more global modes of climate variability. This allows us to create a matrix of linkages describing the hazard dependency structure that Lloyd's can use to inform their understanding of risk.

  3. Independence of Masculine and Feminine Traits: Empirical Exploration of a Prevailing Assumption.

    ERIC Educational Resources Information Center

    Heilbrun, Alfred B., Jr.; Bailey, Becky A.

    1986-01-01

    Examination of self-descriptions of 1,623 undergraduates (collected in 1958-64, 1970-74, and 1977-82) failed to support the prevailing assumption that masculinity and femininity develop independently. Rather, a positive relation was found between the presence of masculine and feminine traits within the same individual even when gender and state…

  4. Transferability of Dual-Task Coordination Skills after Practice with Changing Component Tasks

    PubMed Central

    Schubert, Torsten; Liepelt, Roman; Kübler, Sebastian; Strobach, Tilo

    2017-01-01

    Recent research has demonstrated that dual-task performance with two simultaneously presented tasks can be substantially improved as a result of practice. Among other mechanisms, theories of dual-task practice relate this improvement to the acquisition of task coordination skills. These skills are assumed (1) to result from dual-task practice, but not from single-task practice, and (2) to be independent of the specific stimulus and response mappings during the practice situation and, therefore, transferable to new dual-task situations. The present study is the first that provides an elaborated test of these assumptions in a context with well-controllable practice and transfer situations. To this end, we compared the effects of dual-task and single-task practice with a visual and an auditory sensory-motor component task on the dual-task performance in a subsequent transfer session. Importantly, stimulus and stimulus-response mapping conditions in the two component tasks changed repeatedly during practice sessions, which prevents automatized stimulus-response associations from being transferred from practice to transfer. Dual-task performance was found to be improved after practice with the dual tasks in contrast to the single-task practice. These findings are consistent with the assumption that coordination skills had been acquired, which can be transferred to other dual-task situations independently of the specific stimulus and response mapping conditions of the practiced component tasks. PMID:28659844

  5. Understanding the Elementary Steps in DNA Tile-Based Self-Assembly.

    PubMed

    Jiang, Shuoxing; Hong, Fan; Hu, Huiyu; Yan, Hao; Liu, Yan

    2017-09-26

    Although many models have been developed to guide the design and implementation of DNA tile-based self-assembly systems with increasing complexity, the fundamental assumptions of the models have not been thoroughly tested. To expand the quantitative understanding of DNA tile-based self-assembly and to test the fundamental assumptions of self-assembly models, we investigated DNA tile attachment to preformed "multi-tile" arrays in real time and obtained the thermodynamic and kinetic parameters of single tile attachment in various sticky end association scenarios. With more sticky ends, tile attachment becomes more thermostable, with an approximately linear decrease in the free energy change (more negative). The total binding free energy of sticky ends is partially compromised by a sequence-independent energy penalty when tile attachment forms a constrained configuration: a "loop". The minimal loop is a 2 × 2 tetramer (Loop4). The energy penalty of loops of 4, 6, and 8 tiles was analyzed with the independent loop model assuming no interloop tension, which is generalizable to arbitrary tile configurations. More sticky ends also contribute to a faster on-rate under isothermal conditions when nucleation is the rate-limiting step. An incorrect sticky end contributes to neither the thermostability nor the kinetics. The thermodynamic and kinetic parameters of DNA tile attachment elucidated here will contribute to the future improvement and optimization of tile assembly modeling, precise control of experimental conditions, and structural design for error-free self-assembly.

  6. Independent Component Analysis-motivated Approach to Classificatory Decomposition of Cortical Evoked Potentials

    PubMed Central

    Smolinski, Tomasz G; Buchanan, Roger; Boratyn, Grzegorz M; Milanova, Mariofanna; Prinz, Astrid A

    2006-01-01

    Background: Independent Component Analysis (ICA) proves to be useful in the analysis of neural activity, as it allows for identification of distinct sources of activity. Applied to measurements registered in a controlled setting and under exposure to an external stimulus, it can facilitate analysis of the impact of the stimulus on those sources. The link between the stimulus and a given source can be verified by a classifier that is able to "predict" the condition a given signal was registered under, solely based on the components. However, the ICA's assumption about statistical independence of sources is often unrealistic and turns out to be insufficient to build an accurate classifier. Therefore, we propose to utilize a novel method, based on hybridization of ICA, multi-objective evolutionary algorithms (MOEA), and rough sets (RS), that attempts to improve the effectiveness of signal decomposition techniques by providing them with "classification-awareness." Results: The preliminary results described here are very promising and further investigation of other MOEAs and/or RS-based classification accuracy measures should be pursued. Even a quick visual analysis of those results can provide an interesting insight into the problem of neural activity analysis. Conclusion: We present a methodology of classificatory decomposition of signals. One of the main advantages of our approach is the fact that rather than solely relying on often unrealistic assumptions about statistical independence of sources, components are generated in the light of an underlying classification problem itself. PMID:17118151

  7. A modified algorithm for continuous wave near infrared spectroscopy applied to in-vivo animal experiments and on human skin

    NASA Astrophysics Data System (ADS)

    Klaessens, John H. G. M.; Hopman, Jeroen C. W.; Liem, K. Djien; de Roode, Rowland; Verdaasdonk, Rudolf M.; Thijssen, Johan M.

    2008-02-01

Continuous wave near infrared spectroscopy is a well-known non-invasive technique for measuring changes in tissue oxygenation. Absorption changes (ΔO2Hb and ΔHHb) are calculated from the light attenuations using the modified Lambert-Beer equation. Generally, the concentration changes are calculated relative to the concentration at a starting point in time (delta time method). It is also possible, under certain assumptions, to calculate the concentrations by subtracting the equations at different wavelengths (delta wavelength method). We derived a new algorithm and will show the possibilities and limitations. In the delta wavelength method, the assumption is that the oxygen-independent attenuation term will be eliminated from the formula even if its value changes in time. We verified the results with the classical delta time method using extinction coefficients from different literature sources for the wavelengths 767 nm, 850 nm, and 905 nm. The different methods of calculating concentration changes were applied to the data collected from animal experiments. The animals (lambs) were in a stable normoxic condition; stepwise they were made hypoxic and thereafter they returned to normoxic condition. The two algorithms were also applied for measuring two-dimensional blood oxygen saturation changes in human skin tissue. The different oxygen saturation levels were induced by alterations in the respiration and by temporary arm clamping. The new delta wavelength method yielded, in a steady-state measurement, the same changes in oxy- and deoxyhemoglobin as the classical delta time method. The advantage of the new method is its independence of possible variations of the oxygen-independent attenuations in time.
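
    A minimal sketch of the classical delta time inversion of the modified Lambert-Beer equation at two wavelengths, assuming known extinction coefficients, source-detector distance and differential pathlength factor; all numbers below are illustrative placeholders, not values from the paper.

    ```python
    import numpy as np

    # Modified Lambert-Beer: dA(lambda) = [eps_O2Hb(lambda)*dC_O2Hb + eps_HHb(lambda)*dC_HHb] * d * DPF
    eps = np.array([[1.2, 0.7],    # placeholder extinction coefficients near 767 nm: [O2Hb, HHb]
                    [1.0, 0.8]])   # placeholder extinction coefficients near 850 nm
    d, dpf = 3.0, 4.0              # source-detector distance (cm) and differential pathlength factor

    def concentration_changes(delta_attenuation):
        """Solve the 2x2 system for (dC_O2Hb, dC_HHb) given attenuation changes at the two wavelengths."""
        return np.linalg.solve(eps * d * dpf, delta_attenuation)

    print(concentration_changes(np.array([0.05, 0.04])))
    ```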

  8. Failure to pop out: Feature singletons do not capture attention under low signal-to-noise ratio conditions.

    PubMed

    Rangelov, Dragan; Müller, Hermann J; Zehetleitner, Michael

    2017-05-01

    Pop-out search implies that the target is always the first item selected, no matter how many distractors are presented. However, increasing evidence indicates that search is not entirely independent of display density even for pop-out targets: search is slower with sparse (few distractors) than with dense displays (many distractors). Despite its significance, the cause of this anomaly remains unclear. We investigated several mechanisms that could slow down search for pop-out targets. Consistent with the assumption that pop-out targets frequently fail to pop out in sparse displays, we observed greater variability of search duration for sparse displays relative to dense. Computational modeling of the response time distributions also supported the view that pop-out targets fail to pop out in sparse displays. Our findings strongly question the classical assumption that early processing of pop-out targets is independent of the distractors. Rather, the density of distractors critically influences whether or not a stimulus pops out. These results call for new, more reliable measures of pop-out search and potentially a reinterpretation of studies that used relatively sparse displays. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Bayesian network representing system dynamics in risk analysis of nuclear systems

    NASA Astrophysics Data System (ADS)

    Varuttamaseni, Athi

    2011-12-01

A dynamic Bayesian network (DBN) model is used in conjunction with the alternating conditional expectation (ACE) regression method to analyze the risk associated with the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed operation in the Zion-1 nuclear power plant. The use of the DBN allows the joint probability distribution to be factorized, enabling the analysis to be done on many simpler network structures rather than on one complicated structure. The construction of the DBN model assumes conditional independence relations among certain key reactor parameters. The choice of parameters to model is based on considerations of the macroscopic balance statements governing the behavior of the reactor under a quasi-static assumption. The DBN is used to relate the peak clad temperature to a set of independent variables that are known to be important in determining the success of the feed and bleed operation. A simple linear relationship is then used to relate the clad temperature to the core damage probability. To obtain a quantitative relationship among different nodes in the DBN, surrogates of the RELAP5 reactor transient analysis code are used. These surrogates are generated by applying the ACE algorithm to output data obtained from about 50 RELAP5 cases covering a wide range of the selected independent variables. These surrogates allow important safety parameters such as the fuel clad temperature to be expressed as a function of key reactor parameters such as the coolant temperature and pressure together with important independent variables such as the scram delay time. The time-dependent core damage probability is calculated by sampling the independent variables from their probability distributions and propagating the information up through the Bayesian network to give the clad temperature. With the knowledge of the clad temperature and the assumption that the core damage probability has a one-to-one relationship to it, we have calculated the core damage probability as a function of transient time. The use of the DBN model in combination with ACE allows risk analysis to be performed with much less effort than if the analysis were done using the standard techniques.

  10. Robust discovery of genetic associations incorporating gene-environment interaction and independence.

    PubMed

    Tchetgen Tchetgen, Eric

    2011-03-01

    This article considers the detection and evaluation of genetic effects incorporating gene-environment interaction and independence. Whereas ordinary logistic regression cannot exploit the assumption of gene-environment independence, the proposed approach makes explicit use of the independence assumption to improve estimation efficiency. This method, which uses both cases and controls, fits a constrained retrospective regression in which the genetic variant plays the role of the response variable, and the disease indicator and the environmental exposure are the independent variables. The regression model constrains the association of the environmental exposure with the genetic variant among the controls to be null, thus explicitly encoding the gene-environment independence assumption, which yields substantial gain in accuracy in the evaluation of genetic effects. The proposed retrospective regression approach has several advantages. It is easy to implement with standard software, and it readily accounts for multiple environmental exposures of a polytomous or of a continuous nature, while easily incorporating extraneous covariates. Unlike the profile likelihood approach of Chatterjee and Carroll (Biometrika. 2005;92:399-418), the proposed method does not require a model for the association of a polytomous or continuous exposure with the disease outcome, and, therefore, it is agnostic to the functional form of such a model and completely robust to its possible misspecification.
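
    The constrained retrospective regression described above can be sketched as follows, assuming a binary variant G, binary disease indicator D, and a continuous exposure E; omitting the main effect of E from the model for G is one simple way to encode a null G-E association among controls. The simulated data and variable names are hypothetical, and this is a conceptual sketch rather than the author's implementation.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 2000
    G = rng.binomial(1, 0.3, n)                          # genetic variant
    E = rng.normal(size=n)                               # environmental exposure
    p_D = 1 / (1 + np.exp(-(-1.0 + 0.4 * G + 0.3 * E + 0.5 * G * E)))
    D = rng.binomial(1, p_D)                             # disease indicator

    # Retrospective model for G given (D, E): include D and D*E but no main effect of E,
    # so the G-E association among controls (D = 0) is constrained to be null.
    X = sm.add_constant(np.column_stack([D, D * E]))
    fit = sm.Logit(G, X).fit(disp=0)
    print(fit.params)                                    # [intercept, D, D*E]
    ```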

  11. Mixed conditional logistic regression for habitat selection studies.

    PubMed

    Duchesne, Thierry; Fortin, Daniel; Courbin, Nicolas

    2010-05-01

    1. Resource selection functions (RSFs) are becoming a dominant tool in habitat selection studies. RSF coefficients can be estimated with unconditional (standard) and conditional logistic regressions. While the advantage of mixed-effects models is recognized for standard logistic regression, mixed conditional logistic regression remains largely overlooked in ecological studies. 2. We demonstrate the significance of mixed conditional logistic regression for habitat selection studies. First, we use spatially explicit models to illustrate how mixed-effects RSFs can be useful in the presence of inter-individual heterogeneity in selection and when the assumption of independence from irrelevant alternatives (IIA) is violated. The IIA hypothesis states that the strength of preference for habitat type A over habitat type B does not depend on the other habitat types also available. Secondly, we demonstrate the significance of mixed-effects models to evaluate habitat selection of free-ranging bison Bison bison. 3. When movement rules were homogeneous among individuals and the IIA assumption was respected, fixed-effects RSFs adequately described habitat selection by simulated animals. In situations violating the inter-individual homogeneity and IIA assumptions, however, RSFs were best estimated with mixed-effects regressions, and fixed-effects models could even provide faulty conclusions. 4. Mixed-effects models indicate that bison did not select farmlands, but exhibited strong inter-individual variations in their response to farmlands. Less than half of the bison preferred farmlands over forests. Conversely, the fixed-effect model simply suggested an overall selection for farmlands. 5. Conditional logistic regression is recognized as a powerful approach to evaluate habitat selection when resource availability changes. This regression is increasingly used in ecological studies, but almost exclusively in the context of fixed-effects models. Fitness maximization can imply differences in trade-offs among individuals, which can yield inter-individual differences in selection and lead to departure from IIA. These situations are best modelled with mixed-effects models. Mixed-effects conditional logistic regression should become a valuable tool for ecological research.

  12. A simulation study to compare three self-controlled case series approaches: correction for violation of assumption and evaluation of bias.

    PubMed

    Hua, Wei; Sun, Guoying; Dodd, Caitlin N; Romio, Silvana A; Whitaker, Heather J; Izurieta, Hector S; Black, Steven; Sturkenboom, Miriam C J M; Davis, Robert L; Deceuninck, Genevieve; Andrews, N J

    2013-08-01

The assumption that the occurrence of an outcome event must not alter the subsequent exposure probability is critical for preserving the validity of the self-controlled case series (SCCS) method. This assumption is violated in scenarios in which the event constitutes a contraindication for exposure. In this simulation study, we compared the performance of the standard SCCS approach and two alternative approaches when the event-independent exposure assumption was violated. Using the 2009 H1N1 and seasonal influenza vaccines and Guillain-Barré syndrome as a model, we simulated a scenario in which an individual may encounter multiple unordered exposures and each exposure may be contraindicated by the occurrence of an outcome event. The degree of contraindication was varied at 0%, 50%, and 100%. The first alternative approach used only cases occurring after exposure with follow-up time starting from exposure. The second used a pseudo-likelihood method. When the event-independent exposure assumption was satisfied, the standard SCCS approach produced nearly unbiased relative incidence estimates. When this assumption was partially or completely violated, two alternative SCCS approaches could be used. While the post-exposure cases only approach could handle only one exposure, the pseudo-likelihood approach was able to correct bias for both exposures. Violation of the event-independent exposure assumption leads to an overestimation of relative incidence which could be corrected by alternative SCCS approaches. In multiple exposure situations, the pseudo-likelihood approach is optimal; the post-exposure cases only approach is limited in handling a second exposure and may introduce additional bias, thus should be used with caution. Copyright © 2013 John Wiley & Sons, Ltd.

  13. Inherent limitations of probabilistic models for protein-DNA binding specificity

    PubMed Central

    Ruan, Shuxiang

    2017-01-01

    The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
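
    The non-linear map from affinity to binding probability mentioned above can be written in a simple form; this is a hedged sketch with notation ([TF] for protein concentration, K_ref a reference dissociation constant, β = 1/kT, ε_i the per-position energy contributions) introduced here for illustration rather than taken from the paper.

    ```latex
    p_{\mathrm{bound}}(S)
      \;=\; \frac{[\mathrm{TF}]}{[\mathrm{TF}] + K_{\mathrm{ref}}\, e^{\beta \sum_i \varepsilon_i(S_i)}}
    ```

    Even when the binding energy is perfectly additive over positions, this probability is not a product of independent per-position terms, and it saturates for the highest-affinity sites at high protein concentration, which is the distortion the abstract attributes to probabilistic (position weight matrix style) models.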

  14. Assessing the viability of the independent practice of dental hygiene--a brief communication.

    PubMed

    Beach, M Miles; Shulman, Jay D; Johns, Glenna; Paas, Jeffrey C

    2007-01-01

This paper deals with the economics of the independent dental hygiene practice. Using historical data from dental practices in Cincinnati, Ohio, we developed a business model for an independent hygiene practice. We tested the sensitivity of the model to variations in key assumptions (initial capitalization, interest, employee salary, and owner's draw). We described the profitability on the basis of the breakeven point. Under the most permissive regulatory and financial environment, the practice would break even after 26 months. However, the owner would not equal the earnings of a salaried hygienist until the initial loan is paid off after 7 years. The model was not sensitive to 20 percent changes in the key assumptions. Under ideal circumstances, an independent hygiene practice could be profitable.
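
    The breakeven logic behind such a business model can be illustrated with a toy calculation; all dollar figures below are hypothetical and are not the paper's inputs.

    ```python
    # Purely illustrative monthly figures for an independent hygiene practice.
    initial_capitalization = 75_000      # start-up loan
    monthly_revenue = 9_000
    monthly_costs = 6_500                # salary, rent, supplies, interest, owner's draw

    monthly_margin = monthly_revenue - monthly_costs
    breakeven_months = initial_capitalization / monthly_margin
    print(f"break even after about {breakeven_months:.0f} months")
    ```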

  15. A Formal Framework for the Analysis of Algorithms That Recover From Loss of Separation

    NASA Technical Reports Server (NTRS)

    Butler, RIcky W.; Munoz, Cesar A.

    2008-01-01

    We present a mathematical framework for the specification and verification of state-based conflict resolution algorithms that recover from loss of separation. In particular, we propose rigorous definitions of horizontal and vertical maneuver correctness that yield horizontal and vertical separation, respectively, in a bounded amount of time. We also provide sufficient conditions for independent correctness, i.e., separation under the assumption that only one aircraft maneuvers, and for implicitly coordinated correctness, i.e., separation under the assumption that both aircraft maneuver. An important benefit of this approach is that different aircraft can execute different algorithms and implicit coordination will still be achieved, as long as they all meet the explicit criteria of the framework. Towards this end we have sought to make the criteria as general as possible. The framework presented in this paper has been formalized and mechanically verified in the Prototype Verification System (PVS).

  16. Node-Based Learning of Multiple Gaussian Graphical Models

    PubMed Central

    Mohan, Karthik; London, Palma; Fazel, Maryam; Witten, Daniela; Lee, Su-In

    2014-01-01

    We consider the problem of estimating high-dimensional Gaussian graphical models corresponding to a single set of variables under several distinct conditions. This problem is motivated by the task of recovering transcriptional regulatory networks on the basis of gene expression data containing heterogeneous samples, such as different disease states, multiple species, or different developmental stages. We assume that most aspects of the conditional dependence networks are shared, but that there are some structured differences between them. Rather than assuming that similarities and differences between networks are driven by individual edges, we take a node-based approach, which in many cases provides a more intuitive interpretation of the network differences. We consider estimation under two distinct assumptions: (1) differences between the K networks are due to individual nodes that are perturbed across conditions, or (2) similarities among the K networks are due to the presence of common hub nodes that are shared across all K networks. Using a row-column overlap norm penalty function, we formulate two convex optimization problems that correspond to these two assumptions. We solve these problems using an alternating direction method of multipliers algorithm, and we derive a set of necessary and sufficient conditions that allows us to decompose the problem into independent subproblems so that our algorithm can be scaled to high-dimensional settings. Our proposal is illustrated on synthetic data, a webpage data set, and a brain cancer gene expression data set. PMID:25309137
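
    As a baseline for the joint, node-based estimation described above, one could fit a separate sparse Gaussian graphical model per condition with the graphical lasso, which ignores the shared structure the authors' penalties exploit; the data below are hypothetical.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    # Hypothetical expression matrices, one per condition: (n_samples, n_genes).
    conditions = [rng.standard_normal((80, 20)) for _ in range(3)]

    # Naive baseline: estimate each condition's precision matrix independently.
    precisions = [GraphicalLasso(alpha=0.2).fit(X).precision_ for X in conditions]

    # Conditional dependence edges in condition 0: nonzero off-diagonal precision entries.
    edges_0 = np.abs(precisions[0]) > 1e-6
    ```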

  17. Maxwell and the normal distribution: A colored story of probability, independence, and tendency toward equilibrium

    NASA Astrophysics Data System (ADS)

    Gyenis, Balázs

    2017-02-01

    We investigate Maxwell's attempt to justify the mathematical assumptions behind his 1860 Proposition IV according to which the velocity components of colliding particles follow the normal distribution. Contrary to the commonly held view we find that his molecular collision model plays a crucial role in reaching this conclusion, and that his model assumptions also permit inference to equalization of mean kinetic energies (temperatures), which is what he intended to prove in his discredited and widely ignored Proposition VI. If we take a charitable reading of his own proof of Proposition VI then it was Maxwell, and not Boltzmann, who gave the first proof of a tendency towards equilibrium, a sort of H-theorem. We also call attention to a potential conflation of notions of probabilistic and value independence in relevant prior works of his contemporaries and of his own, and argue that this conflation might have impacted his adoption of the suspect independence assumption of Proposition IV.

  18. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.

    PubMed

    Tian, Guo-Liang; Li, Hui-Qiong

    2017-08-01

    Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap testing hypothesis methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.

  19. Planned Diversity: A Public Policy for Independent Higher Education in North Carolina.

    ERIC Educational Resources Information Center

    North Carolina Center for Independent Higher Education, Raleigh.

    Policy assumptions, concerns, and recommendations from the leaders of independent colleges and universities are offered to contribute to the development of a public policy for higher education in North Carolina. The independent sector differs from the state sector in patterns of funding and governance. Benefits provided by the independent college…

  20. Exploring super-Gaussianity toward robust information-theoretical time delay estimation.

    PubMed

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee

    2013-03-01

    Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method assuming gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competitive background noise. This paper investigates the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of Gaussian distributed source has been replaced by that of generalized Gaussian distribution that allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
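
    For reference, the generalized cross-correlation baseline mentioned above (with PHAT weighting) can be sketched as follows; this is not the authors' entropy-based estimator, and the function name and tolerance are chosen here for illustration.

    ```python
    import numpy as np

    def gcc_phat_delay(x, y, fs):
        """Estimate how much signal y lags signal x, in seconds, via PHAT-weighted cross-correlation."""
        n = len(x) + len(y)
        X, Y = np.fft.rfft(x, n=n), np.fft.rfft(y, n=n)
        R = Y * np.conj(X)
        R /= np.abs(R) + 1e-12                              # PHAT weighting: keep phase, drop magnitude
        cc = np.fft.irfft(R, n=n)
        max_shift = n // 2
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        return (np.argmax(np.abs(cc)) - max_shift) / fs
    ```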

  1. A new delay-independent condition for global robust stability of neural networks with time delays.

    PubMed

    Samli, Ruya

    2015-06-01

    This paper studies the problem of robust stability of dynamical neural networks with discrete time delays under the assumptions that the network parameters of the neural system are uncertain and norm-bounded, and the activation functions are slope-bounded. By employing the results of Lyapunov stability theory and matrix theory, new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for delayed neural networks are presented. The results reported in this paper can be easily tested by checking some special properties of symmetric matrices associated with the parameter uncertainties of neural networks. We also present a numerical example to show the effectiveness of the proposed theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Change detection in rangeland environments using Landsat MSS data: a quantitative approach

    USGS Publications Warehouse

    Johnston, David C.; Haas, Robert H.

    1985-01-01

A range forage utilization study on the Crow Creek Indian Reservation in central South Dakota provided the opportunity to use Landsat multispectral scanner (MSS) data for examining range condition trends. A procedure was developed to compare change in spectral reflectance over time for polygon areas, defined by resource type within management units. A t-test was used to evaluate changes in brightness and greenness within pastures between September 27, 1978, and September 18, 1983. The first principal component transformation from four-band MSS images for both dates was used as a measure of brightness. Greenness was measured using the second principal component transformation for both dates. Examination of the brightness data showed that the assumptions required for a valid t-test were met. The greenness data violated the assumption of independence between dates and were not used for trend comparisons. The t-values calculated from each polygon were coded into three groups: (1) those indicating significant brightness decrease, (2) those indicating significant brightness increase, and (3) those indicating no significant brightness change. Significance was determined at the 5-percent level. These results were formatted into an image, which is a preliminary product for evaluating range condition trends over a 5-year period.
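
    A hedged sketch of the per-polygon comparison described above: the first principal component of the four MSS bands serves as a brightness score on each date, and a two-sample t-test flags significant change. Computing the principal component per polygon and per date is a simplification of the procedure, and the threshold handling is illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def brightness(pixels_4band):
        """First principal component scores of an (n_pixels, 4) array of MSS band values."""
        centered = pixels_4band - pixels_4band.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centered @ vt[0]

    def classify_change(polygon_1978, polygon_1983, alpha=0.05):
        """Return -1 (significant decrease), +1 (significant increase), or 0 (no significant change)."""
        b0, b1 = brightness(polygon_1978), brightness(polygon_1983)
        _, p = stats.ttest_ind(b0, b1, equal_var=False)
        if p >= alpha:
            return 0
        return 1 if b1.mean() > b0.mean() else -1
    ```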

  3. Inferring network structure in non-normal and mixed discrete-continuous genomic data.

    PubMed

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2018-03-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. © 2017, The International Biometric Society.

  4. Inferring network structure in non-normal and mixed discrete-continuous genomic data

    PubMed Central

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2017-01-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. PMID:28437848

  5. Visiting Filmmakers: Why Bother?

    ERIC Educational Resources Information Center

    MacDonald, Scott

    1995-01-01

    Argues that visits by independent filmmakers to campus are exciting and intellectually invigorating for students and teachers, and these visits add to the cultural energy of the college. Notes that a commitment to independent cinema challenges the assumptions and the economics of conventional cinema. Discusses how much independent filmmakers are…

  6. Simulation of Autonomic Logistics System (ALS) Sortie Generation

    DTIC Science & Technology

    2003-03-01

Record excerpt (Appendix B, ANOVA assumption checks): constant variance is assessed with Breusch-Pagan chi-square statistics and independence with Durbin-Watson statistics for the Mission Capable Rate and Flying Scheduling Effectiveness models.

  7. Design Considerations for Large Computer Communication Networks,

    DTIC Science & Technology

    1976-04-01

In particular, we will discuss the last three assumptions in order to motivate some of the models to be considered in this chapter. Independence Assumption...channels. Part (a), again motivated by an earlier remark on deterministic routing, will become more accurate when we include in the model, based on fixed...hierarchical routing, then this assumption appears to be quite acceptable. Part (b) is motivated by the quite symmetrical structure of the networks considered

  8. The possible modifications of the HISSE model for pure LANDSAT agricultural data

    NASA Technical Reports Server (NTRS)

    Peters, C.

    1982-01-01

An idea, due to A. Feiveson, is presented for relaxing the assumption of class conditional independence of LANDSAT spectral measurements within the same patch (field). Theoretical arguments are given which show that any significant refinement of the model beyond Feiveson's proposal will not allow the reduction, essential to HISSE, of the pure data to patch summary statistics. A slight alteration of the new model is shown to be a reasonable approximation to the model which describes pure data elements from the same patch as jointly Gaussian with a covariance function which exhibits exponential decay with respect to spatial separation.

  9. The possible modifications of the HISSE model for pure LANDSAT agricultural data

    NASA Technical Reports Server (NTRS)

    Peters, C.

    1981-01-01

    A method for relaxing the assumption of class conditional independence of LANDSAT spectral measurements within the same patch (field) is discussed. Theoretical arguments are given which show that any significant refinement of the model beyond this proposal will not allow the reduction, essential to HISSE, of the pure data to patch summary statistics. A slight alteration of the new model is shown to be a reasonable approximation to the model which describes pure data elements from the same patch as jointly Gaussian with a covariance function which exhibits exponential decay with respect to spatial separation.

  10. Inference for an Experiment based on Repeated Majority Votes.

    DTIC Science & Technology

    1985-10-15

With the experimental setup used, if paranormal abilities do exist they may operate on the sequence as a whole, and the trials may not be independent, so...assumption in the ESP experiment.

  11. Parasites and deleterious mutations: interactions influencing the evolutionary maintenance of sex.

    PubMed

    Park, A W; Jokela, J; Michalakis, Y

    2010-05-01

    The restrictive assumptions associated with purely genetic and purely ecological mechanisms suggest that neither of the two forces, in isolation, can offer a general explanation for the evolutionary maintenance of sex. Consequently, attention has turned to pluralistic models (i.e. models that apply both ecological and genetic mechanisms). Existing research has shown that combining mutation accumulation and parasitism allows restrictive assumptions about genetic and parasite parameter values to be relaxed while still predicting the maintenance of sex. However, several empirical studies have shown that deleterious mutations and parasitism can reduce fitness to a greater extent than would be expected if the two acted independently. We show how interactions between these genetic and ecological forces can completely reverse predictions about the evolution of reproductive modes. Moreover, we demonstrate that synergistic interactions between infection and deleterious mutations can render sex evolutionarily stable even when there is antagonistic epistasis among deleterious mutations, thereby widening the conditions for the evolutionary maintenance of sex.

  12. Estimating Causal Effects with Ancestral Graph Markov Models

    PubMed Central

    Malinsky, Daniel; Spirtes, Peter

    2017-01-01

    We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent variables. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to include in the regression) to estimate a set of possible causal effects. Our approach is based on the “IDA” procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no unmeasured confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm on simulated data and demonstrate improved precision over IDA when latent variables are present. PMID:28217244

  13. X-Ray Thomson Scattering Without the Chihara Decomposition

    NASA Astrophysics Data System (ADS)

    Magyar, Rudolph; Baczewski, Andrew; Shulenburger, Luke; Hansen, Stephanie B.; Desjarlais, Michael P.; Sandia National Laboratories Collaboration

X-Ray Thomson Scattering is an important experimental technique used in dynamic compression experiments to measure the properties of warm dense matter. The fundamental property probed in these experiments is the electronic dynamic structure factor that is typically modeled using an empirical three-term decomposition (Chihara, J. Phys. F, 1987). One of the crucial assumptions of this decomposition is that the system's electrons can be either classified as bound to ions or free. This decomposition may not be accurate for materials in the warm dense regime. We present unambiguous first principles calculations of the dynamic structure factor independent of the Chihara decomposition that can be used to benchmark these assumptions. Results are generated using a finite-temperature real-time time-dependent density functional theory applied for the first time in these conditions. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  14. Achievement Patterns of Students in an Elite, Male Independent School

    ERIC Educational Resources Information Center

    Trusty, Edward Maurice, Jr.

    2009-01-01

    There is an underlying assumption that regardless of student ethnicity, socio-economic status, or any other variable, elite, independent schools by mission and design are effective at producing successful students. This would cause some to conclude that all students enrolled in elite, independent schools perform similarly on all academic measures.…

  15. The Impact of Multiple Endpoint Dependency on "Q" and "I"[superscript 2] in Meta-Analysis

    ERIC Educational Resources Information Center

    Thompson, Christopher Glen; Becker, Betsy Jane

    2014-01-01

    A common assumption in meta-analysis is that effect sizes are independent. When correlated effect sizes are analyzed using traditional univariate techniques, this assumption is violated. This research assesses the impact of dependence arising from treatment-control studies with multiple endpoints on homogeneity measures "Q" and…

  16. Causal Models with Unmeasured Variables: An Introduction to LISREL.

    ERIC Educational Resources Information Center

    Wolfle, Lee M.

    Whenever one uses ordinary least squares regression, one is making an implicit assumption that all of the independent variables have been measured without error. Such an assumption is obviously unrealistic for most social data. One approach for estimating such regression models is to measure implied coefficients between latent variables for which…

  17. Is wave-particle objectivity compatible with determinism and locality?

    PubMed

    Ionicioiu, Radu; Jennewein, Thomas; Mann, Robert B; Terno, Daniel R

    2014-09-26

    Wave-particle duality, superposition and entanglement are among the most counterintuitive features of quantum theory. Their clash with our classical expectations motivated hidden-variable (HV) theories. With the emergence of quantum technologies, we can test experimentally the predictions of quantum theory versus HV theories and put strong restrictions on their key assumptions. Here, we study an entanglement-assisted version of the quantum delayed-choice experiment and show that the extension of HV to the controlling devices only exacerbates the contradiction. We compare HV theories that satisfy the conditions of objectivity (a property of photons being either particles or waves, but not both), determinism and local independence of hidden variables with quantum mechanics. Any two of the above conditions are compatible with it. The conflict becomes manifest when all three conditions are imposed and persists for any non-zero value of entanglement. We propose an experiment to test our conclusions.

  18. Is wave–particle objectivity compatible with determinism and locality?

    PubMed Central

    Ionicioiu, Radu; Jennewein, Thomas; Mann, Robert B.; Terno, Daniel R.

    2014-01-01

    Wave–particle duality, superposition and entanglement are among the most counterintuitive features of quantum theory. Their clash with our classical expectations motivated hidden-variable (HV) theories. With the emergence of quantum technologies, we can test experimentally the predictions of quantum theory versus HV theories and put strong restrictions on their key assumptions. Here, we study an entanglement-assisted version of the quantum delayed-choice experiment and show that the extension of HV to the controlling devices only exacerbates the contradiction. We compare HV theories that satisfy the conditions of objectivity (a property of photons being either particles or waves, but not both), determinism and local independence of hidden variables with quantum mechanics. Any two of the above conditions are compatible with it. The conflict becomes manifest when all three conditions are imposed and persists for any non-zero value of entanglement. We propose an experiment to test our conclusions. PMID:25256419

  19. A contemporary view of the ventricular gradient of Wilson.

    PubMed

    Plonsey, R

    1979-10-01

    We have derived quantitative expressions for QRS, T, and QRST areas of the scalar electrocardiogram. The QRST area, or ventricular gradient, is seen to be essentially independent of the activation sequence and to reflect recovery properties of the tissue as weighted by the vector lead field of a given lead. The results are derived for uniform isotropic conditions and under the assumption that the temporal waveforms everywhere are identical except for possible variations in the duration of the plateau. However, it is noted that the results are, probably, valid under anisotropic conditions as well. The examination of ventricular gradients from epicardial and intramural leads should reflect local recovery properties and be a useful tool in study of the physiology of recovery, as well as the study of arrhythmias.

  20. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    PubMed

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Enrichment analysis in high-throughput genomics - accounting for dependency in the NULL.

    PubMed

    Gold, David L; Coombes, Kevin R; Wang, Jing; Mallick, Bani

    2007-03-01

Translating the overwhelming amount of data generated in high-throughput genomics experiments into biologically meaningful evidence, which may for example point to a series of biomarkers or hint at a relevant pathway, is a matter of great interest in bioinformatics these days. Genes showing similar experimental profiles, it is hypothesized, share biological mechanisms that if understood could provide clues to the molecular processes leading to pathological events. It is the topic of further study to learn if or how a priori information about the known genes may serve to explain coexpression. One popular method of knowledge discovery in high-throughput genomics experiments, enrichment analysis (EA), seeks to infer if an interesting collection of genes is 'enriched' for a particular set of a priori Gene Ontology (GO) classes. For the purposes of statistical testing, the conventional methods offered in EA software implicitly assume independence between the GO classes. Genes may be annotated for more than one biological classification, and therefore the resulting test statistics of enrichment between GO classes can be highly dependent if the overlapping gene sets are relatively large. There is a need to formally determine if conventional EA results are robust to the independence assumption. We derive the exact null distribution for testing enrichment of GO classes by relaxing the independence assumption using well-known statistical theory. In applications with publicly available data sets, our test results are similar to the conventional approach which assumes independence. We argue that the independence assumption is not detrimental.
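
    The conventional per-class test whose robustness is examined above is typically a hypergeometric (Fisher-type) enrichment test applied class by class as if the classes were independent; a minimal sketch with hypothetical counts follows.

    ```python
    from scipy.stats import hypergeom

    # Hypothetical counts for one GO class.
    M = 20000   # genes on the platform
    K = 400     # genes annotated to this GO class
    n = 150     # genes in the "interesting" list
    k = 12      # interesting genes annotated to this class

    # Conventional enrichment p-value: P(X >= k) under sampling without replacement,
    # computed for each class separately, ignoring overlap between classes.
    p_enrich = hypergeom.sf(k - 1, M, K, n)
    print(p_enrich)
    ```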

  2. A dynamic model of some malaria-transmitting anopheline mosquitoes of the Afrotropical region. I. Model description and sensitivity analysis

    PubMed Central

    2013-01-01

Background Most of the current biophysical models designed to address the large-scale distribution of malaria assume that transmission of the disease is independent of the vector involved. Another common assumption in these types of models is that the mortality rate of mosquitoes is constant over their life span and that their dispersion is negligible. Mosquito models are important in the prediction of malaria and hence there is a need for a realistic representation of the vectors involved. Results We construct a biophysical model including two competing species, Anopheles gambiae s.s. and Anopheles arabiensis. Sensitivity analysis highlights the importance of relative humidity and mosquito size, the initial conditions and dispersion, and a rarely used parameter, the probability of finding blood. We also show that the assumption of exponential mortality of adult mosquitoes does not match the observed data, and suggest that an age dimension can overcome this problem. Conclusions This study highlights some of the assumptions commonly used when constructing mosquito-malaria models and presents a realistic model of An. gambiae s.s. and An. arabiensis and their interaction. This new mosquito model, OMaWa, can improve our understanding of the dynamics of these vectors, which in turn can be used to understand the dynamics of malaria. PMID:23342980
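
    The contrast between the constant-mortality assumption questioned above and an age-dependent alternative can be illustrated with a toy survival calculation; the parameter values are hypothetical and are not OMaWa's.

    ```python
    import numpy as np

    age_days = np.arange(0, 41)

    # Constant daily mortality -> exponential survival (the assumption criticized in the abstract).
    mu = 0.12
    survival_exponential = np.exp(-mu * age_days)

    # Age-dependent (Gompertz-type) mortality -> survival falls off faster for older mosquitoes.
    a, b = 0.03, 0.08
    survival_age_dependent = np.exp(-(a / b) * (np.exp(b * age_days) - 1.0))
    ```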

  3. Spatial and Temporal Extrapolation of Disdrometer Size Distributions Based on a Lagrangian Trajectory Model of Falling Rain

    NASA Technical Reports Server (NTRS)

    Lane, John E.; Kasparis, Takis; Jones, W. Linwood; Metzger, Philip T.

    2009-01-01

Methodologies to improve disdrometer processing, loosely based on mathematical techniques common to the field of particle flow and fluid mechanics, are examined and tested. The inclusion of advection and vertical wind field estimates appears to produce significantly improved results in a Lagrangian hydrometeor trajectory model, in spite of very strict assumptions of noninteracting hydrometeors, constant vertical air velocity, and time-independent advection during the scan time interval. Wind field data can be extracted from each radar elevation scan by plotting and analyzing reflectivity contours over the disdrometer site and by collecting the radar radial velocity data to obtain estimates of advection. Specific regions of disdrometer spectra (drop size versus time) often exhibit strong gravitational sorting signatures, from which estimates of vertical velocity can be extracted. These independent wind field estimates become inputs and initial conditions to the Lagrangian trajectory simulation of falling hydrometeors.

  4. MODEL-INDEPENDENT LIMITS ON THE LINE-OF-SIGHT DEPTH OF CLUSTERS OF GALAXIES USING X-RAY AND SUNYAEV-ZEL'DOVICH DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahdavi, Andisheh; Chang Weihan

    2011-07-01

We derive a model-independent expression for the minimum line-of-sight extent of the hot plasma in a cluster of galaxies. The only inputs are the 1-5 keV X-ray surface brightness and the Comptonization from Sunyaev-Zel'dovich (SZ) data. No a priori assumptions regarding equilibrium or geometry are required. The method applies when the X-ray emitting material has temperatures anywhere between 0.3 keV and 20 keV and metallicities between 0 and twice solar, conditions fulfilled by nearly all intracluster plasma. Using this method, joint APEX-SZ and Chandra X-ray Observatory data on the Bullet Cluster yield a lower limit of 400 ± 56 kpc on the half-pressure depth of the main component, limiting it to being at least spherical, if not cigar-shaped primarily along the line of sight.

  5. YORP torques with 1D thermal model

    NASA Astrophysics Data System (ADS)

    Breiter, S.; Bartczak, P.; Czekaj, M.

    2010-11-01

A numerical model of the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect for objects defined in terms of a triangular mesh is described. The algorithm requires that each surface triangle can be handled independently, which implies the use of a 1D thermal model. Insolation of each triangle is determined by an optimized ray-triangle intersection search. Surface temperature is modelled with a spectral approach; imposing a quasi-periodic solution, we replace the heat conduction equation by the Helmholtz equation. Non-linear boundary conditions are handled by an iterative, fast Fourier transform based solver. The results resolve the question of the independence of the YORP effect in rotation rate from conductivity within the non-linear 1D thermal model, regardless of the accuracy issues and homogeneity assumptions. A seasonal YORP effect in attitude is revealed for objects moving on elliptic orbits when a non-linear thermal model is used.

  6. Uncertainty quantification for accident management using ACE surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varuttamaseni, A.; Lee, J. C.; Youngblood, R. W.

The alternating conditional expectation (ACE) regression method is used to generate RELAP5 surrogates which are then used to determine the distribution of the peak clad temperature (PCT) during the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed (F and B) operation in the Zion-1 nuclear power plant. The construction of the surrogates assumes conditional independence relations among key reactor parameters. The choice of parameters to model is based on the macroscopic balance statements governing the behavior of the reactor. The peak clad temperature is calculated based on the independent variables that are known to be important in determining the success of the F and B operation. The relationship between these independent variables and the plant parameters such as coolant pressure and temperature is represented by surrogates that are constructed based on 45 RELAP5 cases. The time-dependent PCT for different values of F and B parameters is calculated by sampling the independent variables from their probability distributions and propagating the information through two layers of surrogates. The results of our analysis show that the ACE surrogates are able to satisfactorily reproduce the behavior of the plant parameters even though a quasi-static assumption is primarily used in their construction. The PCT is found to be lower in cases where the F and B operation is initiated, compared to the case without F and B, regardless of the F and B parameters used. (authors)
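
    The sampling-and-propagation step described above can be sketched generically as a Monte Carlo pass through two layers of surrogate functions; the distributions and surrogate forms below are hypothetical stand-ins, not the RELAP5-fitted ACE surrogates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples = 10_000

    # Stand-in distributions for two independent variables (e.g. scram delay, F&B initiation delay).
    scram_delay = rng.normal(10.0, 2.0, n_samples)
    fb_delay = rng.uniform(5.0, 30.0, n_samples)

    # Two layers of stand-in surrogates: independent variables -> plant state -> peak clad temperature.
    def plant_state(sd, fb):
        return 550.0 + 3.0 * sd + 1.5 * fb          # proxy for coolant pressure/temperature

    def peak_clad_temperature(state):
        return 600.0 + 0.8 * (state - 550.0)        # proxy PCT response

    pct = peak_clad_temperature(plant_state(scram_delay, fb_delay))
    print(np.percentile(pct, [50, 95, 99]))         # summary of the sampled PCT distribution
    ```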

  7. Goodness-of-fit tests for open capture-recapture models

    USGS Publications Warehouse

    Pollock, K.H.; Hines, J.E.; Nichols, J.D.

    1985-01-01

    General goodness-of-fit tests for the Jolly-Seber model are proposed. These tests are based on conditional arguments using minimal sufficient statistics. The tests are shown to be of simple hypergeometric form so that a series of independent contingency table chi-square tests can be performed. The relationship of these tests to other proposed tests is discussed. This is followed by a simulation study of the power of the tests to detect departures from the assumptions of the Jolly-Seber model. Some meadow vole capture-recapture data are used to illustrate the testing procedure which has been implemented in a computer program available from the authors.
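
    One component of the "series of independent contingency table chi-square tests" can be illustrated with a single hypothetical 2 x 2 table; the counts are made up, and the partitioning of capture histories into such tables follows the paper and is not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical component table: rows = recaptured later (yes/no),
    # columns = captured on occasion i (yes/no), among animals known to be alive at i.
    table = np.array([[34, 21],
                      [16, 29]])

    chi2, p, dof, _ = chi2_contingency(table)
    print(chi2, p, dof)
    ```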

  8. Regression mixture models: Does modeling the covariance between independent variables and latent classes improve the results?

    PubMed Central

    Lamont, Andrea E.; Vermunt, Jeroen K.; Van Horn, M. Lee

    2016-01-01

    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we test the effects of violating an implicit assumption often made in these models – i.e., independent variables in the model are not directly related to latent classes. Results indicated that the major risk of failing to model the relationship between predictor and latent class was an increase in the probability of selecting additional latent classes and biased class proportions. Additionally, this study tests whether regression mixture models can detect a piecewise relationship between a predictor and outcome. Results suggest that these models are able to detect piecewise relations, but only when the relationship between the latent class and the predictor is included in model estimation. We illustrate the implications of making this assumption through a re-analysis of applied data examining heterogeneity in the effects of family resources on academic achievement. We compare previous results (which assumed no relation between independent variables and latent class) to the model where this assumption is lifted. Implications and analytic suggestions for conducting regression mixture based on these findings are noted. PMID:26881956

  9. Model-independent curvature determination with 21 cm intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Witzemann, Amadeus; Bull, Philip; Clarkson, Chris; Santos, Mario G.; Spinelli, Marta; Weltman, Amanda

    2018-06-01

    Measurements of the spatial curvature of the Universe have improved significantly in recent years, but still tend to require strong assumptions to be made about the equation of state of dark energy (DE) in order to reach sub-percent precision. When these assumptions are relaxed, strong degeneracies arise that make it hard to disentangle DE and curvature, degrading the constraints. We show that forthcoming 21 cm intensity mapping experiments such as Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) are ideally designed to carry out model-independent curvature measurements, as they can measure the clustering signal at high redshift with sufficient precision to break many of the degeneracies. We consider two different model-independent methods, based on `avoiding' the DE-dominated regime and non-parametric modelling of the DE equation of state, respectively. Our forecasts show that HIRAX will be able to improve upon current model-independent constraints by around an order of magnitude, reaching percent-level accuracy even when an arbitrary DE equation of state is assumed. In the same model-independent analysis, the sample variance limit for a similar survey is another order of magnitude better.

  10. Model-independent curvature determination with 21cm intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Witzemann, Amadeus; Bull, Philip; Clarkson, Chris; Santos, Mario G.; Spinelli, Marta; Weltman, Amanda

    2018-04-01

    Measurements of the spatial curvature of the Universe have improved significantly in recent years, but still tend to require strong assumptions to be made about the equation of state of dark energy (DE) in order to reach sub-percent precision. When these assumptions are relaxed, strong degeneracies arise that make it hard to disentangle DE and curvature, degrading the constraints. We show that forthcoming 21cm intensity mapping experiments such as HIRAX are ideally designed to carry out model-independent curvature measurements, as they can measure the clustering signal at high redshift with sufficient precision to break many of the degeneracies. We consider two different model-independent methods, based on `avoiding' the DE-dominated regime and non-parametric modelling of the DE equation of state respectively. Our forecasts show that HIRAX will be able to improve upon current model-independent constraints by around an order of magnitude, reaching percent-level accuracy even when an arbitrary DE equation of state is assumed. In the same model-independent analysis, the sample variance limit for a similar survey is another order of magnitude better.

  11. Aggressiveness as a latent personality trait of domestic dogs: Testing local independence and measurement invariance

    PubMed Central

    2017-01-01

    Studies of animal personality attempt to uncover underlying or “latent” personality traits that explain broad patterns of behaviour, often by applying latent variable statistical models (e.g., factor analysis) to multivariate data sets. Two integral, but infrequently confirmed, assumptions of latent variable models in animal personality are: i) behavioural variables are independent (i.e., uncorrelated) conditional on the latent personality traits they reflect (local independence), and ii) personality traits are associated with behavioural variables in the same way across individuals or groups of individuals (measurement invariance). We tested these assumptions using observations of aggression in four age classes (4–10 months, 10 months–3 years, 3–6 years, over 6 years) of male and female shelter dogs (N = 4,743) in 11 different contexts. A structural equation model supported the hypothesis of two positively correlated personality traits underlying aggression across contexts: aggressiveness towards people and aggressiveness towards dogs (comparative fit index: 0.96; Tucker-Lewis index: 0.95; root mean square error of approximation: 0.03). Aggression across contexts was moderately repeatable (towards people: intraclass correlation coefficient (ICC) = 0.479; towards dogs: ICC = 0.303). However, certain contexts related to aggressiveness towards people (but not dogs) shared significant residual relationships unaccounted for by latent levels of aggressiveness. Furthermore, aggressiveness towards people and dogs in different contexts interacted with sex and age. Thus, sex and age differences in displays of aggression were not simple functions of underlying aggressiveness. Our results illustrate that the robustness of traits in latent variable models must be critically assessed before making conclusions about the effects of, or factors influencing, animal personality. Our findings are of concern because inaccurate “aggressive personality” trait attributions can be costly to dogs, recipients of aggression and society in general. PMID:28854267

  12. Aggressiveness as a latent personality trait of domestic dogs: Testing local independence and measurement invariance.

    PubMed

    Goold, Conor; Newberry, Ruth C

    2017-01-01

    Studies of animal personality attempt to uncover underlying or "latent" personality traits that explain broad patterns of behaviour, often by applying latent variable statistical models (e.g., factor analysis) to multivariate data sets. Two integral, but infrequently confirmed, assumptions of latent variable models in animal personality are: i) behavioural variables are independent (i.e., uncorrelated) conditional on the latent personality traits they reflect (local independence), and ii) personality traits are associated with behavioural variables in the same way across individuals or groups of individuals (measurement invariance). We tested these assumptions using observations of aggression in four age classes (4-10 months, 10 months-3 years, 3-6 years, over 6 years) of male and female shelter dogs (N = 4,743) in 11 different contexts. A structural equation model supported the hypothesis of two positively correlated personality traits underlying aggression across contexts: aggressiveness towards people and aggressiveness towards dogs (comparative fit index: 0.96; Tucker-Lewis index: 0.95; root mean square error of approximation: 0.03). Aggression across contexts was moderately repeatable (towards people: intraclass correlation coefficient (ICC) = 0.479; towards dogs: ICC = 0.303). However, certain contexts related to aggressiveness towards people (but not dogs) shared significant residual relationships unaccounted for by latent levels of aggressiveness. Furthermore, aggressiveness towards people and dogs in different contexts interacted with sex and age. Thus, sex and age differences in displays of aggression were not simple functions of underlying aggressiveness. Our results illustrate that the robustness of traits in latent variable models must be critically assessed before making conclusions about the effects of, or factors influencing, animal personality. Our findings are of concern because inaccurate "aggressive personality" trait attributions can be costly to dogs, recipients of aggression and society in general.

  13. Maternal condition and previous reproduction interact to affect offspring sex in a wild mammal.

    PubMed

    Douhard, Mathieu; Festa-Bianchet, Marco; Pelletier, Fanie

    2016-08-01

    Trivers and Willard proposed that offspring sex ratio should vary with maternal condition when condition, meant as maternal capacity to care, has different fitness consequences for sons and daughters. In polygynous and dimorphic species, mothers in good condition should preferentially produce sons, whereas mothers in poor condition should produce more daughters. Despite its logical appeal, support for this hypothesis has been inconsistent. Sex-ratio variation may be influenced by additional factors, such as environmental conditions and previous reproduction, which are often ignored in empirical studies. We analysed 39 years of data on bighorn sheep (Ovis canadensis) that fit all the assumptions of the Trivers-Willard hypothesis. Production of sons increased with maternal condition only for mothers that weaned a son the previous year. This relationship likely reflects a mother's ability to bear the higher reproductive costs of sons. The interaction between maternal condition and previous weaning success on the probability of producing a son was independent of the positive effect of paternal reproductive success. Maternal and paternal effects accounted for similar proportions of the variance in offspring sex. Maternal reproductive history should be considered in addition to current condition in studies of sex allocation. © 2016 The Author(s).

  14. The dissociable effects of punishment and reward on motor learning.

    PubMed

    Galea, Joseph M; Mallia, Elizabeth; Rothwell, John; Diedrichsen, Jörn

    2015-04-01

    A common assumption regarding error-based motor learning (motor adaptation) in humans is that its underlying mechanism is automatic and insensitive to reward- or punishment-based feedback. Contrary to this hypothesis, we show in a double dissociation that the two have independent effects on the learning and retention components of motor adaptation. Negative feedback, whether graded or binary, accelerated learning. While it was not necessary for the negative feedback to be coupled to monetary loss, it had to be clearly related to the actual performance on the preceding movement. Positive feedback did not speed up learning, but it increased retention of the motor memory when performance feedback was withdrawn. These findings reinforce the view that independent mechanisms underpin learning and retention in motor adaptation, reject the assumption that motor adaptation is independent of motivational feedback, and raise new questions regarding the neural basis of negative and positive motivational feedback in motor learning.

  15. An experiment in software reliability: Additional analyses using data from automated replications

    NASA Technical Reports Server (NTRS)

    Dunham, Janet R.; Lauterbach, Linda A.

    1988-01-01

    A study undertaken to collect software error data of laboratory quality for use in the development of credible methods for predicting the reliability of software used in life-critical applications is summarized. The software error data reported were acquired through automated repetitive run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications to accumulate a sufficient sample size for error rate estimation. The data collected is used to confirm the results of two Boeing studies reported in NASA-CR-165836 Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378 Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that the program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. The data also raise new questions concerning the phenomenon of interacting faults.

  16. Beyond Positive Psychology?

    PubMed Central

    McNulty, James K.; Fincham, Frank D.

    2014-01-01

    The field of positive psychology rests on the assumption that certain psychological traits and processes are inherently beneficial for well-being. We review evidence that challenges this assumption. First, we review data from 4 independent longitudinal studies of marriage revealing that 4 ostensibly positive processes—forgiveness, optimistic expectations, positive thoughts, and kindness—can either benefit or harm well-being depending on the context in which they operate. Although all 4 processes predicted better relationship well-being among spouses in healthy marriages, they predicted worse relationship well-being in more troubled marriages. Then, we review evidence from other research that reveals that whether ostensibly positive psychological traits and processes benefit or harm well-being depends on the context of various noninterpersonal domains as well. Finally, we conclude by arguing that any movement to promote well-being may be most successful to the extent that it (a) examines the conditions under which the same traits and processes may promote versus threaten well-being, (b) examines both healthy and unhealthy people, (c) examines well-being over substantial periods of time, and (d) avoids labeling psychological traits and processes as positive or negative. PMID:21787036

  17. One hundred years of return period: Strengths and limitations

    NASA Astrophysics Data System (ADS)

    Volpi, E.; Fiori, A.; Grimaldi, S.; Lombardo, F.; Koutsoyiannis, D.

    2015-10-01

    One hundred years from its original definition by Fuller, the probabilistic concept of return period is widely used in hydrology as well as in other disciplines of geosciences to give an indication on critical event rareness. This concept gains its popularity, especially in engineering practice for design and risk assessment, due to its ease of use and understanding; however, return period relies on some basic assumptions that should be satisfied for a correct application of this statistical tool. Indeed, conventional frequency analysis in hydrology is performed by assuming as necessary conditions that extreme events arise from a stationary distribution and are independent of one another. The main objective of this paper is to investigate the properties of return period when the independence condition is omitted; hence, we explore how the different definitions of return period available in literature affect results of frequency analysis for processes correlated in time. We demonstrate that, for stationary processes, the independence condition is not necessary in order to apply the classical equation of return period (i.e., the inverse of exceedance probability). On the other hand, we show that the time-correlation structure of hydrological processes modifies the shape of the distribution function of which the return period represents the first moment. This implies that, in the context of time-dependent processes, the return period might not represent an exhaustive measure of the probability of failure, and that its blind application could lead to misleading results. To overcome this problem, we introduce the concept of Equivalent Return Period, which controls the probability of failure still preserving the virtue of effectively communicating the event rareness.
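
    For reference, the classical definition the abstract alludes to can be written out (this is the textbook form, my rendering rather than a quotation from the paper): for a random variable X with distribution function F_X describing, e.g., annual maxima,

    ```latex
    T(x) \;=\; \frac{\mu}{P(X > x)} \;=\; \frac{\mu}{1 - F_X(x)},
    ```

    where μ is the mean interarrival time between successive events (one year for annual maxima), so T(x) is the average time between exceedances of the level x. The paper's point is that for processes correlated in time this first-moment summary need not fully characterize the probability of failure.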

  18. Technology Advancement and the CTBT: Taking One Step Back from the Nuclear Brink

    NASA Astrophysics Data System (ADS)

    Perry, W. J.

    2016-12-01

    Technology plays a pivotal role in international nuclear security and technological advancement continues to support a path toward stability. One near-term and readily obtainable step back from the nuclear brink is the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The technology to independently verify adherence to the CTBT has matured in the 20 years since the Treaty was opened for signature. Technology has also improved the safety and reliability of the US nuclear stockpile in the absence of testing. Due to these advances over the past two decades, neither verification nor stockpile effectiveness should be an impediment to the Treaty's entry into force. Other technical and geo-political evolution in this same period has changed the perceived benefit of nuclear weapons as instruments of security. Recognizing the change technology has brought to deliberations of nuclear security, nations are encouraged to take this one step away from instability. This presentation will reflect on the history and assumptions that have been used to justify the build-up and configuration of nuclear stockpiles, the changes in technology and conditions that alter the basis of these original assumptions, and the re-analysis of security using current and future assumptions that point to the need for revised nuclear policies. The author has a unique and well-informed perspective as both the most senior US Defense Official and a technologist.

  19. Application of stochastic differential geometry to the term structure of interest rates in developed markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taranenko, Y.; Barnes, C.

    1996-12-31

    This paper deals with further developments of the new theory that applies stochastic differential geometry (SDG) to the dynamics of interest rates. We examine mathematical constraints on the evolution of interest rate volatilities that arise from stochastic differential calculus under assumptions of an arbitrage-free evolution of zero-coupon bonds and developed markets (i.e., no single party or factor can drive the whole market). The resulting new theory incorporates the Heath-Jarrow-Morton (HJM) model of interest rates and provides new equations for volatilities, which makes the system of equations for interest rates and volatilities complete and self-consistent. It results in a much smaller amount of volatility data that should be guessed for the SDG model as compared to the HJM model. Limited analysis of the market volatility data suggests that the assumption of the developed market is violated around a maturity of two years. Such maturities where the assumptions of the SDG model are violated are suggested to serve as boundaries at which volatilities should be specified independently from the model. Our numerical example with two boundaries (two years and five years) qualitatively resembles the market behavior. Under some conditions, solutions of the SDG model become singular, which may indicate market crashes. More detailed comparison with the data is needed before the theory can be established or refuted.

  20. Relationship between the Uncompensated Price Elasticity and the Income Elasticity of Demand under Conditions of Additive Preferences

    PubMed Central

    Sabatelli, Lorenzo

    2016-01-01

    Income and price elasticity of demand quantify the responsiveness of markets to changes in income and in prices, respectively. Under the assumptions of utility maximization and preference independence (additive preferences), mathematical relationships between income elasticity values and the uncompensated own and cross price elasticity of demand are here derived using the differential approach to demand analysis. Key parameters are: the elasticity of the marginal utility of income, and the average budget share. The proposed method can be used to forecast the direct and indirect impact of price changes and of financial instruments of policy using available estimates of the income elasticity of demand. PMID:26999511

  1. Relationship between the Uncompensated Price Elasticity and the Income Elasticity of Demand under Conditions of Additive Preferences.

    PubMed

    Sabatelli, Lorenzo

    2016-01-01

    Income and price elasticity of demand quantify the responsiveness of markets to changes in income and in prices, respectively. Under the assumptions of utility maximization and preference independence (additive preferences), mathematical relationships between income elasticity values and the uncompensated own and cross price elasticity of demand are here derived using the differential approach to demand analysis. Key parameters are: the elasticity of the marginal utility of income, and the average budget share. The proposed method can be used to forecast the direct and indirect impact of price changes and of financial instruments of policy using available estimates of the income elasticity of demand.
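
    To make the kind of relationship described here concrete, one standard Frisch-type expression for additive (want-independent) preferences can be written down (this is my addition for orientation, using textbook notation; it may differ in detail from the authors' derivation):

    ```latex
    e_{ij} \;=\; -\,\eta_i\, w_j \left(1 + \frac{\eta_j}{\phi}\right) \;+\; \delta_{ij}\, \frac{\eta_i}{\phi},
    ```

    where e_ij is the uncompensated elasticity of demand for good i with respect to the price of good j, η_i and η_j are income elasticities, w_j is the budget share of good j, φ is the elasticity of the marginal utility of income (the Frisch parameter), and δ_ij is the Kronecker delta. Given estimates of the income elasticities, the budget shares and φ, the full matrix of price responses follows directly, which is the forecasting use suggested in the abstract.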

  2. Transient response to three-phase faults on a wind turbine generator. Ph.D. Thesis - Toledo Univ.

    NASA Technical Reports Server (NTRS)

    Gilbert, L. J.

    1978-01-01

    In order to obtain a measure of its responses to short circuits, a large horizontal-axis wind turbine generator was modeled and its performance was simulated on a digital computer. Simulation of short circuit faults on the synchronous alternator of a wind turbine generator, without resort to the classical assumptions generally made for that analysis, indicates that maximum clearing times for the system tied to an infinite bus are longer than the typical clearing times for equivalent capacity conventional machines. Also, maximum clearing times are independent of tower shadow and wind shear. Variation of circuit conditions produces the modifications in the transient response predicted by analysis.

  3. Multivariate Bayesian analysis of Gaussian, right censored Gaussian, ordered categorical and binary traits using Gibbs sampling

    PubMed Central

    Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just

    2003-01-01

    A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
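
    A minimal sketch of the data-augmentation step mentioned here (my illustration, not the authors' code; the helper below is hypothetical): for a binary trait in the threshold model, the latent liability is drawn from its fully conditional posterior, which is a normal distribution truncated at the threshold, here taken to be zero.

    ```python
    import numpy as np
    from scipy.stats import truncnorm

    def sample_liability(mu, sd, y):
        """Draw a latent liability given its fully conditional mean mu and SD sd.

        y = 1 restricts the draw to (0, inf); y = 0 restricts it to (-inf, 0],
        mirroring the threshold model for a binary trait."""
        if y == 1:
            a, b = (0.0 - mu) / sd, np.inf      # bounds in standardized units
        else:
            a, b = -np.inf, (0.0 - mu) / sd
        return truncnorm.rvs(a, b, loc=mu, scale=sd)

    print(sample_liability(mu=0.3, sd=1.0, y=1))  # always positive
    ```

    Within a Gibbs sampler, such draws alternate with updates of the location parameters and of the residual covariance matrix from its conditional inverse Wishart distribution, as described in the abstract.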

  4. Halo-independent direct detection analyses without mass assumptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Adam J.; Fox, Patrick J.; Kahn, Yonatan

    2015-10-01

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the m_χ–σ_n plane. Recently methods which are independent of the DM halo velocity distribution have been developed which present results in the v_min–g̃ plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from v_min to nuclear recoil momentum (p_R), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call h̃(p_R). The entire family of conventional halo-independent g̃(v_min) plots for all DM masses are directly found from the single h̃(p_R) plot through a simple rescaling of axes. By considering results in h̃(p_R) space, one can determine if two experiments are inconsistent for all masses and all physically possible halos, or for what range of dark matter masses the results are inconsistent for all halos, without the necessity of multiple g̃(v_min) plots for different DM masses. We conduct a sample analysis comparing the CDMS II Si events to the null results from LUX, XENON10, and SuperCDMS using our method and discuss how the results can be strengthened by imposing the physically reasonable requirement of a finite halo escape velocity.

  5. Halo-independent direct detection analyses without mass assumptions

    DOE PAGES

    Anderson, Adam J.; Fox, Patrick J.; Kahn, Yonatan; ...

    2015-10-06

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the m_χ–σ_n plane. Recently methods which are independent of the DM halo velocity distribution have been developed which present results in the v_min–g̃ plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from v_min to nuclear recoil momentum (p_R), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call h̃(p_R). The entire family of conventional halo-independent g̃(v_min) plots for all DM masses are directly found from the single h̃(p_R) plot through a simple rescaling of axes. By considering results in h̃(p_R) space, one can determine if two experiments are inconsistent for all masses and all physically possible halos, or for what range of dark matter masses the results are inconsistent for all halos, without the necessity of multiple g̃(v_min) plots for different DM masses. As a result, we conduct a sample analysis comparing the CDMS II Si events to the null results from LUX, XENON10, and SuperCDMS using our method and discuss how the results can be strengthened by imposing the physically reasonable requirement of a finite halo escape velocity.
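
    The rescaling of axes mentioned here follows from the standard kinematic relation for elastic scattering (writing it out is my addition, not a quotation from the abstract): for a dark matter particle of mass m_χ scattering off a nucleus of mass m_N with recoil energy E_R,

    ```latex
    p_R \;=\; \sqrt{2\, m_N E_R}, \qquad
    v_{\min} \;=\; \frac{p_R}{2\,\mu_{\chi N}}, \qquad
    \mu_{\chi N} \;=\; \frac{m_\chi m_N}{m_\chi + m_N},
    ```

    so a single h̃(p_R) curve maps onto the g̃(v_min) curve for any assumed m_χ by a linear stretch of the horizontal axis.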

  6. Separating intrinsic from extrinsic fluctuations in dynamic biological systems

    PubMed Central

    Paulsson, Johan

    2011-01-01

    From molecules in cells to organisms in ecosystems, biological populations fluctuate due to the intrinsic randomness of individual events and the extrinsic influence of changing environments. The combined effect is often too complex for effective analysis, and many studies therefore make simplifying assumptions, for example ignoring either intrinsic or extrinsic effects to reduce the number of model assumptions. Here we mathematically demonstrate how two identical and independent reporters embedded in a shared fluctuating environment can be used to identify intrinsic and extrinsic noise terms, but also how these contributions are qualitatively and quantitatively different from what has been previously reported. Furthermore, we show for which classes of biological systems the noise contributions identified by dual-reporter methods correspond to the noise contributions predicted by correct stochastic models of either intrinsic or extrinsic mechanisms. We find that for broad classes of systems, the extrinsic noise from the dual-reporter method can be rigorously analyzed using models that ignore intrinsic stochasticity. In contrast, the intrinsic noise can be rigorously analyzed using models that ignore extrinsic stochasticity only under very special conditions that rarely hold in biology. Testing whether the conditions are met is rarely possible and the dual-reporter method may thus produce flawed conclusions about the properties of the system, particularly about the intrinsic noise. Our results contribute toward establishing a rigorous framework to analyze dynamically fluctuating biological systems. PMID:21730172

  7. Separating intrinsic from extrinsic fluctuations in dynamic biological systems.

    PubMed

    Hilfinger, Andreas; Paulsson, Johan

    2011-07-19

    From molecules in cells to organisms in ecosystems, biological populations fluctuate due to the intrinsic randomness of individual events and the extrinsic influence of changing environments. The combined effect is often too complex for effective analysis, and many studies therefore make simplifying assumptions, for example ignoring either intrinsic or extrinsic effects to reduce the number of model assumptions. Here we mathematically demonstrate how two identical and independent reporters embedded in a shared fluctuating environment can be used to identify intrinsic and extrinsic noise terms, but also how these contributions are qualitatively and quantitatively different from what has been previously reported. Furthermore, we show for which classes of biological systems the noise contributions identified by dual-reporter methods correspond to the noise contributions predicted by correct stochastic models of either intrinsic or extrinsic mechanisms. We find that for broad classes of systems, the extrinsic noise from the dual-reporter method can be rigorously analyzed using models that ignore intrinsic stochasticity. In contrast, the intrinsic noise can be rigorously analyzed using models that ignore extrinsic stochasticity only under very special conditions that rarely hold in biology. Testing whether the conditions are met is rarely possible and the dual-reporter method may thus produce flawed conclusions about the properties of the system, particularly about the intrinsic noise. Our results contribute toward establishing a rigorous framework to analyze dynamically fluctuating biological systems.
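
    To make the dual-reporter construction concrete, the sketch below implements the widely used moment estimators for intrinsic and extrinsic noise (my illustration using the standard Elowitz/Swain-style formulas; the paper's point is precisely that such estimates need careful interpretation, so treat this as the conventional baseline rather than the corrected decomposition derived in the article):

    ```python
    import numpy as np

    def dual_reporter_noise(x1, x2):
        """Moment-based intrinsic/extrinsic noise estimates (squared CVs) from two
        identical, independently expressed reporters x1, x2 measured in the same cells."""
        x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
        m1, m2 = x1.mean(), x2.mean()
        eta_int2 = np.mean((x1 - x2) ** 2) / (2 * m1 * m2)              # intrinsic
        eta_ext2 = (np.mean(x1 * x2) - m1 * m2) / (m1 * m2)             # extrinsic
        eta_tot2 = (np.mean(x1**2 + x2**2) / 2 - m1 * m2) / (m1 * m2)   # total = int + ext
        return eta_int2, eta_ext2, eta_tot2

    x1 = np.array([100.0, 120.0, 95.0, 110.0])   # toy reporter measurements
    x2 = np.array([105.0, 118.0, 99.0, 108.0])
    print(dual_reporter_noise(x1, x2))
    ```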

  8. Quantifying Square Membrane Wrinkle Behavior Using MITC Shell Elements

    NASA Technical Reports Server (NTRS)

    Jacobson, Mindy B.; Iwasa, Takashi; Natori, M. C.

    2004-01-01

    For future membrane-based structures, quantified predictions of membrane wrinkling behavior in terms of amplitude, angle and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made while using finite elements. Specifically, this work demonstrates that critical assumptions include: effects of gravity, supposed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 square meter membrane is treated as a structural material with non-negligible bending stiffness. Mixed Interpolation of Tensorial Components (MITC) shell elements are used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thickness in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that relationships between overall strain energy density for cases with differing initial conditions are independent of assumed initial conditions. In addition, analysis results indicate that the relationship between amplitude scale (W/t) and structural scale (L/t) is linear in the presence of a gravity field.

  9. Replicates in high dimensions, with applications to latent variable graphical models.

    PubMed

    Tan, Kean Ming; Ning, Yang; Witten, Daniela M; Liu, Han

    2016-12-01

    In classical statistics, much thought has been put into experimental design and data collection. In the high-dimensional setting, however, experimental design has been less of a focus. In this paper, we stress the importance of collecting multiple replicates for each subject in this setting. We consider learning the structure of a graphical model with latent variables, under the assumption that these variables take a constant value across replicates within each subject. By collecting multiple replicates for each subject, we are able to estimate the conditional dependence relationships among the observed variables given the latent variables. To test the null hypothesis of conditional independence between two observed variables, we propose a pairwise decorrelated score test. Theoretical guarantees are established for parameter estimation and for this test. We show that our proposal is able to estimate latent variable graphical models more accurately than some existing proposals, and apply the proposed method to a brain imaging dataset.

  10. Performance on perceptual word identification is mediated by discrete states.

    PubMed

    Swagman, April R; Province, Jordan M; Rouder, Jeffrey N

    2015-02-01

    We contrast predictions from discrete-state models of all-or-none information loss with signal-detection models of graded strength for the identification of briefly flashed English words. Previous assessments have focused on whether ROC curves are straight or not, which is a test of a discrete-state model where detection leads to the highest confidence response with certainty. We along with many others argue this certainty assumption is too constraining, and, consequently, the straight-line ROC test is too stringent. Instead, we assess a core property of discrete-state models, conditional independence, where the pattern of responses depends only on which state is entered. The conditional independence property implies that confidence ratings are a mixture of detect and guess state responses, and that stimulus strength factors, the duration of the flashed word in this report, affect only the probability of entering a state and not responses conditional on a state. To assess this mixture property, 50 participants saw words presented briefly on a computer screen at three variable flash durations followed by either a two-alternative confidence ratings task or a yes-no confidence ratings task. Comparable discrete-state and signal-detection models were fit to the data for each participant and task. The discrete-state models outperformed the signal detection models for 90 % of participants in the two-alternative task and for 68 % of participants in the yes-no task. We conclude discrete-state models are viable for predicting performance across stimulus conditions in a perceptual word identification task.
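
    The conditional-independence (mixture) property being tested can be stated compactly (my paraphrase of the standard discrete-state assumption, not a formula quoted from the paper): with d_s the probability of entering the detect state at flash duration s,

    ```latex
    P(\text{response} = r \mid s) \;=\; d_s\, P_D(r) \;+\; (1 - d_s)\, P_G(r),
    ```

    i.e., the stimulus-strength factor s moves only the mixing weight d_s, while the detect-state and guess-state response distributions P_D and P_G stay fixed across durations.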

  11. On the Relationship between Memory and Perception: Sequential Dependencies in Recognition Memory Testing

    ERIC Educational Resources Information Center

    Malmberg, Kenneth J.; Annis, Jeffrey

    2012-01-01

    Many models of recognition are derived from models originally applied to perception tasks, which assume that decisions from trial to trial are independent. While the independence assumption is violated for many perception tasks, we present the results of several experiments intended to relate memory and perception by exploring sequential…

  12. A Laboratory Study on the Reliability Estimations of the Mini-CEX

    ERIC Educational Resources Information Center

    de Lima, Alberto Alves; Conde, Diego; Costabel, Juan; Corso, Juan; Van der Vleuten, Cees

    2013-01-01

    Reliability estimations of workplace-based assessments with the mini-CEX are typically based on real-life data. Estimations are based on the assumption of local independence: the object of the measurement should not be influenced by the measurement itself and samples should be completely independent. This is difficult to achieve. Furthermore, the…

  13. An Extension of IRT-Based Equating to the Dichotomous Testlet Response Theory Model

    ERIC Educational Resources Information Center

    Tao, Wei; Cao, Yi

    2016-01-01

    Current procedures for equating number-correct scores using traditional item response theory (IRT) methods assume local independence. However, when tests are constructed using testlets, one concern is the violation of the local item independence assumption. The testlet response theory (TRT) model is one way to accommodate local item dependence.…

  14. Variants of Independence in the Perception of Facial Identity and Expression

    ERIC Educational Resources Information Center

    Fitousi, Daniel; Wenger, Michael J.

    2013-01-01

    A prominent theory in the face perception literature--the parallel-route hypothesis (Bruce & Young, 1986)--assumes a dedicated channel for the processing of identity that is separate and independent from the channel(s) in which nonidentity information is processed (e.g., expression, eye gaze). The current work subjected this assumption to…

  15. On the joint bimodality of temperature and moisture near stratocumulus cloud tops

    NASA Technical Reports Server (NTRS)

    Randall, D. A.

    1983-01-01

    The observed distributions of the thermodynamic variables near stratocumulus top are highly bimodal. Two simple models of sub-grid fractional cloudiness motivated by this observed bimodality are examined. In both models, certain low-order moments of two independent, moist-conservative thermodynamic variables are assumed to be known. The first model is based on the assumption of two discrete populations of parcels: a warm-dry population and a cool-moist population. If only the first and second moments are assumed to be known, the number of unknowns exceeds the number of independent equations. If the third moments are assumed to be known as well, the number of independent equations exceeds the number of unknowns. The second model is based on the assumption of a continuous joint bimodal distribution of parcels, obtained as the weighted sum of two binormal distributions. For this model, the third moments are used to obtain 9 independent nonlinear algebraic equations in 11 unknowns. Two additional equations are needed to determine the covariances within the two subpopulations. If these two internal covariances vanish, the system of equations can be solved analytically.

  16. Notes on SAW Tag Interrogation Techniques

    NASA Technical Reports Server (NTRS)

    Barton, Richard J.

    2010-01-01

    We consider the problem of interrogating a single SAW RFID tag with a known ID and known range in the presence of multiple interfering tags under the following assumptions: (1) The RF propagation environment is well approximated as a simple delay channel with geometric power-decay constant α ≥ 2. (2) The interfering tag IDs are unknown but well approximated as independent, identically distributed random samples from a probability distribution of tag ID waveforms with known second-order properties, and the tag of interest is drawn independently from the same distribution. (3) The ranges of the interfering tags are unknown but well approximated as independent, identically distributed realizations of a random variable ρ with a known probability distribution f_ρ, and the tag ranges are independent of the tag ID waveforms. In particular, we model the tag waveforms as random impulse responses from a wide-sense-stationary, uncorrelated-scattering (WSSUS) fading channel with known bandwidth and scattering function. A brief discussion of the properties of such channels and the notation used to describe them in this document is given in the Appendix. Under these assumptions, we derive the expression for the output signal-to-noise ratio (SNR) for an arbitrary combination of transmitted interrogation signal and linear receiver filter. Based on this expression, we derive the optimal interrogator configuration (i.e., transmitted signal/receiver filter combination) in the two extreme noise/interference regimes, i.e., noise-limited and interference-limited, under the additional assumption that the coherence bandwidth of the tags is much smaller than the total tag bandwidth. Finally, we evaluate the performance of both optimal interrogators over a broad range of operating scenarios using both numerical simulation based on the assumed model and Monte Carlo simulation based on a small sample of measured tag waveforms. The performance evaluation results not only provide guidelines for proper interrogator design, but also provide some insight on the validity of the assumed signal model. It should be noted that the assumption that the impulse response of the tag of interest is known precisely implies that the temperature and range of the tag are also known precisely, which is generally not the case in practice. However, analyzing interrogator performance under this simplifying assumption is much more straightforward and still provides a great deal of insight into the nature of the problem.

  17. Improving Global Models of Remotely Sensed Ocean Chlorophyll Content Using Partial Least Squares and Geographically Weighted Regression

    NASA Astrophysics Data System (ADS)

    Gholizadeh, H.; Robeson, S. M.

    2015-12-01

    Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band ratio ocean color (OC) algorithms are in the form of fourth-order polynomials and the parameters of these polynomials (i.e. coefficients) are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions. However, polynomial coefficients obtained from this data set are used to estimate chlorophyll content in all ocean regions with different properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global approaches and are based on the spatial stationarity assumption, so they are independent of location. The multicollinearity problem is resolved by using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered as a combined form of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of the spatial stationarity assumption. GWR solves a regression model over each sample point by using the observations within its neighbourhood. PLS results show that the empirical method underestimates chlorophyll content in high latitudes, including the Southern Ocean region, when compared to PLS. Cluster analysis of GWR coefficients also shows that the spatial stationarity assumption in empirical models is not likely a valid assumption.
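
    As a minimal sketch of the first step described here (illustrative only; the synthetic data, variable names and number of components are assumptions, not details from the study), partial least squares replaces the collinear polynomial band-ratio terms with a small set of orthogonal latent components:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Stand-in for the log band-ratio predictor and its collinear polynomial powers.
    rng = np.random.default_rng(1)
    ratio = rng.uniform(-0.5, 0.5, size=500)               # e.g. log10(Rrs_blue / Rrs_green)
    X = np.column_stack([ratio**k for k in range(1, 5)])   # highly correlated polynomial terms
    y = -0.4 + 2.0 * ratio + 0.05 * rng.normal(size=500)   # synthetic log10 chlorophyll response

    pls = PLSRegression(n_components=2)                    # orthogonal latent components
    pls.fit(X, y)
    y_hat = pls.predict(X).ravel()                         # fitted values from the PLS components
    print("R^2:", pls.score(X, y))
    ```

    A geographically weighted variant would then refit such a model within the neighbourhood of each sample point, which is the second step described in the abstract.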

  18. Slipping Anchor? Testing the Vignettes Approach to Identification and Correction of Reporting Heterogeneity

    PubMed Central

    d’Uva, Teresa Bago; Lindeboom, Maarten; O’Donnell, Owen; van Doorslaer, Eddy

    2011-01-01

    We propose tests of the two assumptions under which anchoring vignettes identify heterogeneity in reporting of categorical evaluations. Systematic variation in the perceived difference between any two vignette states is sufficient to reject vignette equivalence. Response consistency – the respondent uses the same response scale to evaluate the vignette and herself – is testable given sufficiently comprehensive objective indicators that independently identify response scales. Both assumptions are rejected for reporting of cognitive and physical functioning in a sample of older English individuals, although a weaker test resting on less stringent assumptions does not reject response consistency for cognition. PMID:22184479

  19. Thresholds of understanding: Exploring assumptions of scale invariance vs. scale dependence in global biogeochemical models

    NASA Astrophysics Data System (ADS)

    Wieder, W. R.; Bradford, M.; Koven, C.; Talbot, J. M.; Wood, S.; Chadwick, O.

    2016-12-01

    High uncertainty and low confidence in terrestrial carbon (C) cycle projections reflect the incomplete understanding of how best to represent biologically-driven C cycle processes at global scales. Ecosystem theories, and consequently biogeochemical models, are based on the assumption that different belowground communities function similarly and interact with the abiotic environment in consistent ways. This assumption of "Scale Invariance" posits that environmental conditions will change the rate of ecosystem processes, but the biotic response will be consistent across sites. Indeed, cross-site comparisons and global-scale analyses suggest that climate strongly controls rates of litter mass loss and soil organic matter turnover. Alternatively, activities of belowground communities are shaped by particular local environmental conditions, such as climate and edaphic conditions. Under this assumption of "Scale Dependence", relationships generated by evolutionary trade-offs in acquiring resources and withstanding environmental stress dictate the activities of belowground communities and their functional response to environmental change. Similarly, local edaphic conditions (e.g. permafrost soils or reactive minerals that physicochemically stabilize soil organic matter on mineral surfaces) may strongly constrain the availability of substrates that biota decompose—altering the trajectory of soil biogeochemical response to perturbations. Identifying when scale invariant assumptions hold vs. where local variation in biotic communities or edaphic conditions must be considered is critical to advancing our understanding and representation of belowground processes in the face of environmental change. Here we introduce data sets that support assumptions of scale invariance and scale dependent processes and discuss their application in global-scale biogeochemical models. We identify particular domains over which assumptions of scale invariance may be appropriate and potential thresholds where shifts in ecosystem function may be expected. Finally, we discuss the mechanistic insight that can be applied in process-based models and datasets that can evaluate models across spatial and temporal scales.

  20. Global existence of strong solutions to the three- dimensional incompressible Navier-Stokes equations with special boundary conditions

    NASA Astrophysics Data System (ADS)

    Riley, Douglas A.

    We study the three-dimensional incompressible Navier-Stokes equations in a domain of the form Ω'×(0,ε). First, we assume Ω' is a C³ bounded domain and impose no-slip boundary conditions on ∂Ω'×(0,ε), and periodic conditions on Ω'×{0,ε}. Physically, this models fluid flow through a pipe with cross-section Ω' where the inlet and outlet conditions are assumed periodic. Secondly, we assume Ω' = (0,l₄)×(0,l₅) and impose periodic boundary conditions. This problem is of interest mathematically, and has been more widely considered than the pipe flow problem. For both sets of boundary conditions, we show that a strong solution exists for all time with conditions on the initial data and forcing. We start by recalling that if the forcing function and initial condition do not depend on x₃, then a global strong solution exists which also does not depend on x₃. Here (x₁,x₂,x₃) ∈ Ω ≡ Ω'×(0,ε). With this observation as motivation, and using an additive decomposition introduced by Raugel and Sell, we split the initial data and forcing into a portion independent of x₃ and a remainder. In our first result, we impose a smallness condition on the remainder and assume the forcing function is square-integrable in time as a function into L²(Ω). With these assumptions, we prove a global existence theorem that does not require a smallness condition on ε or on the portion of the initial condition and forcing independent of x₃. However, these quantities do affect the allowable size of the remainder. For our second result, we assume the forcing is only bounded in time as a function into L²(Ω). In this case, we need a smallness condition on the initial data, the forcing, and ε to obtain global existence. The interesting observation is that the allowable sizes for the initial data and forcing grow as ε → 0. Thus, we obtain a 'thin-domain' result as originally obtained by Raugel and Sell. In fact, our results allow the portion of the initial data and forcing independent of x₃ to grow at a faster rate as ε → 0 than previously obtained.
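
    The additive decomposition invoked here can be written out explicitly (my rendering of the standard thin-domain splitting; the notation is assumed rather than quoted): the x₃-independent portion is the vertical average over the thin direction and the remainder has zero vertical mean,

    ```latex
    \bar{u}(x_1,x_2) \;=\; \frac{1}{\varepsilon}\int_0^{\varepsilon} u(x_1,x_2,x_3)\, dx_3,
    \qquad
    w \;=\; u - \bar{u}, \qquad \int_0^{\varepsilon} w\, dx_3 \;=\; 0,
    ```

    with the smallness condition imposed on w (and on the analogous part of the forcing) while the averaged portion is allowed to be large.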

  1. MCMC Sampling for a Multilevel Model with Nonindependent Residuals within and between Cluster Units

    ERIC Educational Resources Information Center

    Browne, William; Goldstein, Harvey

    2010-01-01

    In this article, we discuss the effect of removing the independence assumptions between the residuals in two-level random effect models. We first consider removing the independence between the Level 2 residuals and instead assume that the vector of all residuals at the cluster level follows a general multivariate normal distribution. We…

  2. Naming Problems Do Not Reflect a Second Independent Core Deficit in Dyslexia: Double Deficits Explored

    ERIC Educational Resources Information Center

    Vaessen, Anniek; Gerretsen, Patty; Blomert, Leo

    2009-01-01

    The double deficit hypothesis states that naming speed problems represent a second core deficit in dyslexia independent from a phonological deficit. The current study investigated the main assumptions of this hypothesis in a large sample of well-diagnosed dyslexics. The three main findings were that (a) naming speed was consistently related only…

  3. A Neural Mechanism for Nonconscious Activation of Conditioned Placebo and Nocebo Responses.

    PubMed

    Jensen, Karin B; Kaptchuk, Ted J; Chen, Xiaoyan; Kirsch, Irving; Ingvar, Martin; Gollub, Randy L; Kong, Jian

    2015-10-01

    Fundamental aspects of human behavior operate outside of conscious awareness. Yet, theories of conditioned responses in humans, such as placebo and nocebo effects on pain, have a strong emphasis on conscious recognition of contextual cues that trigger the response. Here, we investigated the neural pathways involved in nonconscious activation of conditioned pain responses, using functional magnetic resonance imaging in healthy participants. Nonconscious compared with conscious activation of conditioned placebo analgesia was associated with increased activation of the orbitofrontal cortex, a structure with direct connections to affective brain regions and basic reward processing. During nonconscious nocebo, there was increased activation of the thalamus, amygdala, and hippocampus. In contrast to previous assumptions about conditioning in humans, our results show that conditioned pain responses can be elicited independently of conscious awareness and our results suggest a hierarchical activation of neural pathways for nonconscious and conscious conditioned responses. Demonstrating that the human brain has a nonconscious mechanism for responding to conditioned cues has major implications for the role of associative learning in behavioral medicine and psychiatry. Our results may also open up for novel approaches to translational animal-to-human research since human consciousness and animal cognition is an inherent paradox in all behavioral science. © The Author 2014. Published by Oxford University Press.

  4. Validation of Shared and Specific Independent Component Analysis (SSICA) for Between-Group Comparisons in fMRI

    PubMed Central

    Maneshi, Mona; Vahdat, Shahabeddin; Gotman, Jean; Grova, Christophe

    2016-01-01

    Independent component analysis (ICA) has been widely used to study functional magnetic resonance imaging (fMRI) connectivity. However, the application of ICA in multi-group designs is not straightforward. We have recently developed a new method named “shared and specific independent component analysis” (SSICA) to perform between-group comparisons in the ICA framework. SSICA is sensitive to extract those components which represent a significant difference in functional connectivity between groups or conditions, i.e., components that could be considered “specific” for a group or condition. Here, we investigated the performance of SSICA on realistic simulations, and task fMRI data and compared the results with one of the state-of-the-art group ICA approaches to infer between-group differences. We examined SSICA robustness with respect to the number of allowable extracted specific components and between-group orthogonality assumptions. Furthermore, we proposed a modified formulation of the back-reconstruction method to generate group-level t-statistics maps based on SSICA results. We also evaluated the consistency and specificity of the extracted specific components by SSICA. The results on realistic simulated and real fMRI data showed that SSICA outperforms the regular group ICA approach in terms of reconstruction and classification performance. We demonstrated that SSICA is a powerful data-driven approach to detect patterns of differences in functional connectivity across groups/conditions, particularly in model-free designs such as resting-state fMRI. Our findings in task fMRI show that SSICA confirms results of the general linear model (GLM) analysis and when combined with clustering analysis, it complements GLM findings by providing additional information regarding the reliability and specificity of networks. PMID:27729843

  5. Detector-device-independent quantum key distribution: Security analysis and fast implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boaron, Alberto; Korzh, Boris; Houlmann, Raphael

    One of the most pressing issues in quantum key distribution (QKD) is the problem of detector side-channel attacks. To overcome this problem, researchers proposed an elegant “time-reversal” QKD protocol called measurement-device-independent QKD (MDI-QKD), which is based on time-reversed entanglement swapping. But, MDI-QKD is more challenging to implement than standard point-to-point QKD. Recently, we proposed an intermediary QKD protocol called detector-device-independent QKD (DDI-QKD) in order to overcome the drawbacks of MDI-QKD, with the hope that it would eventually lead to a more efficient detector side-channel-free QKD system. We analyze the security of DDI-QKD and elucidate its security assumptions. We find that DDI-QKD is not equivalent to MDI-QKD, but its security can be demonstrated with reasonable assumptions. On the more practical side, we consider the feasibility of DDI-QKD and present a fast experimental demonstration (clocked at 625 MHz), capable of secret key exchange up to more than 90 km.

  6. A novel structure-aware sparse learning algorithm for brain imaging genetics.

    PubMed

    Du, Lei; Jingwen, Yan; Kim, Sungeun; Risacher, Shannon L; Huang, Heng; Inlow, Mark; Moore, Jason H; Saykin, Andrew J; Shen, Li

    2014-01-01

    Brain imaging genetics is an emergent research field where the association between genetic variations such as single nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is evaluated. Sparse canonical correlation analysis (SCCA) is a bi-multivariate analysis method that has the potential to reveal complex multi-SNP-multi-QT associations. Most existing SCCA algorithms are designed using the soft threshold strategy, which assumes that the features in the data are independent from each other. This independence assumption usually does not hold in imaging genetic data, and thus inevitably limits the capability of yielding optimal solutions. We propose a novel structure-aware SCCA (denoted as S2CCA) algorithm to not only eliminate the independence assumption for the input data, but also incorporate group-like structure in the model. Empirical comparison with a widely used SCCA implementation, on both simulated and real imaging genetic data, demonstrated that S2CCA could yield improved prediction performance and biologically meaningful findings.
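
    For context on the "soft threshold strategy" mentioned here, the sketch below shows the standard elementwise operator that such algorithms apply to the canonical weight vectors under the feature-independence assumption (a generic illustration, not the S2CCA algorithm itself):

    ```python
    import numpy as np

    def soft_threshold(w, lam):
        """Elementwise soft-thresholding: shrinks each coefficient toward zero by lam
        and zeroes out the small ones, treating features as if they were independent."""
        return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

    w = np.array([0.9, -0.2, 0.05, -1.3])
    print(soft_threshold(w, lam=0.25))   # large coefficients survive, small ones are set to zero
    ```

    S2CCA, as described in the abstract, replaces this independent elementwise shrinkage with a group-structured penalty so that correlated SNPs and imaging QTs can be selected jointly.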

  7. Detector-device-independent quantum key distribution: Security analysis and fast implementation

    DOE PAGES

    Boaron, Alberto; Korzh, Boris; Houlmann, Raphael; ...

    2016-08-09

    One of the most pressing issues in quantum key distribution (QKD) is the problem of detector side-channel attacks. To overcome this problem, researchers proposed an elegant “time-reversal” QKD protocol called measurement-device-independent QKD (MDI-QKD), which is based on time-reversed entanglement swapping. But, MDI-QKD is more challenging to implement than standard point-to-point QKD. Recently, we proposed an intermediary QKD protocol called detector-device-independent QKD (DDI-QKD) in order to overcome the drawbacks of MDI-QKD, with the hope that it would eventually lead to a more efficient detector side-channel-free QKD system. We analyze the security of DDI-QKD and elucidate its security assumptions. We find that DDI-QKD is not equivalent to MDI-QKD, but its security can be demonstrated with reasonable assumptions. On the more practical side, we consider the feasibility of DDI-QKD and present a fast experimental demonstration (clocked at 625 MHz), capable of secret key exchange up to more than 90 km.

  8. A complete graphical criterion for the adjustment formula in mediation analysis.

    PubMed

    Shpitser, Ilya; VanderWeele, Tyler J

    2011-03-04

    Various assumptions have been used in the literature to identify natural direct and indirect effects in mediation analysis. These effects are of interest because they allow for effect decomposition of a total effect into a direct and indirect effect even in the presence of interactions or non-linear models. In this paper, we consider the relation and interpretation of various identification assumptions in terms of causal diagrams interpreted as a set of non-parametric structural equations. We show that for such causal diagrams, two sets of assumptions for identification that have been described in the literature are in fact equivalent in the sense that if either set of assumptions holds for all models inducing a particular causal diagram, then the other set of assumptions will also hold for all models inducing that diagram. We moreover build on prior work concerning a complete graphical identification criterion for covariate adjustment for total effects to provide a complete graphical criterion for using covariate adjustment to identify natural direct and indirect effects. Finally, we show that this criterion is equivalent to the two sets of independence assumptions used previously for mediation analysis.
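
    For orientation, the adjustment (mediation) formula whose graphical conditions the paper completes is usually written as follows (standard Pearl-style notation, my rendering; whether it matches the paper's exact expression is an assumption): with treatment level a, reference level a*, mediator M and covariates C satisfying the relevant independence assumptions,

    ```latex
    \mathrm{NDE} \;=\; \sum_{c}\sum_{m}\big[\,E(Y \mid a, m, c) - E(Y \mid a^{*}, m, c)\,\big]\, P(m \mid a^{*}, c)\, P(c),
    ```

    with the natural indirect effect obtained analogously by shifting the mediator distribution from P(m | a*, c) to P(m | a, c), so that the total effect decomposes as TE = NDE + NIE.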

  9. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.

    2013-08-01

    In this study, a Bayesian-based non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in a probabilistic form, which enhances the capability of uncertainty analysis; as a consequence, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreements with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  10. Bayesian soft X-ray tomography using non-stationary Gaussian Processes.

    PubMed

    Li, Dong; Svensson, J; Thomsen, H; Medina, F; Werner, A; Wolf, R

    2013-08-01

    In this study, a Bayesian based non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium condition and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in a probability form which enhances the capability of uncertainty analysis, in consequence, scientists who concern the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreements with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  11. One dimensional heavy ion beam transport: Energy independent model. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Farhat, Hamidullah

    1990-01-01

    The transport problem for heavy ion beams in various targets is modeled, employing the current level of understanding of the physics of high-charge and energy (HZE) particle interactions with matter. An energy-independent transport model, with the most simplified assumptions and appropriate parameters, is presented. The first and essential assumption in this case (energy-independent transport) is the high-energy characterization of the incident beam. The energy-independent equation is solved and application is made to high-energy neon (NE-20) and iron (FE-56) beams in water. The analytical solution is given and compared to a numerical solution to determine the accuracy of the model. The lower energy limit for neon and iron to qualify as high-energy beams is calculated according to the Barkas and Burger theory using the LBLFRG computer program. The calculated values in the density range of interest (50 g/sq cm) of water are: 833.43 MeV/nuc for neon and 1597.68 MeV/nuc for iron. The analytical solution of the energy-independent transport equation gives the flux associated with the different collision terms. The fluxes of the individual collision terms are given, and the total fluxes are shown in graphs for different thicknesses of water. The values for the fluxes are calculated by the ANASTP computer code.
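
    As a rough illustration of the energy-independent, straight-ahead transport picture described above, the sketch below attenuates a primary beam exponentially and builds up a single fragment species from one-collision events. The cross sections and the two-species reduction are assumed values for illustration only; they are not the thesis' parameters, and this is not the ANASTP code.

      import numpy as np

      # Two-species sketch of energy-independent, straight-ahead transport:
      #   d(phi_p)/dx = -sigma_p * phi_p
      #   d(phi_f)/dx = -sigma_f * phi_f + sigma_pf * phi_p
      # All numbers below are illustrative assumptions.
      phi0 = 1.0                 # incident primary flux (normalized)
      sigma_p = 0.05             # primary removal cross section, 1/(g/cm^2) (assumed)
      sigma_f = 0.03             # fragment removal cross section (assumed)
      sigma_pf = 0.04            # production cross section primary -> fragment (assumed)

      x = np.linspace(0.0, 50.0, 11)          # depth in water, g/cm^2

      phi_primary = phi0 * np.exp(-sigma_p * x)
      # Analytical one-collision fragment flux with phi_f(0) = 0:
      phi_fragment = sigma_pf * phi0 * (np.exp(-sigma_f * x) - np.exp(-sigma_p * x)) / (sigma_p - sigma_f)

      for xi, pp, pf in zip(x, phi_primary, phi_fragment):
          print(f"x = {xi:4.0f} g/cm^2  primary = {pp:.3f}  fragment = {pf:.3f}")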

  12. Source-Device-Independent Ultrafast Quantum Random Number Generation.

    PubMed

    Marangon, Davide G; Vallone, Giuseppe; Villoresi, Paolo

    2017-02-10

    Secure random numbers are a fundamental element of many applications in science, statistics, cryptography and, more generally, in security protocols. We present a method that enables the generation of high-speed unpredictable random numbers from the quadratures of an electromagnetic field without any assumption on the input state. The method allows us to eliminate the numbers that can be predicted due to the presence of classical and quantum side information. In particular, we introduce a procedure to estimate a bound on the conditional min-entropy based on the entropic uncertainty principle for position and momentum observables of infinite dimensional quantum systems. By the above method, we experimentally demonstrated the generation of secure true random bits at a rate greater than 1.7 Gbit/s.
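
    To make the role of the min-entropy concrete, the following sketch computes the empirical min-entropy of discretized quadrature samples and the corresponding number of extractable bits per sample. It is only an illustration: the digitizer resolution and Gaussian stand-in data are assumptions, and the protocol above bounds the conditional min-entropy against quantum side information via an entropic uncertainty relation, which is not reproduced here.

      import numpy as np

      # Empirical min-entropy of discretized quadrature samples (illustrative only).
      rng = np.random.default_rng(1)
      samples = rng.normal(0.0, 1.0, size=1_000_000)   # stand-in for measured X quadrature

      bins = np.linspace(-5, 5, 257)                    # 8-bit digitizer (assumption)
      counts, _ = np.histogram(samples, bins=bins)
      p = counts / counts.sum()

      h_min = -np.log2(p.max())                         # min-entropy per 8-bit sample
      print(f"empirical min-entropy: {h_min:.2f} bits out of 8 raw bits")
      # A randomness extractor would then compress each 8-bit raw sample to roughly
      # h_min secure bits (up to finite-size corrections and side-information terms).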

  13. Is a time symmetric interpretation of quantum theory possible without retrocausality?

    NASA Astrophysics Data System (ADS)

    Leifer, Matthew S.; Pusey, Matthew F.

    2017-06-01

    Huw Price has proposed an argument that suggests a time symmetric ontology for quantum theory must necessarily be retrocausal, i.e. it must involve influences that travel backwards in time. One of Price's assumptions is that the quantum state is a state of reality. However, one of the reasons for exploring retrocausality is that it offers the potential for evading the consequences of no-go theorems, including recent proofs of the reality of the quantum state. Here, we show that this assumption can be replaced by a different assumption, called λ-mediation, that plausibly holds independently of the status of the quantum state. We also reformulate the other assumptions behind the argument to place them in a more general framework and pin down the notion of time symmetry involved more precisely. We show that our assumptions imply a timelike analogue of Bell's local causality criterion and, in doing so, give a new interpretation of timelike violations of Bell inequalities. Namely, they show the impossibility of a (non-retrocausal) time symmetric ontology.

  14. Causal discovery and inference: concepts and recent methodological advances.

    PubMed

    Spirtes, Peter; Zhang, Kun

    This paper aims to give a broad coverage of central concepts and principles involved in automated causal inference and emerging approaches to causal discovery from i.i.d. data and from time series. After reviewing concepts including manipulations, causal models, sample predictive modeling, causal predictive modeling, and structural equation models, we present the constraint-based approach to causal discovery, which relies on the conditional independence relationships in the data, and discuss the assumptions underlying its validity. We then focus on causal discovery based on structural equation models, in which a key issue is the identifiability of the causal structure implied by appropriately defined structural equation models: in the two-variable case, under what conditions (and why) is the causal direction between the two variables identifiable? We show that the independence between the error term and causes, together with appropriate structural constraints on the structural equation, makes it possible. Next, we report some recent advances in causal discovery from time series. Assuming that the causal relations are linear with non-Gaussian noise, we mention two problems which are traditionally difficult to solve, namely causal discovery from subsampled data and that in the presence of confounding time series. Finally, we list a number of open questions in the field of causal discovery and inference.
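
    The constraint-based approach mentioned above is driven entirely by conditional independence tests. The sketch below shows one commonly used test for continuous data, Fisher's z applied to a partial correlation, on a simulated chain X -> Y -> Z; the function names, significance level, and toy data are illustrative assumptions, not the paper's own software.

      import numpy as np
      from scipy import stats

      def partial_corr(data, i, j, cond):
          """Partial correlation of columns i and j given the columns in `cond`,
          from the inverse of the correlation matrix of the involved variables."""
          idx = [i, j] + list(cond)
          corr = np.corrcoef(data[:, idx], rowvar=False)
          prec = np.linalg.inv(corr)
          return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

      def ci_test(data, i, j, cond, alpha=0.01):
          """Fisher z test of X_i independent of X_j given X_cond;
          returns True if independence is not rejected at level alpha."""
          n = data.shape[0]
          r = partial_corr(data, i, j, cond)
          z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
          p_value = 2 * (1 - stats.norm.cdf(abs(z)))
          return p_value > alpha

      # Tiny chain X -> Y -> Z: X and Z should be independent given Y.
      rng = np.random.default_rng(0)
      x = rng.normal(size=5000)
      y = 0.8 * x + rng.normal(size=5000)
      z = 0.8 * y + rng.normal(size=5000)
      data = np.column_stack([x, y, z])

      print(ci_test(data, 0, 2, []))    # False: X and Z are marginally dependent
      print(ci_test(data, 0, 2, [1]))   # True: independent given Y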

  15. A rigorous multiple independent binding site model for determining cell-based equilibrium dissociation constants.

    PubMed

    Drake, Andrew W; Klakamp, Scott L

    2007-01-10

    A new 4-parameter nonlinear equation based on the standard multiple independent binding site model (MIBS) is presented for fitting cell-based ligand titration data in order to calculate the ligand/cell receptor equilibrium dissociation constant and the number of receptors/cell. The most commonly used linear (Scatchard Plot) or nonlinear 2-parameter model (a single binding site model found in commercial programs like Prism®) used for analysis of ligand/receptor binding data assumes only the K_D influences the shape of the titration curve. We demonstrate using simulated data sets that, depending upon the cell surface receptor expression level, the number of cells titrated, and the magnitude of the K_D being measured, this assumption of always being under K_D-controlled conditions can be erroneous and can lead to unreliable estimates for the binding parameters. We also compare and contrast the fitting of simulated data sets to the commonly used cell-based binding equation versus our more rigorous 4-parameter nonlinear MIBS model. It is shown through these simulations that the new 4-parameter MIBS model, when used for cell-based titrations under optimal conditions, yields highly accurate estimates of all binding parameters and hence should be the preferred model to fit cell-based experimental nonlinear titration data.
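
    The point that receptor concentration, and not K_D alone, shapes the titration curve can be illustrated with a standard ligand-depletion (quadratic) binding expression fitted by nonlinear least squares. The code below is a sketch under that assumption; it is not the authors' 4-parameter MIBS equation, and all parameter values are invented for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      def bound_depletion(L_tot, K_D, R_tot, scale, offset):
          """Bound ligand when free ligand is depleted by binding:
          standard quadratic solution of the mass-action equations (illustrative)."""
          b = L_tot + R_tot + K_D
          bound = (b - np.sqrt(b**2 - 4.0 * L_tot * R_tot)) / 2.0
          return scale * bound + offset   # signal = scale * bound + background

      # Simulate a titration where R_tot (1 nM) is comparable to K_D (0.5 nM),
      # i.e. *not* K_D-controlled conditions.
      rng = np.random.default_rng(0)
      L = np.logspace(-2, 2, 20)                     # total ligand, nM
      signal = bound_depletion(L, 0.5, 1.0, 100.0, 2.0) + rng.normal(0, 1.0, L.size)

      popt, _ = curve_fit(bound_depletion, L, signal,
                          p0=[1.0, 1.0, 50.0, 0.0], bounds=(0, np.inf))
      print("fitted K_D, R_tot, scale, offset:", np.round(popt, 3))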

  16. Estimation of infection prevalence and sensitivity in a stratified two-stage sampling design employing highly specific diagnostic tests when there is no gold standard.

    PubMed

    Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S

    2015-11-10

    In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed on a stratified random sample of the first sample. To estimate infection prevalence, we assumed conditional independence between the diagnostic tests and developed method-of-moments estimators based on expectations of the proportions of people with positive and negative results on both tests that are functions of the tests' sensitivity, specificity, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as well as the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence. Copyright © 2015 John Wiley & Sons, Ltd.
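
    The closed form that the 100% specificity and conditional independence assumptions buy can be seen in a one-stage simplification: the observed positive proportions factor into sensitivity and prevalence terms and can be inverted directly. The sketch below illustrates that moment logic on simulated data; it ignores the stratified second stage and uses invented parameter values, so it is not the paper's estimator.

      import numpy as np

      # With specificity fixed at 100% for both tests and conditional independence
      # given infection status:
      #   E[p1]  = Se1 * prev,  E[p2] = Se2 * prev,  E[p12] = Se1 * Se2 * prev,
      # so prev = p1 * p2 / p12, Se1 = p12 / p2, Se2 = p12 / p1.
      rng = np.random.default_rng(2)
      n, prev, se1, se2 = 5000, 0.10, 0.85, 0.70   # illustrative values

      infected = rng.random(n) < prev
      t1 = infected & (rng.random(n) < se1)        # specificity 100%: no false positives
      t2 = infected & (rng.random(n) < se2)

      p1, p2, p12 = t1.mean(), t2.mean(), (t1 & t2).mean()
      print("estimated prevalence:", round(p1 * p2 / p12, 3))
      print("estimated sensitivity 1:", round(p12 / p2, 3))
      print("estimated sensitivity 2:", round(p12 / p1, 3))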

  17. Causality and headache triggers

    PubMed Central

    Turner, Dana P.; Smitherman, Todd A.; Martin, Vincent T.; Penzien, Donald B.; Houle, Timothy T.

    2013-01-01

    Objective The objective of this study was to explore the conditions necessary to assign causal status to headache triggers. Background The term “headache trigger” is commonly used to label any stimulus that is assumed to cause headaches. However, the assumptions required for determining if a given stimulus in fact has a causal-type relationship in eliciting headaches have not been explicated. Methods A synthesis and application of Rubin’s Causal Model is applied to the context of headache causes. From this application the conditions necessary to infer that one event (trigger) causes another (headache) are outlined using basic assumptions and examples from relevant literature. Results Although many conditions must be satisfied for a causal attribution, three basic assumptions are identified for determining causality in headache triggers: 1) constancy of the sufferer; 2) constancy of the trigger effect; and 3) constancy of the trigger presentation. A valid evaluation of a potential trigger’s effect can only be undertaken once these three basic assumptions are satisfied during formal or informal studies of headache triggers. Conclusions Evaluating these assumptions is extremely difficult or infeasible in clinical practice, and satisfying them during natural experimentation is unlikely. Researchers, practitioners, and headache sufferers are encouraged to avoid natural experimentation to determine the causal effects of headache triggers. Instead, formal experimental designs or retrospective diary studies using advanced statistical modeling techniques provide the best approaches to satisfy the required assumptions and inform causal statements about headache triggers. PMID:23534872

  18. Condition trees as a mechanism for communicating the meaning of uncertainties

    NASA Astrophysics Data System (ADS)

    Beven, Keith

    2015-04-01

    Uncertainty communication for environmental problems is fraught with difficulty for good epistemic reasons. The fact that most sources of uncertainty are subject to, and often dominated by, epistemic uncertainties means that the unthinking use of probability theory might actually be misleading and lead to false inference (even in some cases where the assumptions of a probabilistic error model might seem to be reasonably valid). This therefore creates problems in communicating the meaning of probabilistic uncertainties of model predictions to potential users (there are many examples in hydrology, hydraulics, climate change and other domains). It is suggested that one way of being more explicit about the meaning of uncertainties is to associate each type of application with a condition tree of assumptions that need to be made in producing an estimate of uncertainty. The condition tree then provides a basis for discussion and communication of assumptions about uncertainties with users. Agreement of assumptions (albeit generally at some institutional level) will provide some buy-in on the part of users, and a basis for commissioning of future studies. Even in some relatively well-defined problems, such as mapping flood risk, such a condition tree can be rather extensive, but by making each step in the tree explicit then an audit trail is established for future reference. This can act to provide focus in the exercise of agreeing more realistic assumptions.

  19. New Schools for the Cities: Designs for Equality and Excellence. A Working Paper prepared for the Citizens' Crusade Against Poverty.

    ERIC Educational Resources Information Center

    Pressman, Harvey

    This paper outlines several schemes for developing quality private schools for inner city students. The basic assumption justifying the proposal that such schools be independently managed is that the urban public school systems have patently failed to educate poor children. Therefore, a new national network of independent schools should be…

  20. Life Management: Moving Out! Solving Practical Problems for Independent Living. Utah Home Economics and Family Life Curriculum Guide.

    ERIC Educational Resources Information Center

    Utah State Office of Education, Salt Lake City.

    This guide, which has been developed for Utah's home economics and family life education program, contains materials for use in teaching a life management course emphasizing the problem-solving skills required for independent living. Discussed first are the assumptions underlying the curriculum, development of the guide, and suggestions for its…

  1. Metabolic and Respiratory Costs of Increasing Song Amplitude in Zebra Finches

    PubMed Central

    Zollinger, Sue Anne; Goller, Franz; Brumm, Henrik

    2011-01-01

    Bird song is a widely used model in the study of animal communication and sexual selection, and several song features have been shown to reflect the quality of the singer. Recent studies have demonstrated that song amplitude may be an honest signal of current condition in males and that females prefer high amplitude songs. In addition, birds raise the amplitude of their songs to communicate in noisy environments. Although it is generally assumed that louder song should be more costly to produce, there has been little empirical evidence to support this assumption. We tested the assumption by measuring oxygen consumption and respiratory patterns in adult male zebra finches (Taeniopygia guttata) singing at different amplitudes in different background noise conditions. As background noise levels increased, birds significantly increased the sound pressure level of their songs. We found that louder songs required significantly greater subsyringeal air sac pressure than quieter songs. Though increased pressure is probably achieved by increasing respiratory muscle activity, these increases did not correlate with measurable increases in oxygen consumption. In addition, we found that oxygen consumption increased in higher background noise, independent of singing behaviour. This observation supports previous research in mammals showing that high levels of environmental noise can induce physiological stress responses. While our study did not find that increasing vocal amplitude increased metabolic costs, further research is needed to determine whether there are other non-metabolic costs of singing louder or costs associated with chronic noise exposure. PMID:21915258

  2. On discharge from poppet valves: effects of pressure and system dynamics

    NASA Astrophysics Data System (ADS)

    Winroth, P. M.; Ford, C. L.; Alfredsson, P. H.

    2018-02-01

    Simplified flow models are commonly used to design and optimize internal combustion engine systems. The exhaust valves and ports are modelled as straight pipe flows with a corresponding discharge coefficient. The discharge coefficient is usually determined from steady-flow experiments at low pressure ratios and at fixed valve lifts. The inherent assumptions are that the flow through the valve is insensitive to the pressure ratio and may be considered as quasi-steady. The present study challenges these two assumptions through experiments at varying pressure ratios and by comparing measurements of the discharge coefficient obtained under steady and dynamic conditions. Steady flow experiments were performed in a flow bench, whereas the dynamic measurements were performed on a pressurized, 2 l, fixed volume cylinder with one or two moving valves. In the latter experiments an initial pressure (in the range 300-500 kPa) was established whereafter the valve(s) was opened with a lift profile corresponding to different equivalent engine speeds (in the range 800-1350 rpm). The experiments were only concerned with the blowdown phase, i.e. the initial part of the exhaustion process since no piston was simulated. The results show that the process is neither pressure-ratio independent nor quasi-steady. A measure of the "steadiness" has been defined, relating the relative change in the open flow area of the valve to the relative change of flow conditions in the cylinder, a measure that indicates if the process can be regarded as quasi-steady or not.

  3. Do you really represent my task? Sequential adaptation effects to unexpected events support referential coding for the joint Simon effect.

    PubMed

    Klempova, Bibiana; Liepelt, Roman

    2016-07-01

    Recent findings suggest that a Simon effect (SE) can be induced in Individual go/nogo tasks when responding next to an event-producing object salient enough to provide a reference for the spatial coding of one's own action. However, there is skepticism against referential coding for the joint Simon effect (JSE) by proponents of task co-representation. In the present study, we tested assumptions of task co-representation and referential coding by introducing unexpected double response events in a joint go/nogo and a joint independent go/nogo task. In Experiment 1b, we tested if task representations are functionally similar in joint and standard Simon tasks. In Experiment 2, we tested sequential updating of task co-representation after unexpected single response events in the joint independent go/nogo task. Results showed increased JSEs following unexpected events in the joint go/nogo and joint independent go/nogo task (Experiment 1a). While the former finding is in line with the assumptions made by both accounts (task co-representation and referential coding), the latter finding supports referential coding. In contrast to Experiment 1a, we found a decreased SE after unexpected events in the standard Simon task (Experiment 1b), providing evidence against the functional equivalence assumption between joint and two-choice Simon tasks of the task co-representation account. Finally, we found an increased JSE also following unexpected single response events (Experiment 2), ruling out that the findings of the joint independent go/nogo task in Experiment 1a were due to a re-conceptualization of the task situation. In conclusion, our findings support referential coding also for the joint Simon effect.

  4. The Infeasibility of Experimental Quantification of Life-Critical Software Reliability

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Finelli, George B.

    1991-01-01

    This paper affirms that quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultra-reliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multi-version software experiments support this affirmation.

  5. ψ -ontology result without the Cartesian product assumption

    NASA Astrophysics Data System (ADS)

    Myrvold, Wayne C.

    2018-05-01

    We introduce a weakening of the preparation independence postulate of Pusey et al. [Nat. Phys. 8, 475 (2012), 10.1038/nphys2309] that does not presuppose that the space of ontic states resulting from a product-state preparation can be represented by the Cartesian product of subsystem state spaces. On the basis of this weakened assumption, it is shown that, in any model that reproduces the quantum probabilities, any pair of pure quantum states |ψ⟩, |φ⟩ with ⟨φ|ψ⟩ ≤ 1/√2 must be ontologically distinct.

  6. Existence of ``free will'' as a problem of physics

    NASA Astrophysics Data System (ADS)

    Peres, Asher

    1986-06-01

    The proof of Bell's inequality is based on the assumption that distant observers can freely and independently choose their experiments. As Bell's inequality is experimentally violated, it appears that distant physical systems may behave as a single, nonlocal, indivisible entity. This apparent contradiction is resolved. It is shown that the “free will” assumption is, under usual circumstances, an excellent approximation. I have set before you life and death, blessing and cursing: therefore choose life.... — Deuteronomy XXX, 19

  7. On the assumptions underlying milestoning.

    PubMed

    Vanden-Eijnden, Eric; Venturoli, Maddalena; Ciccotti, Giovanni; Elber, Ron

    2008-11-07

    Milestoning is a procedure to compute the time evolution of complicated processes such as barrier crossing events or long diffusive transitions between predefined states. Milestoning reduces the dynamics to transition events between intermediates (the milestones) and computes the local kinetic information to describe these transitions via short molecular dynamics (MD) runs between the milestones. The procedure relies on the ability to reinitialize MD trajectories on the milestones to get the right kinetic information about the transitions. It also rests on the assumptions that the transition events between successive milestones and the time lags between these transitions are statistically independent. In this paper, we analyze the validity of these assumptions. We show that sets of optimal milestones exist, i.e., sets such that successive transitions are indeed statistically independent. The proof of this claim relies on the results of transition path theory and uses the isocommittor surfaces of the reaction as milestones. For systems in the overdamped limit, we also obtain the probability distribution to reinitialize the MD trajectories on the milestones, and we discuss why this distribution is not available in closed form for systems with inertia. We explain why the time lags between transitions are not statistically independent even for optimal milestones, but we show that working with such milestones allows one to compute mean first passage times between milestones exactly. Finally, we discuss some practical implications of our results and we compare milestoning with Markov state models in view of our findings.
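
    The claim above that mean first passage times (MFPTs) between milestones can be computed exactly admits a compact linear-algebra illustration: given milestone-to-milestone transition probabilities and mean lag times, the MFPT to an absorbing target milestone satisfies a small linear system. The numbers in the sketch below are made up, and in actual milestoning the transition statistics come from the short MD runs between milestones.

      import numpy as np

      # MFPT tau_i to an absorbing target milestone satisfies
      #   tau_i = t_i + sum_j P[i, j] * tau_j,   tau_target = 0,
      # where P are transition probabilities and t_i mean lag times (illustrative values).
      P = np.array([[0.0, 1.0, 0.0, 0.0],
                    [0.5, 0.0, 0.5, 0.0],
                    [0.0, 0.5, 0.0, 0.5],
                    [0.0, 0.0, 0.0, 1.0]])      # milestone 3 is the target (absorbing)
      t = np.array([1.0, 2.0, 2.0, 0.0])        # mean lag times (arbitrary units)

      free = [0, 1, 2]                          # non-target milestones
      A = np.eye(len(free)) - P[np.ix_(free, free)]
      tau_free = np.linalg.solve(A, t[free])
      print("MFPT from milestone 0 to milestone 3:", tau_free[0])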

  8. Random Walks on Homeo(S¹)

    NASA Astrophysics Data System (ADS)

    Malicet, Dominique

    2017-12-01

    In this paper, we study random walks g_n = f_{n-1} ∘ … ∘ f_0 on the group Homeo(S¹) of homeomorphisms of the circle, where the homeomorphisms f_k are chosen randomly and independently with respect to the same probability measure ν. We prove that, under the sole condition that there is no probability measure invariant under ν-almost every homeomorphism, the random walk almost surely contracts small intervals. This generalizes what has been known on the subject until now, since various conditions on ν had previously been imposed in order to obtain the contraction phenomenon. Moreover, we obtain the surprising fact that the rate of contraction is exponential, even in the absence of smoothness assumptions on the f_k. We deduce various dynamical consequences for the random walk (g_n): finiteness of ergodic stationary measures, distribution of the trajectories, asymptotic law of the evaluations, etc. The proof of the main result is based on a modification of the Ávila-Viana invariance principle, adapted to continuous cocycles on a space fibred in circles.

  9. Quantile Regression Models for Current Status Data

    PubMed Central

    Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen

    2016-01-01

    Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator is developed for parameter estimation which is computed using the concave-convex procedure and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307

  10. Detector-device-independent quantum key distribution: Security analysis and fast implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boaron, Alberto; Korzh, Boris; Boso, Gianluca

    One of the most pressing issues in quantum key distribution (QKD) is the problem of detector side-channel attacks. To overcome this problem, researchers proposed an elegant “time-reversal” QKD protocol called measurement-device-independent QKD (MDI-QKD), which is based on time-reversed entanglement swapping. However, MDI-QKD is more challenging to implement than standard point-to-point QKD. Recently, an intermediary QKD protocol called detector-device-independent QKD (DDI-QKD) has been proposed to overcome the drawbacks of MDI-QKD, with the hope that it would eventually lead to a more efficient detector side-channel-free QKD system. Here, we analyze the security of DDI-QKD and elucidate its security assumptions. We find that DDI-QKD is not equivalent to MDI-QKD, but its security can be demonstrated with reasonable assumptions. On the more practical side, we consider the feasibility of DDI-QKD and present a fast experimental demonstration (clocked at 625 MHz), capable of secret key exchange up to more than 90 km.

  11. On the security of semi-device-independent QKD protocols

    NASA Astrophysics Data System (ADS)

    Chaturvedi, Anubhav; Ray, Maharshi; Veynar, Ryszard; Pawłowski, Marcin

    2018-06-01

    While fully device-independent security in (BB84-like) prepare-and-measure quantum key distribution (QKD) is impossible, it can be guaranteed against individual attacks in a semi-device-independent (SDI) scenario, wherein no assumptions are made on the characteristics of the hardware used except for an upper bound on the dimension of the communicated system. Studying security under such minimal assumptions is especially relevant in the context of the recent quantum hacking attacks wherein the eavesdroppers can not only construct the devices used by the communicating parties but are also able to remotely alter their behavior. In this work, we study the security of a SDIQKD protocol based on the prepare-and-measure quantum implementation of a well-known cryptographic primitive, the random access code (RAC). We consider imperfect detectors and establish the critical values of the security parameters (the observed success probability of the RAC and the detection efficiency) required for guaranteeing security against eavesdroppers with and without quantum memory. Furthermore, we suggest a minimal characterization of the preparation device in order to lower the requirements for establishing a secure key.

  12. Approach to atmospheric laser-propagation theory based on the extended Huygens-Fresnel principle and a self-consistency concept.

    PubMed

    Bochove, Erik J; Rao Gudimetla, V S

    2017-01-01

    We propose a self-consistency condition based on the extended Huygens-Fresnel principle, which we apply to the propagation kernel of the mutual coherence function of a partially coherent laser beam propagating through a turbulent atmosphere. The assumption of statistical independence of turbulence in neighboring propagation segments leads to an integral equation in the propagation kernel. This integral equation is satisfied by a Gaussian function, with dependence on the transverse coordinates that is identical to the previous Gaussian formulation by Yura [Appl. Opt. 11, 1399 (1972), 10.1364/AO.11.001399], but differs in the transverse coherence length's dependence on propagation distance, so that this established version violates our self-consistency principle. Our formulation has one free parameter, which in the context of Kolmogorov's theory is independent of turbulence strength and propagation distance. We determined its value by numerical fitting to the rigorous beam propagation theory of Yura and Hanson [J. Opt. Soc. Am. A 6, 564 (1989), 10.1364/JOSAA.6.000564], demonstrating in addition a significant improvement over other Gaussian models.

  13. Optimum flight paths of turbojet aircraft

    NASA Technical Reports Server (NTRS)

    Miele, Angelo

    1955-01-01

    The climb of turbojet aircraft is analyzed and discussed, including the accelerations. Three particular flight performances are examined: minimum time of climb, climb with minimum fuel consumption, and steepest climb. The theoretical results obtained from a previous study are put in a form that is suitable for application under the following simplifying assumptions: the Mach number is considered an independent variable instead of the velocity; the variations of the airplane mass due to fuel consumption are disregarded; the airplane polar is assumed to be parabolic; the path curvatures and the squares of the path angles are disregarded in the projection of the equation of motion on the normal to the path; lastly, an ideal turbojet with performance independent of the velocity is assumed. The optimum Mach number for each flight condition is obtained from the solution of a sixth-order equation in which the coefficients are functions of two fundamental parameters: the ratio of minimum drag in level flight to the thrust and the Mach number that represents flight at constant altitude and maximum lift-drag ratio.

  14. The impact of covariance misspecification in multivariate Gaussian mixtures on estimation and inference: an application to longitudinal modeling.

    PubMed

    Heggeseth, Brianna C; Jewell, Nicholas P

    2013-07-20

    Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence: that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.
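
    The kind of misspecification studied above can be reproduced in a few lines: generate two well-separated Gaussian components with correlated repeated measures, then fit mixtures under the conditional independence (diagonal covariance) assumption and with an unrestricted covariance. All parameter values in the sketch are invented, and the well-separated setting is chosen deliberately so that, consistent with the abstract, the diagonal fit still recovers the means and weights reasonably well.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Two components with correlated repeated measures (illustrative parameters).
      rng = np.random.default_rng(0)
      rho = 0.6
      cov = np.array([[1.0, rho, rho],
                      [rho, 1.0, rho],
                      [rho, rho, 1.0]])
      x1 = rng.multivariate_normal([0.0, 0.0, 0.0], cov, size=500)
      x2 = rng.multivariate_normal([2.0, 2.0, 2.0], cov, size=500)
      X = np.vstack([x1, x2])

      # "diag" imposes conditional independence given the component label; "full" does not.
      for cov_type in ("diag", "full"):
          gm = GaussianMixture(n_components=2, covariance_type=cov_type,
                               n_init=5, random_state=0).fit(X)
          print(cov_type, "means:", np.round(np.sort(gm.means_[:, 0]), 2),
                "weights:", np.round(np.sort(gm.weights_), 2))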

  15. Spectrally-Invariant Approximation Within Atmospheric Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Marshak, A.; Knyazikhin, Y.; Chiu, J. C.; Wiscombe, W. J.

    2011-01-01

    Certain algebraic combinations of single scattering albedo and solar radiation reflected from, or transmitted through, vegetation canopies do not vary with wavelength. These "spectrally invariant relationships" are the consequence of wavelength independence of the extinction coefficient and scattering phase function in vegetation. In general, this wavelength independence does not hold in the atmosphere, but in cloud-dominated atmospheres the total extinction and total scattering phase function vary only weakly with wavelength. This paper identifies the atmospheric conditions under which the spectrally invariant approximation can accurately describe the extinction and scattering properties of cloudy atmospheres. The validity of the assumptions and the accuracy of the approximation are tested with 1D radiative transfer calculations using publicly available radiative transfer models: Discrete Ordinate Radiative Transfer (DISORT) and Santa Barbara DISORT Atmospheric Radiative Transfer (SBDART). It is shown for cloudy atmospheres with cloud optical depth above 3, and for spectral intervals that exclude strong water vapor absorption, that the spectrally invariant relationships found in vegetation canopy radiative transfer are valid to better than 5%. The physics behind this phenomenon, its mathematical basis, and possible applications to remote sensing and climate are discussed.

  16. Valid statistical inference methods for a case-control study with missing data.

    PubMed

    Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun

    2018-04-01

    The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random, by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion of the Wald test for testing independence under the two existing sampling distributions could be completely different from (even contradictory to) that of the Wald test for testing the equality of the success probabilities in the control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.

  17. Effects of turbulence modelling on prediction of flow characteristics in a bench-scale anaerobic gas-lift digester.

    PubMed

    Coughtrie, A R; Borman, D J; Sleigh, P A

    2013-06-01

    Flow in a gas-lift digester with a central draft-tube was investigated using computational fluid dynamics (CFD) and different turbulence closure models. The k-ω Shear-Stress-Transport (SST), Renormalization-Group (RNG) k-∊, Linear Reynolds-Stress-Model (RSM) and Transition-SST models were tested for a gas-lift loop reactor under Newtonian flow conditions and validated against published experimental work. The results identify that flow predictions within the reactor (where flow is transitional) are particularly sensitive to the turbulence model implemented; the Transition-SST model was found to be the most robust for capturing mixing behaviour and predicting separation reliably. Therefore, Transition-SST is recommended over k-∊ models for use in comparable mixing problems. A comparison of results obtained using multiphase Euler-Lagrange and single-phase approaches is presented. The results support the validity of the single-phase modelling assumptions in obtaining reliable predictions of the reactor flow. Solver independence of results was verified by comparing two independent finite-volume solvers (Fluent-13.0sp2 and OpenFOAM-2.0.1). Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Is a time symmetric interpretation of quantum theory possible without retrocausality?

    PubMed Central

    Pusey, Matthew F.

    2017-01-01

    Huw Price has proposed an argument that suggests a time symmetric ontology for quantum theory must necessarily be retrocausal, i.e. it must involve influences that travel backwards in time. One of Price's assumptions is that the quantum state is a state of reality. However, one of the reasons for exploring retrocausality is that it offers the potential for evading the consequences of no-go theorems, including recent proofs of the reality of the quantum state. Here, we show that this assumption can be replaced by a different assumption, called λ-mediation, that plausibly holds independently of the status of the quantum state. We also reformulate the other assumptions behind the argument to place them in a more general framework and pin down the notion of time symmetry involved more precisely. We show that our assumptions imply a timelike analogue of Bell's local causality criterion and, in doing so, give a new interpretation of timelike violations of Bell inequalities. Namely, they show the impossibility of a (non-retrocausal) time symmetric ontology. PMID:28690401

  19. Lods, wrods, and mods: the interpretation of lod scores calculated under different models.

    PubMed

    Hodge, S E; Elston, R C

    1994-01-01

    In this paper we examine the relationships among classical lod scores, "wrod" scores (lod scores calculated under the wrong genetic model), and "mod" scores (lod scores maximized over genetic model parameters). We compare the behavior of these scores when the state of nature is linkage to their behavior when the state of nature is no linkage. We describe sufficient conditions for mod scores to be valid and discuss their use to determine the correct genetic model. We show that lod scores represent a likelihood-ratio test for independence. We explain the "ascertainment-assumption-free" aspect of using mod scores to determine mode of inheritance and we set this aspect into a well-established statistical framework. Finally, we summarize practical guidelines for the use of mod scores.

  20. On an adaptive preconditioned Crank-Nicolson MCMC algorithm for infinite dimensional Bayesian inference

    NASA Astrophysics Data System (ADS)

    Hu, Zixi; Yao, Zhewei; Li, Jinglai

    2017-03-01

    Many scientific and engineering problems require performing Bayesian inference for unknowns of infinite dimension. In such problems, many standard Markov Chain Monte Carlo (MCMC) algorithms become arbitrarily slow under mesh refinement, which is referred to as being dimension dependent. To this end, a family of dimension-independent MCMC algorithms, known as the preconditioned Crank-Nicolson (pCN) methods, was proposed to sample the infinite-dimensional parameters. In this work we develop an adaptive version of the pCN algorithm, where the covariance operator of the proposal distribution is adjusted based on the sampling history to improve simulation efficiency. We show that the proposed algorithm satisfies an important ergodicity condition under some mild assumptions. Finally we provide numerical examples to demonstrate the performance of the proposed method.
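
    For readers unfamiliar with pCN, the sketch below shows the basic (non-adaptive) step: the proposal mixes the current state with a fresh draw from the Gaussian prior so that the prior terms cancel in the acceptance ratio, which is what makes the method robust to mesh refinement. The toy likelihood, dimension, and step size are assumptions for illustration; the adaptive covariance tuning described in the abstract is not reproduced.

      import numpy as np

      def pcn_step(u, log_like, chol_C, beta, rng):
          """One pCN step: propose v = sqrt(1 - beta^2) u + beta * xi, xi ~ N(0, C).
          The Gaussian prior cancels, so only the likelihood enters the acceptance ratio."""
          xi = chol_C @ rng.standard_normal(u.size)
          v = np.sqrt(1.0 - beta**2) * u + beta * xi
          if np.log(rng.random()) < log_like(v) - log_like(u):
              return v, True
          return u, False

      # Toy problem: prior N(0, I) in 100 dimensions, likelihood from one noisy
      # observation of the mean of u (all values illustrative).
      rng = np.random.default_rng(0)
      dim, y, noise_sd = 100, 0.3, 0.1
      log_like = lambda u: -0.5 * ((u.mean() - y) / noise_sd) ** 2

      u = np.zeros(dim)
      chol_C = np.eye(dim)                  # Cholesky factor of the prior covariance
      accepted = 0
      for _ in range(5000):
          u, acc = pcn_step(u, log_like, chol_C, beta=0.2, rng=rng)
          accepted += acc
      print("acceptance rate:", accepted / 5000, "final u.mean():", round(u.mean(), 3))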

  1. Flood Protection Decision Making Within a Coupled Human and Natural System

    NASA Astrophysics Data System (ADS)

    O'Donnell, Greg; O'Connell, Enda

    2013-04-01

    Due to the perceived threat from climate change, prediction under changing climatic and hydrological conditions has become a dominant theme of hydrological research. Much of this research has been climate model-centric, in which GCM/RCM climate projections have been used to drive hydrological system models to explore potential impacts that should inform adaptation decision-making. However, adaptation fundamentally involves how humans may respond to increasing flood and drought hazards by changing their strategies, activities and behaviours which are coupled in complex ways to the natural systems within which they live and work. Humans are major agents of change in hydrological systems, and representing human activities and behaviours in coupled human and natural hydrological system models is needed to gain insight into the complex interactions that take place, and to inform adaptation decision-making. Governments and their agencies are under pressure to make proactive investments to protect people living in floodplains from the perceived increasing flood hazard. However, adopting this as a universal strategy everywhere is not affordable, particularly in times of economic stringency and given uncertainty about future climatic conditions. It has been suggested that the assumption of stationarity, which has traditionally been invoked in making hydrological risk assessments, is no longer tenable. However, before the assumption of hydrologic nonstationarity is accepted, the ability to cope with the uncertain impacts of global warming on water management via the operational assumption of hydrologic stationarity should be carefully examined. Much can be learned by focussing on natural climate variability and its inherent changes in assessing alternative adaptation strategies. A stationary stochastic multisite flood hazard model has been developed that can exhibit increasing variability/persistence in annual maximum floods, starting with the traditional assumption of independence. This has been coupled to an agent based model of how various stakeholders interact in determining where and when flood protection investments are made in a hypothetical region with multiple sites at risk from flood hazard. Monte Carlo simulation is used to explore how government agencies with finite resources might best invest in flood protection infrastructure in a highly variable climate with a high degree of future uncertainty. Insight is provided into whether proactive or reactive strategies are to be preferred in an increasingly variable climate.

  2. A Bottom-Up Approach to Understanding Protein Layer Formation at Solid-Liquid Interfaces

    PubMed Central

    Kastantin, Mark; Langdon, Blake B.; Schwartz, Daniel K.

    2014-01-01

    A common goal across different fields (e.g. separations, biosensors, biomaterials, pharmaceuticals) is to understand how protein behavior at solid-liquid interfaces is affected by environmental conditions. Temperature, pH, ionic strength, and the chemical and physical properties of the solid surface, among many factors, can control microscopic protein dynamics (e.g. adsorption, desorption, diffusion, aggregation) that contribute to macroscopic properties like time-dependent total protein surface coverage and protein structure. These relationships are typically studied through a top-down approach in which macroscopic observations are explained using analytical models that are based upon reasonable, but not universally true, simplifying assumptions about microscopic protein dynamics. Conclusions connecting microscopic dynamics to environmental factors can be heavily biased by potentially incorrect assumptions. In contrast, more complicated models avoid several of the common assumptions but require many parameters that have overlapping effects on predictions of macroscopic, average protein properties. Consequently, these models are poorly suited for the top-down approach. Because the sophistication incorporated into these models may ultimately prove essential to understanding interfacial protein behavior, this article proposes a bottom-up approach in which direct observations of microscopic protein dynamics specify parameters in complicated models, which then generate macroscopic predictions to compare with experiment. In this framework, single-molecule tracking has proven capable of making direct measurements of microscopic protein dynamics, but must be complemented by modeling to combine and extrapolate many independent microscopic observations to the macro-scale. The bottom-up approach is expected to better connect environmental factors to macroscopic protein behavior, thereby guiding rational choices that promote desirable protein behaviors. PMID:24484895

  3. Estimating causal effects with a non-paranormal method for the design of efficient intervention experiments

    PubMed Central

    2014-01-01

    Background Knockdown or overexpression of genes is widely used to identify genes that play important roles in many aspects of cellular functions and phenotypes. Because next-generation sequencing generates high-throughput data that allow us to detect genes, it is important to identify genes that drive functional and phenotypic changes of cells. However, conventional methods rely heavily on the assumption of normality and they often give incorrect results when the assumption is not true. To relax the Gaussian assumption in causal inference, we introduce the non-paranormal method to test conditional independence in the PC-algorithm. Then, we present the non-paranormal intervention-calculus when the directed acyclic graph (DAG) is absent (NPN-IDA), which incorporates the cumulative nature of effects through a cascaded pathway via causal inference for ranking causal genes against a phenotype with the non-paranormal method for estimating DAGs. Results We demonstrate that causal inference with the non-paranormal method significantly improves the performance in estimating DAGs on synthetic data in comparison with the original PC-algorithm. Moreover, we show that NPN-IDA outperforms the conventional methods in exploring regulators of the flowering time in Arabidopsis thaliana and regulators that control the browning of white adipocytes in mice. Our results show that performance improvement in estimating DAGs contributes to an accurate estimation of causal effects. Conclusions Although the simplest alternative procedure was used, our proposed method enables us to design efficient intervention experiments and can be applied to a wide range of research purposes, including drug discovery, because of its generality. PMID:24980787

  4. Estimating causal effects with a non-paranormal method for the design of efficient intervention experiments.

    PubMed

    Teramoto, Reiji; Saito, Chiaki; Funahashi, Shin-ichi

    2014-06-30

    Knockdown or overexpression of genes is widely used to identify genes that play important roles in many aspects of cellular functions and phenotypes. Because next-generation sequencing generates high-throughput data that allow us to detect genes, it is important to identify genes that drive functional and phenotypic changes of cells. However, conventional methods rely heavily on the assumption of normality and they often give incorrect results when the assumption is not true. To relax the Gaussian assumption in causal inference, we introduce the non-paranormal method to test conditional independence in the PC-algorithm. Then, we present the non-paranormal intervention-calculus when the directed acyclic graph (DAG) is absent (NPN-IDA), which incorporates the cumulative nature of effects through a cascaded pathway via causal inference for ranking causal genes against a phenotype with the non-paranormal method for estimating DAGs. We demonstrate that causal inference with the non-paranormal method significantly improves the performance in estimating DAGs on synthetic data in comparison with the original PC-algorithm. Moreover, we show that NPN-IDA outperforms the conventional methods in exploring regulators of the flowering time in Arabidopsis thaliana and regulators that control the browning of white adipocytes in mice. Our results show that performance improvement in estimating DAGs contributes to an accurate estimation of causal effects. Although the simplest alternative procedure was used, our proposed method enables us to design efficient intervention experiments and can be applied to a wide range of research purposes, including drug discovery, because of its generality.
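
    The non-paranormal device used above amounts to mapping each variable to Gaussian margins (a normal-scores transform) before running the Gaussian conditional independence tests inside the PC-algorithm. The sketch below illustrates the idea on a monotone-transformed chain; the transform, the toy data, and the partial-correlation check are an illustrative reconstruction, not the authors' NPN-IDA implementation.

      import numpy as np
      from scipy import stats

      def normal_scores(x):
          """Empirical non-paranormal transform: rank -> Phi^{-1}(rank / (n + 1))."""
          ranks = stats.rankdata(x)
          return stats.norm.ppf(ranks / (len(x) + 1))

      # Latent Gaussian chain u -> v -> w, observed through monotone non-linear maps.
      rng = np.random.default_rng(0)
      n = 5000
      u = rng.normal(size=n)
      v = 0.8 * u + rng.normal(size=n)
      w = 0.8 * v + rng.normal(size=n)
      x, y, z = u, np.exp(v), w ** 3        # non-Gaussian observations

      X = np.column_stack([normal_scores(col) for col in (x, y, z)])

      prec = np.linalg.inv(np.corrcoef(X, rowvar=False))
      pcorr_xz_given_y = -prec[0, 2] / np.sqrt(prec[0, 0] * prec[2, 2])
      print("partial corr of x and z given y (transformed scale):",
            round(pcorr_xz_given_y, 3))
      # Near zero, consistent with the chain x -> y -> z even though y and z are
      # far from Gaussian on their original scales.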

  5. Merging cranial histology and 3D-computational biomechanics: a review of the feeding ecology of a Late Triassic temnospondyl amphibian

    PubMed Central

    Gruntmejer, Kamil; Marcé-Nogué, Jordi; Bodzioch, Adam; Fortuny, Josep

    2018-01-01

    Finite Element Analysis (FEA) is a useful method for understanding form and function. However, modelling of fossil taxa invariably involves assumptions as a result of preservation-induced loss of information in the fossil record. To test the validity of predictions from FEA, given such assumptions, these results could be compared to independent lines of evidence for cranial mechanics. In the present study a new concept of using bone microstructure to predict stress distribution in the skull during feeding is put forward, and the correlation between bone microstructure and the results of computational biomechanics (FEA) is examined. The bony framework is a product of biological optimisation; bone structure is created to meet local mechanical conditions. To test how well results from FEA correlate to cranial mechanics predicted from bone structure, the well-known temnospondyl Metoposaurus krasiejowensis was used as a model. A crucial issue for Temnospondyli is their feeding mode: did they suction feed or employ direct biting, or both? Metoposaurids have previously been characterised either as active hunters or passive bottom dwellers. In order to test the correlation between results from FEA and bone microstructure, two skulls of Metoposaurus were used: one was modelled with FE analyses, while for the second the microstructure of 17 dermal bones was analysed. Thus, for the first time, results predicting cranial mechanical behaviour using both methods are merged to understand the feeding strategy of Metoposaurus. Metoposaurus appears to have been an aquatic animal that exhibited a generalist feeding behaviour. This taxon may have used two foraging techniques in hunting: mainly bilateral biting and, to a lesser extent, lateral strikes. However, bone microstructure suggests that lateral biting was more frequent than suggested by Finite Element Analysis (FEA). One of the potential factors that determined its mode of life may have been water levels. During optimum water conditions, metoposaurids may have been more active ambush predators that were capable of lateral strikes of the head. The dry season required a less active mode of life, when bilateral biting is particularly efficient. This, combined with their characteristically anteriorly positioned orbits, was optimal for an ambush strategy. This ability to use alternative modes of food acquisition, independent of environmental conditions, might hold the key to explaining the very common occurrence of metoposaurids during the Late Triassic. PMID:29503770

  6. Human vision is attuned to the diffuseness of natural light

    PubMed Central

    Morgenstern, Yaniv; Geisler, Wilson S.; Murray, Richard F.

    2014-01-01

    All images are highly ambiguous, and to perceive 3-D scenes, the human visual system relies on assumptions about what lighting conditions are most probable. Here we show that human observers' assumptions about lighting diffuseness are well matched to the diffuseness of lighting in real-world scenes. We use a novel multidirectional photometer to measure lighting in hundreds of environments, and we find that the diffuseness of natural lighting falls in the same range as previous psychophysical estimates of the visual system's assumptions about diffuseness. We also find that natural lighting is typically directional enough to override human observers' assumption that light comes from above. Furthermore, we find that, although human performance on some tasks is worse in diffuse light, this can be largely accounted for by intrinsic task difficulty. These findings suggest that human vision is attuned to the diffuseness levels of natural lighting conditions. PMID:25139864

  7. Astrophysics-independent bounds on the annual modulation of dark matter signals.

    PubMed

    Herrero-Garcia, Juan; Schwetz, Thomas; Zupan, Jure

    2012-10-05

    We show how constraints on the time integrated event rate from a given dark matter (DM) direct detection experiment can be used to bound the amplitude of the annual modulation signal in another experiment. The method requires only mild assumptions about the properties of the local DM distribution: that it is temporally stable on the scale of months and spatially homogeneous on the ecliptic. We apply the method to the annual modulation signal in DAMA/LIBRA, which we compare to the bounds derived from XENON10, XENON100, cryogenic DM search, and SIMPLE data. Assuming a DM mass of 10 GeV, we show that under the above assumptions about the DM halo, a DM interpretation of the DAMA/LIBRA signal is excluded for several classes of models: at 6.3σ (4.6σ) for elastic isospin conserving (violating) spin-independent interactions, and at 4.9σ for elastic spin-dependent interactions on protons.

  8. Zipf's word frequency law in natural language: a critical review and future directions.

    PubMed

    Piantadosi, Steven T

    2014-10-01

    The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data.
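
    As a quick numerical illustration of the rank-frequency form discussed above, the sketch below samples a synthetic corpus from a vocabulary whose true word probabilities are proportional to 1/rank and confirms that the empirical log-frequency versus log-rank slope comes out near -1. The corpus is synthetic by construction; testing the law itself, as the review emphasizes, requires natural-language data.

      import numpy as np

      # Build a vocabulary with true probabilities f(r) proportional to 1/r (Zipf's law),
      # sample tokens, and fit the empirical rank-frequency slope in log-log space.
      rng = np.random.default_rng(0)
      V = 2000
      true_p = 1.0 / np.arange(1, V + 1)
      true_p /= true_p.sum()

      tokens = rng.choice(V, size=500_000, p=true_p)
      counts = np.sort(np.bincount(tokens, minlength=V))[::-1].astype(float)

      top = counts[:500]                               # frequent types, well estimated
      slope, _ = np.polyfit(np.log(np.arange(1, top.size + 1)), np.log(top), 1)
      print("empirical log-log slope:", round(slope, 2))   # close to -1 by construction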

  9. The ethnic composition of migration in the former Soviet Union.

    PubMed

    Robertson, L R

    1996-02-01

    "This paper examines the impact of the disintegration of the Soviet Union on migration patterns within the newly independent states. Data on migration between Russia and the other 14 former Soviet republics are analyzed to reveal the magnitude and ethnic composition of migration after independence and to examine the assumption that Russians will tend to return to Russia, whereas members of other titular groups will emigrate to their respective newly independent states. The data suggest that nationalization not only pushes non-titular groups to emigrate from the former Soviet republics, but also pulls titular groups to immigrate to the newly independent states from Russia." excerpt

  10. The attenuation of Love waves and toroidal oscillations of the earth.

    NASA Technical Reports Server (NTRS)

    Jackson, D. D.

    1971-01-01

    An attempt has been made to invert a large set of attenuation data for Love waves and toroidal oscillations in the earth, using a recent method by Backus and Gilbert. The difficulty in finding an acceptable model of internal friction which explains the data, under the assumption that the internal friction is independent of frequency, casts doubt on the validity of this assumption. A frequency-dependent model of internal friction is presented which is in good agreement with the seismic data and with recent experimental measurements of attenuation in rocks.

  11. Voluntary control of corticomuscular coherence through neurofeedback: a proof-of-principle study in healthy subjects.

    PubMed

    von Carlowitz-Ghori, K; Bayraktaroglu, Z; Waterstraat, G; Curio, G; Nikulin, V V

    2015-04-02

    Corticomuscular coherence (CMC) relates to synchronization between activity in the motor cortex and muscle activity. The strength of CMC can be affected by motor behavior. In a proof-of-principle study, we examined whether, independent of motor output parameters, healthy subjects are able to voluntarily modulate CMC in a neurofeedback paradigm. Subjects received visual online feedback of their instantaneous CMC strength, which was calculated between an optimized spatial projection of multichannel electroencephalography (EEG) and electromyography (EMG) in an individually defined target frequency range. The neurofeedback training consisted of either increasing or decreasing CMC strength using a self-chosen mental strategy while performing a simple motor task. Evaluation of instantaneous coherence showed that CMC strength was significantly larger when subjects had to increase CMC than when they had to decrease it; this difference between the two task conditions did not depend on motor performance. The exclusion of confounding factors such as motor performance, attention and task complexity in the study design provides evidence that subjects were able to voluntarily modify CMC independent of motor output parameters. Additional analysis further strengthened the assumption that the subjects' response was specifically shaped by the neurofeedback. In perspective, we suggest that CMC-based neurofeedback could provide a therapeutic approach in clinical conditions, such as motor stroke, where CMC is altered. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
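
    As a standalone illustration of the coherence measure itself (not the study's EEG/EMG pipeline or its spatial projection), magnitude-squared coherence between two signals can be estimated with scipy.signal.coherence; the sampling rate, frequency band, and synthetic signals below are assumptions.

        import numpy as np
        from scipy.signal import coherence

        fs = 1000.0                       # sampling rate in Hz (assumed)
        t = np.arange(0, 60, 1 / fs)      # 60 s of synthetic data
        rng = np.random.default_rng(0)

        # A shared 20 Hz (beta-band) component drives both the "EEG" and "EMG" channels.
        common = np.sin(2 * np.pi * 20 * t)
        eeg = common + 2.0 * rng.standard_normal(t.size)
        emg = 0.5 * common + 2.0 * rng.standard_normal(t.size)

        f, cxy = coherence(eeg, emg, fs=fs, nperseg=1024)
        beta = (f >= 15) & (f <= 30)
        print(f"peak beta-band coherence ~ {cxy[beta].max():.2f}")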

  12. Influence of substrate exposure history on biodegradation in a porous medium

    NASA Astrophysics Data System (ADS)

    Park, J.; Chen, Y.-M.; Kukor, J. J.; Abriola, L. M.

    2001-10-01

    This study investigates the influence of fluctuating toluene concentrations on aerobic toluene degradation in a sandy porous medium colonized with Ralstonia pickettii PKO1. Column effluent toluene concentrations were found to increase after a temporary decrease in influent toluene concentration. Subsequent examination of the spatial gradient of toluene degradative activity in the column suggested that the observed increase in effluent toluene concentrations was attributable to an adverse effect of toluene limitation on the biodegradative activity of attached cells. The traditional Michaelis-Menten-type biodegradation equation associated with batch-measured Vmax (2.26 mg toluene/mg living cell/day) and KS (1.20 mg toluene/l) of nonstarved cells was unable to predict the observed toluene breakthrough behavior when the column had been previously exposed to no-toluene conditions. An alternative modeling approach was developed based upon the assumptions that (i) degradative activity was completely deactivated within the no-toluene exposure period (53.5 h) and (ii) a lag-phase was present prior to the subsequent reactivation of degradative activity in previously toluene-starved cells. These assumptions were independently verified by batch microbial investigations, and the modified model provided a good fit to the same observed toluene breakthrough curve. Application of single lag-time and threshold concentration values, however, failed to predict observed toluene breakthrough under different toluene exposure conditions. Results of this experimental and modeling investigation suggested that substrate exposure history, including the length of the starvation period and the level of substrate concentration, affected the induction of biodegradation in the porous medium.
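
    The alternative modeling idea described here, Michaelis-Menten degradation that is switched off during starvation and reactivated only after a lag phase, can be caricatured in a few lines; the batch Vmax and KS are the values quoted in the abstract, while the biomass, lag time, initial concentration, and time step below are invented for illustration.

        import numpy as np

        V_MAX, K_S = 2.26, 1.20          # batch values from the abstract (mg/mg/day, mg/L)
        X, LAG = 0.5, 0.5                # biomass (mg cells/L) and lag time (days) -- assumed

        def dS_dt(S, active):
            """Michaelis-Menten degradation, switched off while cells are still deactivated."""
            return -V_MAX * X * S / (K_S + S) if active else 0.0

        dt = 0.01                        # days
        S = 5.0                          # mg/L toluene when feeding resumes (assumed)
        t_since_reexposure = 0.0
        for _ in range(int(3.0 / dt)):   # three days after toluene is re-introduced
            active = t_since_reexposure >= LAG   # lag phase before reactivation
            S = max(S + dt * dS_dt(S, active), 0.0)
            t_since_reexposure += dt
        print(f"toluene remaining after 3 days: {S:.2f} mg/L")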

  13. Effect of body size and body mass on δ13C and δ15N in coastal fishes and cephalopods

    NASA Astrophysics Data System (ADS)

    Vinagre, C.; Máguas, C.; Cabral, H. N.; Costa, M. J.

    2011-11-01

    Carbon and nitrogen isotopes have been widely used in the investigation of trophic relations, energy pathways, trophic levels and migrations, under the assumption that δ13C is independent of body size and that variation in δ15N occurs exclusively due to ontogenetic changes in diet and not body size increase per se. However, several studies have shown that these assumptions are uncertain. Data from food-webs containing an important number of species lack theoretical support on these assumptions because very few species have been tested for δ13C and δ15N variation in captivity. However, if sampling comprises a wide range of body sizes from various species, the variation of δ13C and δ15N with body size can be investigated. While correlation between body size and δ13C and δ15N can be due to ontogenetic diet shifts, stability in such values throughout the size spectrum can be considered an indication that δ13C and δ15N in muscle tissues of such species is independent of body size within that size range, and thus the basic assumptions can be applied in the interpretation of such food webs. The present study investigated the variation in muscle δ13C and δ15N with body size and body mass of coastal fishes and cephalopods. It was concluded that muscle δ13C and δ15N did not vary with body size or mass for all bony fishes with only one exception, the dragonet Callionymus lyra. Muscle δ13C and δ15N also did not vary with body size or mass in cartilaginous fishes and cephalopods, meaning that body size/mass per se have no effect on δ13C or δ15N, for most species analysed and within the size ranges sampled. The assumption that δ13C is independent of body size and that variation in δ15N is not affected by body size increase per se was upheld for most organisms and can be applied to the coastal food web studied taking into account that C. lyra is an exception.

  14. Bridging Cultures with Classroom Strategies.

    ERIC Educational Resources Information Center

    Rothstein-Fisch, Carrie; Greenfield, Patricia M.; Trumbull, Elise

    1999-01-01

    Collectivism, stressing family members' interdependence, is common to Latino cultures. In contrast, schools foster independence and individual achievement. To help teachers understand assumptions underlying these different values, the authors developed the "Bridging Cultures Project" as a research-based professional-development program.…

  15. Effect of source tampering in the security of quantum cryptography

    NASA Astrophysics Data System (ADS)

    Sun, Shi-Hai; Xu, Feihu; Jiang, Mu-Sheng; Ma, Xiang-Chun; Lo, Hoi-Kwong; Liang, Lin-Mei

    2015-08-01

    The security of the source has become an increasingly important issue in quantum cryptography. In the framework of measurement-device-independent quantum key distribution (MDI-QKD), the source becomes the only region exploitable by a potential eavesdropper (Eve). Phase randomization is a cornerstone assumption in most discrete-variable (DV) quantum communication protocols (e.g., QKD, quantum coin tossing, weak-coherent-state blind quantum computing, and so on), and the violation of such an assumption is thus fatal to the security of those protocols. In this paper, we show a simple quantum hacking strategy, with commercial and homemade pulsed lasers, by which Eve can actively tamper with the source and violate such an assumption, without leaving a trace afterwards. Furthermore, our attack may also be valid for continuous-variable (CV) QKD, which is another main class of QKD protocol, since, beyond the phase randomization assumption, other parameters (e.g., intensity) that directly determine the security of CV-QKD could also be changed.

  16. Device-independent security of quantum cryptography against collective attacks.

    PubMed

    Acín, Antonio; Brunner, Nicolas; Gisin, Nicolas; Massar, Serge; Pironio, Stefano; Scarani, Valerio

    2007-06-08

    We present the optimal collective attack on a quantum key distribution protocol in the "device-independent" security scenario, where no assumptions are made about the way the quantum key distribution devices work or on what quantum system they operate. Our main result is a tight bound on the Holevo information between one of the authorized parties and the eavesdropper, as a function of the amount of violation of a Bell-type inequality.

  17. Importance of spatial autocorrelation in modeling bird distributions at a continental scale

    USGS Publications Warehouse

    Bahn, V.; O'Connor, R.J.; Krohn, W.B.

    2006-01-01

    Spatial autocorrelation in species' distributions has been recognized as inflating the probability of a type I error in hypothesis tests, causing biases in variable selection, and violating the assumption of independence of error terms in models such as correlation or regression. However, it remains unclear whether these problems occur at all spatial resolutions and extents, and under which conditions spatially explicit modeling techniques are superior. Our goal was to determine whether spatial models were superior at large extents and across many different species. In addition, we investigated the importance of purely spatial effects in distribution patterns relative to the variation that could be explained through environmental conditions. We studied distribution patterns of 108 bird species in the conterminous United States using ten years of data from the Breeding Bird Survey. We compared the performance of spatially explicit regression models with non-spatial regression models using Akaike's information criterion. In addition, we partitioned the variance in species distributions into an environmental, a pure spatial and a shared component. The spatially explicit conditional autoregressive regression models strongly outperformed the ordinary least squares regression models. In addition, partialling out the spatial component underlying the species' distributions showed that an average of 17% of the explained variation could be attributed to purely spatial effects independent of the spatial autocorrelation induced by the underlying environmental variables. We concluded that location in the range and neighborhood play an important role in the distribution of species. Spatially explicit models are expected to yield better predictions especially for mobile species such as birds, even in coarse-grained models with a large extent. © Ecography.
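
    The type I error inflation mentioned at the start of this abstract can be reproduced with a small Monte Carlo experiment in which both a predictor and a response are spatially autocorrelated but truly unrelated; the grid size, covariance range, and naive t-test below are arbitrary assumptions, not the Breeding Bird Survey analysis.

        import numpy as np

        rng = np.random.default_rng(1)
        n_side, n_sims = 15, 500
        n = n_side ** 2
        coords = np.array([(i, j) for i in range(n_side) for j in range(n_side)], float)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        cov = np.exp(-d / 3.0)                       # exponential spatial covariance (assumed range)
        L = np.linalg.cholesky(cov + 1e-8 * np.eye(n))

        false_pos = 0
        for _ in range(n_sims):
            x = L @ rng.standard_normal(n)           # spatially structured predictor
            y = L @ rng.standard_normal(n)           # spatially structured, unrelated response
            X = np.column_stack([np.ones(n), x])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            resid = y - X @ beta
            s2 = resid @ resid / (n - 2)
            se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
            false_pos += abs(beta[1] / se) > 1.96    # naive t-test assuming independent errors
        print(f"empirical type I error rate: {false_pos / n_sims:.2f} (nominal 0.05)")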

  18. Feminist Therapy.

    ERIC Educational Resources Information Center

    Laidlaw, Toni; Malmo, Cheryl

    1991-01-01

    Traces roots of feminist therapy and its independence from traditional and prevalent theories and therapy practices. Asserts that Freudian theory and humanistic assumptions are sexist and contribute to powerlessness of women. In contrast, feminist therapy is seen as dealing directly with client-counselor relationships, trust, advocacy, and…

  19. Bell Nonlocality, Signal Locality and Unpredictability (or What Bohr Could Have Told Einstein at Solvay Had He Known About Bell Experiments)

    NASA Astrophysics Data System (ADS)

    Cavalcanti, Eric G.; Wiseman, Howard M.

    2012-10-01

    The 1964 theorem of John Bell shows that no model that reproduces the predictions of quantum mechanics can simultaneously satisfy the assumptions of locality and determinism. On the other hand, the assumptions of signal locality plus predictability are also sufficient to derive Bell inequalities. This simple theorem, previously noted but published only relatively recently by Masanes, Acin and Gisin, has fundamental implications not entirely appreciated. Firstly, nothing can be concluded about the ontological assumptions of locality or determinism independently of each other—it is possible to reproduce quantum mechanics with deterministic models that violate locality as well as indeterministic models that satisfy locality. On the other hand, the operational assumption of signal locality is an empirically testable (and well-tested) consequence of relativity. Thus Bell inequality violations imply that we can trust that some events are fundamentally unpredictable, even if we cannot trust that they are indeterministic. This result grounds the quantum-mechanical prohibition of arbitrarily accurate predictions on the assumption of no superluminal signalling, regardless of any postulates of quantum mechanics. It also sheds a new light on an early stage of the historical debate between Einstein and Bohr.

  20. The Valuation of Scientific and Technical Experiments

    NASA Technical Reports Server (NTRS)

    Williams, F. E.

    1972-01-01

    Rational selection of scientific and technical experiments for space missions is studied. Particular emphasis is placed on the assessment of value or worth of an experiment. A specification procedure is outlined and discussed for the case of one decision maker. Experiments are viewed as multi-attributed entities, and a relevant set of attributes is proposed. Alternative methods of describing levels of the attributes are proposed and discussed. The reasonableness of certain simplifying assumptions such as preferential and utility independence is explored, and it is tentatively concluded that preferential independence applies and utility independence appears to be appropriate.

  1. Parameter estimation in nonlinear distributed systems - Approximation theory and convergence results

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Reich, Simeon; Rosen, I. G.

    1988-01-01

    An abstract approximation framework and convergence theory is described for Galerkin approximations applied to inverse problems involving nonlinear distributed parameter systems. Parameter estimation problems are considered and formulated as the minimization of a least-squares-like performance index over a compact admissible parameter set subject to state constraints given by an inhomogeneous nonlinear distributed system. The theory applies to systems whose dynamics can be described by either time-independent or nonstationary strongly maximal monotonic operators defined on a reflexive Banach space which is densely and continuously embedded in a Hilbert space. It is demonstrated that if readily verifiable conditions on the system's dependence on the unknown parameters are satisfied, and the usual Galerkin approximation assumption holds, then solutions to the approximating problems exist and approximate a solution to the original infinite-dimensional identification problem.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honorio, J.; Goldstein, R.; Honorio, J.

    We propose a simple, well-grounded classification technique which is suited for group classification on brain fMRI data sets that have high dimensionality, a small number of subjects, high noise levels, high subject variability, imperfect registration, and subtle cognitive effects. We propose the threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results in two block-design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.
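
    The combination of one feature per condition, a feature-independence assumption, and majority voting can be sketched as a bank of per-feature threshold classifiers whose votes are pooled; the synthetic data, midpoint thresholds, and leave-one-out loop below are assumptions standing in for the paper's threshold-split-region procedure.

        import numpy as np

        rng = np.random.default_rng(0)
        n_subjects, n_features = 20, 50
        labels = np.array([0] * 10 + [1] * 10)            # two groups of subjects
        X = rng.standard_normal((n_subjects, n_features))
        X[labels == 1, :5] += 1.0                         # assumed group effect in five features

        def fit_thresholds(X_tr, y_tr):
            """One threshold per feature (feature-independence assumption): midpoint of class means."""
            mu0, mu1 = X_tr[y_tr == 0].mean(0), X_tr[y_tr == 1].mean(0)
            return (mu0 + mu1) / 2.0, np.sign(mu1 - mu0)

        def majority_vote(X_te, thr, direction):
            votes = (np.sign(X_te - thr) == direction)     # each feature votes independently
            return (votes.mean(axis=1) > 0.5).astype(int)

        correct = 0
        for i in range(n_subjects):                        # leave-one-subject-out evaluation
            train = np.arange(n_subjects) != i
            thr, direction = fit_thresholds(X[train], labels[train])
            correct += int(majority_vote(X[i:i + 1], thr, direction)[0] == labels[i])
        print(f"leave-one-out accuracy: {correct / n_subjects:.2f}")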

  3. a Generic Probabilistic Model and a Hierarchical Solution for Sensor Localization in Noisy and Restricted Conditions

    NASA Astrophysics Data System (ADS)

    Ji, S.; Yuan, X.

    2016-06-01

    A generic probabilistic model, based on the fundamental Bayes' rule and the Markov assumption, is introduced to integrate the process of mobile platform localization with optical sensors. Based on it, three relatively independent solutions, bundle adjustment, Kalman filtering and particle filtering, are deduced under different additional restrictions. We aim to show, first, that Kalman filtering may be a better initial-value supplier for bundle adjustment than traditional relative orientation in irregular strips and networks or when tie-point extraction fails. Second, in highly noisy conditions, particle filtering can act as a bridge for gap binding when a large number of gross errors would cause a Kalman filter or a bundle adjustment to fail. Third, both filtering methods, which help reduce error propagation and eliminate gross errors, support a global and static bundle adjustment, which requires the strictest initial values and control conditions. The main innovation is the integrated processing of stochastic errors and gross errors in sensor observations, and the integration of the three most widely used solutions, bundle adjustment, Kalman filtering and particle filtering, into a generic probabilistic localization model. Tests in noisy and restricted situations are designed and examined to support these claims.
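
    Under the Bayes-rule-plus-Markov structure this abstract starts from, the one-dimensional linear-Gaussian case reduces to the familiar Kalman filter; the motion model, noise variances, and trajectory below are illustrative assumptions rather than the paper's sensor configuration.

        import numpy as np

        rng = np.random.default_rng(2)
        n_steps, q, r, v = 50, 0.05, 1.0, 0.5    # process/measurement variances and speed (assumed)

        # Ground-truth 1-D platform position and noisy position measurements.
        truth = np.cumsum(np.full(n_steps, v) + rng.normal(0, np.sqrt(q), n_steps))
        z = truth + rng.normal(0, np.sqrt(r), n_steps)

        x, p = 0.0, 10.0                         # prior mean and variance (assumed)
        estimates = []
        for k in range(n_steps):
            x, p = x + v, p + q                  # predict: Markov assumption on the state
            gain = p / (p + r)                   # update: Bayes' rule for a Gaussian likelihood
            x, p = x + gain * (z[k] - x), (1 - gain) * p
            estimates.append(x)

        rmse_raw = np.sqrt(np.mean((z - truth) ** 2))
        rmse_kf = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
        print(f"raw-measurement RMSE {rmse_raw:.2f} vs Kalman-filter RMSE {rmse_kf:.2f}")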

  4. Topological supersymmetry breaking: The definition and stochastic generalization of chaos and the limit of applicability of statistics

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, Igor V.; Schwartz, Robert N.; Wang, Kang L.

    2016-03-01

    The concept of deterministic dynamical chaos has a long history and is well established by now. Nevertheless, its field theoretic essence and its stochastic generalization have been revealed only very recently. Within the newly found supersymmetric theory of stochastics (STS), all stochastic differential equations (SDEs) possess topological or de Rham supersymmetry and stochastic chaos is the phenomenon of its spontaneous breakdown. Even though the STS is free of approximations and thus is technically solid, it is still missing a firm interpretational basis in order to be physically sound. Here, we make a few important steps toward the construction of the interpretational foundation for the STS. In particular, we discuss that one way to understand why the ground states of chaotic SDEs are conditional (not total) probability distributions is that some of the variables have infinite memory of initial conditions and thus are not “thermalized”, i.e., cannot be described by the initial-conditions-independent probability distributions. As a result, the definitive assumption of physical statistics that the ground state is a steady-state total probability distribution is not valid for chaotic SDEs.

  5. Dealing with periodical loads and harmonics in operational modal analysis using time-varying transmissibility functions

    NASA Astrophysics Data System (ADS)

    Weijtjens, Wout; Lataire, John; Devriendt, Christof; Guillaume, Patrick

    2014-12-01

    Periodical loads, such as waves and rotating machinery, form a problem for operational modal analysis (OMA). In OMA only the vibrations of a structure of interest are measured and little to nothing is known about the loads causing these vibrations. Therefore, it is often assumed that all dynamics in the measured data are linked to the system of interest. Periodical loads defy this assumption as their periodical behavior is often visible within the measured vibrations. As a consequence most OMA techniques falsely associate the dynamics of the periodical load with the system of interest. Without additional information about the load, one is not able to correctly differentiate between structural dynamics and the dynamics of the load. In several applications, e.g. turbines and helicopters, it was observed that because of periodical loads one was unable to correctly identify one or multiple modes. Transmissibility based OMA (TOMA) is a completely different approach to OMA. By using transmissibility functions to estimate the structural dynamics of the system of interest, all influence of the load-spectrum can be eliminated. TOMA therefore allows one to identify the modal parameters without being influenced by the presence of periodical loads, such as harmonics. One of the difficulties of TOMA is that the analyst is required to find two independent datasets, each associated with a different loading condition of the system of interest. This poses a dilemma for TOMA; how can an analyst identify two different loading conditions when little is known about the loads on the system? This paper tackles that problem by assuming that the loading conditions vary continuously over time, e.g. the changing wind directions. From this assumption TOMA is developed into a time-varying framework. This development allows TOMA to cope with continuously changing loading conditions. The time-varying framework also enables the identification of the modal parameters from a single dataset. Moreover, the time-varying TOMA approach can be implemented in such a way that the analyst no longer has to identify different loading conditions. For these combined reasons the time-varying TOMA is less dependent on the user and requires less testing time than the earlier TOMA-technique.
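
    The basic property exploited by transmissibility-based OMA, that a ratio of response spectra cancels the unknown load spectrum including its harmonics, can be illustrated with two synthetic response channels; the fictitious transfer paths, sampling rate, and cross-spectral estimator below are assumptions and do not reproduce the time-varying TOMA algorithm itself.

        import numpy as np
        from scipy.signal import csd, welch

        fs = 256.0
        t = np.arange(0, 120, 1 / fs)
        rng = np.random.default_rng(3)

        # Unknown operational load: broadband noise plus a strong 25 Hz harmonic.
        load = rng.standard_normal(t.size) + 5.0 * np.sin(2 * np.pi * 25 * t)

        # Two response channels through different (assumed) FIR transfer paths, plus sensor noise.
        x1 = np.convolve(load, [0.6, 0.3, 0.1], mode="same") + 0.1 * rng.standard_normal(t.size)
        x2 = np.convolve(load, [0.2, 0.5, 0.3], mode="same") + 0.1 * rng.standard_normal(t.size)

        # Transmissibility estimate T12(f) = S_x1x2(f) / S_x2x2(f): the load spectrum cancels.
        f, s12 = csd(x1, x2, fs=fs, nperseg=1024)
        _, s22 = welch(x2, fs=fs, nperseg=1024)
        T12 = s12 / s22
        print(f"|T12| near the 25 Hz harmonic: {np.abs(T12[np.argmin(np.abs(f - 25))]):.2f}")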

  6. Independent Review of Simulation of Net Infiltration for Present-Day and Potential Future Climates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Review Panel: Soroosh Sorooshian, Ph.D., Panel Chairperson, University of California, Irvine; Jan M. H. Hendrickx, Ph.D., New Mexico Institute of Mining and Technology; Binayak P. Mohanty, Ph.D., Texas A&M University

    The DOE Office of Civilian Radioactive Waste Management (OCRWM) tasked Oak Ridge Institute for Science and Education (ORISE) with providing an independent expert review of the documented model and prediction results for net infiltration of water into the unsaturated zone at Yucca Mountain. The specific purpose of the model, as documented in the report MDL-NBS-HS-000023, Rev. 01, is “to provide a spatial representation, including epistemic and aleatory uncertainty, of the predicted mean annual net infiltration at the Yucca Mountain site ...” (p. 1-1). The expert review panel assembled by ORISE concluded that the model report does not provide a technically credible spatial representation of net infiltration at Yucca Mountain. Specifically, the ORISE Review Panel found that: • A critical lack of site-specific meteorological, surface, and subsurface information prevents verification of (i) the net infiltration estimates, (ii) the uncertainty estimates of parameters caused by their spatial variability, and (iii) the assumptions used by the modelers (ranges and distributions) for the characterization of parameters. The paucity of site-specific data used by the modeling team for model implementation and validation is a major deficiency in this effort. • The model does not incorporate at least one potentially important hydrologic process. Subsurface lateral flow is not accounted for by the model, and the assumption that the effect of subsurface lateral flow is negligible is not adequately justified. This issue is especially critical for the wetter climate periods. This omission may be one reason the model results appear to underestimate net infiltration beneath wash environments and therefore imprecisely represent the spatial variability of net infiltration. • While the model uses assumptions consistently, such as uniform soil depths and a constant vegetation rooting depth, such assumptions may not be appropriate for this net infiltration simulation because they oversimplify a complex landscape and associated hydrologic processes, especially since the model assumptions have not been adequately corroborated by field and laboratory observations at Yucca Mountain.

  7. Model-independent constraints on dark matter annihilation in dwarf spheroidal galaxies

    NASA Astrophysics Data System (ADS)

    Boddy, Kimberly K.; Kumar, Jason; Marfatia, Danny; Sandick, Pearl

    2018-05-01

    We present a general, model-independent formalism for determining bounds on the production of photons in dwarf spheroidal galaxies via dark matter annihilation, applicable to any set of assumptions about dark matter particle physics or astrophysics. As an illustration, we analyze gamma-ray data from the Fermi Large Area Telescope to constrain a variety of nonstandard dark matter models, several of which have not previously been studied in the context of dwarf galaxy searches.

  8. Independent Research and Independent Exploratory Development FY 1985

    DTIC Science & Technology

    1986-01-01

    [Garbled OCR excerpt. The recoverable fragments mention a Maintenance System (TMS) that maintains a set of assumptions within an environment, and note that how to utilize a TMS within Designer has not been clearly defined.]

  9. Accuracy Evaluation of the Unified P-Value from Combining Correlated P-Values

    PubMed Central

    Alves, Gelio; Yu, Yi-Kuo

    2014-01-01

    Meta-analysis methods that combine p-values into a single unified p-value are frequently employed to improve confidence in hypothesis testing. An assumption made by most meta-analysis methods is that the p-values to be combined are independent, which may not always be true. To investigate the accuracy of the unified p-value from combining correlated p-values, we have evaluated a family of statistical methods that combine: independent, weighted independent, correlated, and weighted correlated p-values. Statistical accuracy evaluation by combining simulated correlated p-values showed that correlation among p-values can have a significant effect on the accuracy of the combined p-value obtained. Among the statistical methods evaluated, those that weight p-values compute more accurate combined p-values than those that do not. Also, statistical methods that utilize the correlation information have the best performance, producing significantly more accurate combined p-values. In our study we have demonstrated that statistical methods that combine p-values based on the assumption of independence can produce inaccurate p-values when combining correlated p-values, even when the p-values are only weakly correlated. Therefore, to prevent drawing false conclusions during hypothesis testing, our study advises that caution be used when interpreting the p-value obtained from combining p-values of unknown correlation. However, when the correlation information is available, the weighting-capable statistical method, first introduced by Brown and recently modified by Hou, seems to perform the best amongst the methods investigated. PMID:24663491
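
    The core point, that independence-based combination rules become miscalibrated under correlation, can be checked with a short simulation of Fisher's method applied to correlated p-values under the global null; the correlation level and normal-score construction below are assumptions for illustration and do not implement the Brown/Hou weighting procedure evaluated in the paper.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        k, rho, n_sims = 5, 0.5, 20000       # number of p-values and their latent correlation (assumed)

        # Correlated test statistics under the global null hypothesis.
        cov = np.full((k, k), rho) + (1 - rho) * np.eye(k)
        z = rng.multivariate_normal(np.zeros(k), cov, size=n_sims)
        p = 2 * stats.norm.sf(np.abs(z))     # two-sided p-values, correlated across tests

        # Fisher's method assumes independence: -2 * sum(log p) ~ chi-square with 2k df.
        fisher_stat = -2 * np.log(p).sum(axis=1)
        combined = stats.chi2.sf(fisher_stat, df=2 * k)
        print(f"fraction of combined p-values < 0.05 under the null: {(combined < 0.05).mean():.3f}")
        # With truly independent p-values this fraction would be close to 0.05.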

  10. THE MODELING OF THE FATE AND TRANSPORT OF ENVIRONMENTAL POLLUTANTS

    EPA Science Inventory

    Current models that predict the fate of organic compounds released to the environment are based on the assumption that these compounds exist exclusively as neutral species. This assumption is untrue under many environmental conditions, as some molecules can exist as cations, anio...

  11. Wealth distribution on complex networks

    NASA Astrophysics Data System (ADS)

    Ichinomiya, Takashi

    2012-12-01

    We study the wealth distribution of the Bouchaud-Mézard model on complex networks. It is known from numerical simulations that this distribution depends on the topology of the network; however, no one has succeeded in explaining it. Using “adiabatic” and “independent” assumptions along with the central-limit theorem, we derive equations that determine the probability distribution function. The results are compared to those of simulations for various networks. We find good agreement between our theory and the simulations, except for the case of Watts-Strogatz networks with a low rewiring rate, due to the breakdown of the “independent” assumption.
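
    The Bouchaud-Mézard dynamics analyzed here can be simulated directly with Euler-Maruyama updates on a graph; the network density, exchange coupling J, volatility, and normalization below are arbitrary assumptions used only to illustrate the model, not the paper's analytical derivation.

        import numpy as np

        rng = np.random.default_rng(5)
        n, J, sigma, dt, steps = 200, 0.05, 0.3, 0.01, 10000   # all parameter values assumed

        # Random (Erdos-Renyi-like) exchange network.
        A = (rng.random((n, n)) < 0.05).astype(float)
        A = np.triu(A, 1)
        A = A + A.T

        W = np.ones(n)
        for _ in range(steps):
            noise = sigma * np.sqrt(dt) * rng.standard_normal(n)   # multiplicative investment noise
            exchange = J * (A @ W - A.sum(axis=1) * W)             # sum_j A_ij (W_j - W_i)
            W = np.maximum(W + exchange * dt + W * noise, 1e-12)
            W *= n / W.sum()                                       # keep mean wealth at 1

        gini = np.abs(W[:, None] - W[None, :]).mean() / (2 * W.mean())
        print(f"Gini coefficient of the simulated wealth distribution: {gini:.2f}")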

  12. Reinforcing loose foundation stones in trait-based plant ecology.

    PubMed

    Shipley, Bill; De Bello, Francesco; Cornelissen, J Hans C; Laliberté, Etienne; Laughlin, Daniel C; Reich, Peter B

    2016-04-01

    The promise of "trait-based" plant ecology is one of generalized prediction across organizational and spatial scales, independent of taxonomy. This promise is a major reason for the increased popularity of this approach. Here, we argue that some important foundational assumptions of trait-based ecology have not received sufficient empirical evaluation. We identify three such assumptions and, where possible, suggest methods of improvement: (i) traits are functional to the degree that they determine individual fitness, (ii) intraspecific variation in functional traits can be largely ignored, and (iii) functional traits show general predictive relationships to measurable environmental gradients.

  13. Effects of rotational symmetry breaking in polymer-coated nanopores

    NASA Astrophysics Data System (ADS)

    Osmanović, D.; Kerr-Winter, M.; Eccleston, R. C.; Hoogenboom, B. W.; Ford, I. J.

    2015-01-01

    The statistical theory of polymers tethered around the inner surface of a cylindrical channel has traditionally employed the assumption that the equilibrium density of the polymers is independent of the azimuthal coordinate. However, simulations have shown that this rotational symmetry can be broken when there are attractive interactions between the polymers. We investigate the phases that emerge in these circumstances, and we quantify the effect of the symmetry assumption on the phase behavior of the system. In the absence of this assumption, one can observe large differences in the equilibrium densities between the rotationally symmetric case and the non-rotationally symmetric case. A simple analytical model is developed that illustrates the driving thermodynamic forces responsible for this symmetry breaking. Our results have implications for the current understanding of the behavior of polymers in cylindrical nanopores.

  14. Effects of rotational symmetry breaking in polymer-coated nanopores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osmanović, D.; Hoogenboom, B. W.; Ford, I. J.

    2015-01-21

    The statistical theory of polymers tethered around the inner surface of a cylindrical channel has traditionally employed the assumption that the equilibrium density of the polymers is independent of the azimuthal coordinate. However, simulations have shown that this rotational symmetry can be broken when there are attractive interactions between the polymers. We investigate the phases that emerge in these circumstances, and we quantify the effect of the symmetry assumption on the phase behavior of the system. In the absence of this assumption, one can observe large differences in the equilibrium densities between the rotationally symmetric case and the non-rotationally symmetric case. A simple analytical model is developed that illustrates the driving thermodynamic forces responsible for this symmetry breaking. Our results have implications for the current understanding of the behavior of polymers in cylindrical nanopores.

  15. Effects of real fluid properties on axial turbine meanline design and off-design analysis

    NASA Astrophysics Data System (ADS)

    MacLean, Cameron

    The effects of real fluid properties on axial turbine meanline analysis have been investigated employing two meanline analysis codes, namely Turbine Meanline Design (TMLD) and Turbine Meanline Off-Design (TMLO). The previously developed TMLD code assumed the working fluid was an ideal gas. Therefore it was modified to use real fluid properties. TMLO was then developed from TMLD. Both codes can be run using either the ideal gas assumption or real fluid properties. TMLD was employed for the meanline design of several axial turbines for a range of inlet conditions, using both the ideal gas assumption and real fluid properties. The resulting designs were compared to see the effects of real fluid properties. Meanline designs, generated using the ideal gas assumption, were then analysed with TMLO using real fluid properties. This was done over a range of inlet conditions that correspond to varying degrees of departure from ideal gas conditions. The goal was to show how machines designed with the ideal gas assumption would perform with the real working fluid. The working fluid used in both investigations was supercritical carbon dioxide. Results from the investigation show that real fluid properties had a strong effect on the gas path areas of the turbine designs as well as the performance of turbines designed using the ideal gas assumption. Specifically, power output and the velocities of the working fluid were affected. It was found that accounting for losses tended to lessen the effects of the real fluid properties.

  16. Statistical Software Engineering

    DTIC Science & Technology

    1998-04-13

    Excerpted reference fragments: "... multiversion software subject to coincident errors." IEEE Trans. Software Eng. SE-11:1511-1517; Eckhardt, D.E., A.K. Caglayan, J.C. Knight, L.D. Lee, D.F. ...; Knight, J.C. and N.G. Leveson. 1986. Experimental evaluation of the assumption of independence in multiversion software. IEEE Trans. Software Eng.

  17. On the Empirical Importance of the Conditional Skewness Assumption in Modelling the Relationship between Risk and Return

    NASA Astrophysics Data System (ADS)

    Pipień, M.

    2008-09-01

    We present the results of an application of Bayesian inference in testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we built a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations, and also a constructive method. Based on the daily excess returns of the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications as well as the posterior analysis of the positive sign of the tested relationship.

  18. Sparse SPM: Group Sparse-dictionary learning in SPM framework for resting-state functional connectivity MRI analysis.

    PubMed

    Lee, Young-Beom; Lee, Jeonghyeon; Tak, Sungho; Lee, Kangjoo; Na, Duk L; Seo, Sang Won; Jeong, Yong; Ye, Jong Chul

    2016-01-15

    Recent studies of functional connectivity MR imaging have revealed that the default-mode network activity is disrupted in diseases such as Alzheimer's disease (AD). However, there is not yet a consensus on the preferred method for resting-state analysis. Because the brain is reported to have complex interconnected networks according to graph theoretical analysis, the independence assumption, as in the popular independent component analysis (ICA) approach, often does not hold. Here, rather than using the independence assumption, we present a new statistical parameter mapping (SPM)-type analysis method based on a sparse graph model where temporal dynamics at each voxel position are described as a sparse combination of global brain dynamics. In particular, a new concept of a spatially adaptive design matrix has been proposed to represent local connectivity that shares the same temporal dynamics. If we further assume that local network structures within a group are similar, the estimation problem of global and local dynamics can be solved using sparse dictionary learning for the concatenated temporal data across subjects. Moreover, under the homoscedasticity variance assumption across subjects and groups that is often used in SPM analysis, the aforementioned individual and group analyses using sparse dictionary learning can be accurately modeled by a mixed-effect model, which also facilitates a standard SPM-type group-level inference using summary statistics. Using an extensive resting fMRI data set obtained from normal, mild cognitive impairment (MCI), and Alzheimer's disease patient groups, we demonstrated that the changes in the default mode network extracted by the proposed method are more closely correlated with the progression of Alzheimer's disease. Copyright © 2015 Elsevier Inc. All rights reserved.
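
    The central step, coding each voxel's time course as a sparse combination of a small set of learned global temporal components rather than assuming independent components, can be sketched with scikit-learn's DictionaryLearning; the synthetic data, component count, and sparsity penalty below are assumptions, and the sketch omits the spatially adaptive design matrix and the group-level mixed-effect model described in the abstract.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        rng = np.random.default_rng(6)
        n_timepoints, n_voxels, n_components = 120, 300, 5     # assumed dimensions

        # Synthetic "global dynamics": a few temporal sources mixed sparsely into voxels.
        sources = rng.standard_normal((n_timepoints, n_components))
        mixing = rng.standard_normal((n_components, n_voxels))
        mixing *= rng.random((n_components, n_voxels)) < 0.2   # sparse spatial loadings
        data = sources @ mixing + 0.1 * rng.standard_normal((n_timepoints, n_voxels))

        # Learn a temporal dictionary; each voxel is then a sparse combination of its atoms.
        dl = DictionaryLearning(n_components=n_components, alpha=1.0, max_iter=100,
                                transform_algorithm="lasso_lars", transform_alpha=0.5,
                                random_state=0)
        codes = dl.fit_transform(data.T)        # rows = voxels, columns = dictionary atoms
        recon = codes @ dl.components_          # components_ holds the temporal atoms
        rel_err = np.linalg.norm(data.T - recon) / np.linalg.norm(data.T)
        print(f"fraction of nonzero coefficients per voxel: {(codes != 0).mean():.2f}, "
              f"relative reconstruction error: {rel_err:.2f}")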

  19. Numerical Simulation of Transitional, Hypersonic Flows using a Hybrid Particle-Continuum Method

    NASA Astrophysics Data System (ADS)

    Verhoff, Ashley Marie

    Analysis of hypersonic flows requires consideration of multiscale phenomena due to the range of flight regimes encountered, from rarefied conditions in the upper atmosphere to fully continuum flow at low altitudes. At transitional Knudsen numbers there are likely to be localized regions of strong thermodynamic nonequilibrium effects that invalidate the continuum assumptions of the Navier-Stokes equations. Accurate simulation of these regions, which include shock waves, boundary and shear layers, and low-density wakes, requires a kinetic theory-based approach where no prior assumptions are made regarding the molecular distribution function. Because of the nature of these types of flows, there is much to be gained in terms of both numerical efficiency and physical accuracy by developing hybrid particle-continuum simulation approaches. The focus of the present research effort is the continued development of the Modular Particle-Continuum (MPC) method, where the Navier-Stokes equations are solved numerically using computational fluid dynamics (CFD) techniques in regions of the flow field where continuum assumptions are valid, and the direct simulation Monte Carlo (DSMC) method is used where strong thermodynamic nonequilibrium effects are present. Numerical solutions of transitional, hypersonic flows are thus obtained with increased physical accuracy relative to CFD alone, and improved numerical efficiency is achieved in comparison to DSMC alone because this more computationally expensive method is restricted to those regions of the flow field where it is necessary to maintain physical accuracy. In this dissertation, a comprehensive assessment of the physical accuracy of the MPC method is performed, leading to the implementation of a non-vacuum supersonic outflow boundary condition in particle domains, and more consistent initialization of DSMC simulator particles along hybrid interfaces. The relative errors between MPC and full DSMC results are greatly reduced as a direct result of these improvements. Next, a new parameter for detecting rotational nonequilibrium effects is proposed and shown to offer advantages over other continuum breakdown parameters, achieving further accuracy gains. Lastly, the capabilities of the MPC method are extended to accommodate multiple chemical species in rotational nonequilibrium, each of which is allowed to equilibrate independently, enabling application of the MPC method to more realistic atmospheric flows.

  20. Genetic dissection of the consensus sequence for the class 2 and class 3 flagellar promoters

    PubMed Central

    Wozniak, Christopher E.; Hughes, Kelly T.

    2008-01-01

    Computational searches for DNA binding sites often utilize consensus sequences. These search models make assumptions that the frequency of a base pair in an alignment relates to the base pair’s importance in binding and presume that base pairs contribute independently to the overall interaction with the DNA binding protein. These two assumptions have generally been found to be accurate for DNA binding sites. However, these assumptions are often not satisfied for promoters, which are involved in additional steps in transcription initiation after RNA polymerase has bound to the DNA. To test these assumptions for the flagellar regulatory hierarchy, class 2 and class 3 flagellar promoters were randomly mutagenized in Salmonella. Important positions were then saturated for mutagenesis and compared to scores calculated from the consensus sequence. Double mutants were constructed to determine how mutations combined for each promoter type. Mutations in the binding site for FlhD4C2, the activator of class 2 promoters, better satisfied the assumptions for the binding model than did mutations in the class 3 promoter, which is recognized by the σ28 transcription factor. These in vivo results indicate that the activator sites within flagellar promoters can be modeled using simple assumptions but that the DNA sequences recognized by the flagellar sigma factor require more complex models. PMID:18486950
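
    The consensus-sequence search model whose assumptions are being tested here, a position weight matrix in which each base pair contributes independently to the score, can be written in a few lines; the toy alignment and pseudocount below are invented and are not the class 2 or class 3 promoter data.

        import numpy as np

        BASES = "ACGT"
        # Toy alignment of binding sites (assumed), standing in for real promoter alignments.
        sites = ["TTAACG", "TTAACG", "TAAACG", "TTATCG", "TTAACA"]

        counts = np.ones((len(sites[0]), 4))                  # pseudocount of 1 per base
        for s in sites:
            for pos, base in enumerate(s):
                counts[pos, BASES.index(base)] += 1
        freqs = counts / counts.sum(axis=1, keepdims=True)
        pwm = np.log2(freqs / 0.25)                           # log-odds against a uniform background

        def score(seq):
            """Sum of per-position log-odds: valid only if positions contribute independently."""
            return sum(pwm[pos, BASES.index(base)] for pos, base in enumerate(seq))

        print(f"consensus-like site score: {score('TTAACG'):.2f}")
        print(f"variant site score      : {score('TTGACG'):.2f}")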

  1. Automatics adjusment on private pension fund for Asian Mathematics Conferences

    NASA Astrophysics Data System (ADS)

    Purwadi, J.

    2017-10-01

    This paper discusses how an automatic adjustment mechanism works in a defined-benefit pension fund when conditions fall outside the assumptions that have been determined. The automatic adjustment referred to here is intended to anticipate changes in economic and demographic conditions. The method discussed in this paper is the indexing of life expectancy. The paper also discusses how the method applies to a private pension fund and what the impact of a change in life expectancy is on the benefit.

  2. Principal Score Methods: Assumptions, Extensions, and Practical Considerations

    ERIC Educational Resources Information Center

    Feller, Avi; Mealli, Fabrizia; Miratrix, Luke

    2017-01-01

    Researchers addressing posttreatment complications in randomized trials often turn to principal stratification to define relevant assumptions and quantities of interest. One approach for the subsequent estimation of causal effects in this framework is to use methods based on the "principal score," the conditional probability of belonging…

  3. Measurement-Device-Independent Quantum Cryptography

    NASA Astrophysics Data System (ADS)

    Tang, Zhiyuan

    Quantum key distribution (QKD) enables two legitimate parties to share a secret key even in the presence of an eavesdropper. The unconditional security of QKD is based on the fundamental laws of quantum physics. Original security proofs of QKD are based on a few assumptions, e.g., perfect single photon sources and perfect single-photon detectors. However, practical implementations of QKD systems do not fully comply with such assumptions due to technical limitations. The gap between theory and implementations leads to security loopholes in most QKD systems, and several attacks have been launched on sophisticated QKD systems. In particular, the detectors have been found to be the most vulnerable part of QKD. Much effort has been put into building side-channel-free QKD systems. Solutions such as security patches and device-independent QKD have been proposed. However, the former are normally ad-hoc, and cannot close unidentified loopholes. The latter, while having the advantage of removing all assumptions on devices, is impractical to implement today. Measurement-device-independent QKD (MDI-QKD) turns out to be a promising solution to the security problem of QKD. In MDI-QKD, all security loopholes, including those yet-to-be-discovered, have been removed from the detectors, the most critical part in QKD. In this thesis, we investigate issues related to the practical implementation and security of MDI-QKD. We first present a demonstration of polarization-encoding MDI-QKD. Taking the finite-key effect into account, we achieve a secret key rate of 0.005 bit per second (bps) over 10 km of spooled telecom fiber, and a 1600-bit key is distributed. This work, together with other demonstrations, shows the practicality of MDI-QKD. Next we investigate a critical assumption of MDI-QKD: perfect state preparation. We apply the loss-tolerant QKD protocol and adapt it to MDI-QKD to quantify information leakage due to imperfect state preparation. We then present an experimental demonstration of MDI-QKD over 10 km and 40 km of spooled fiber, which for the first time considers the impact of inaccurate polarization state preparation on the secret key rate. This would not have been possible under previous security proofs, given the same amount of state preparation flaws.

  4. Influence of model assumptions about HIV disease progression after initiating or stopping treatment on estimates of infections and deaths averted by scaling up antiretroviral therapy

    PubMed Central

    Sucharitakul, Kanes; Boily, Marie-Claude; Dimitrov, Dobromir

    2018-01-01

    Background Many mathematical models have investigated the population-level impact of expanding antiretroviral therapy (ART), using different assumptions about HIV disease progression on ART and among ART dropouts. We evaluated the influence of these assumptions on model projections of the number of infections and deaths prevented by expanded ART. Methods A new dynamic model of HIV transmission among men who have sex with men (MSM) was developed, which incorporated each of four alternative assumptions about disease progression used in previous models: (A) ART slows disease progression; (B) ART halts disease progression; (C) ART reverses disease progression by increasing CD4 count; (D) ART reverses disease progression, but disease progresses rapidly once treatment is stopped. The model was independently calibrated to HIV prevalence and ART coverage data from the United States under each progression assumption in turn. New HIV infections and HIV-related deaths averted over 10 years were compared for fixed ART coverage increases. Results Little absolute difference (<7 percentage points (pp)) in HIV infections averted over 10 years was seen between progression assumptions for the same increases in ART coverage (varied between 33% and 90%) if ART dropouts reinitiated ART at the same rate as ART-naïve MSM. Larger differences in the predicted fraction of HIV-related deaths averted were observed (up to 15pp). However, if ART dropouts could only reinitiate ART at CD4<200 cells/μl, assumption C predicted substantially larger fractions of HIV infections and deaths averted than other assumptions (up to 20pp and 37pp larger, respectively). Conclusion Different disease progression assumptions on and post-ART interruption did not affect the fraction of HIV infections averted with expanded ART, unless ART dropouts only re-initiated ART at low CD4 counts. Different disease progression assumptions had a larger influence on the fraction of HIV-related deaths averted with expanded ART. PMID:29554136

  5. Deriving robust noncontextuality inequalities from algebraic proofs of the Kochen-Specker theorem: the Peres-Mermin square

    NASA Astrophysics Data System (ADS)

    Krishna, Anirudh; Spekkens, Robert W.; Wolfe, Elie

    2017-12-01

    When a measurement is compatible with each of two other measurements that are incompatible with one another, these define distinct contexts for the given measurement. The Kochen-Specker theorem rules out models of quantum theory that satisfy a particular assumption of context-independence: that sharp measurements are assigned outcomes both deterministically and independently of their context. This notion of noncontextuality is not suited to a direct experimental test because realistic measurements always have some degree of unsharpness due to noise. However, a generalized notion of noncontextuality has been proposed that is applicable to any experimental procedure, including unsharp measurements, but also preparations as well, and for which a quantum no-go result still holds. According to this notion, the model need only specify a probability distribution over the outcomes of a measurement in a context-independent way, rather than specifying a particular outcome. It also implies novel constraints of context-independence for the representation of preparations. In this article, we describe a general technique for translating proofs of the Kochen-Specker theorem into inequality constraints on realistic experimental statistics, the violation of which witnesses the impossibility of a noncontextual model. We focus on algebraic state-independent proofs, using the Peres-Mermin square as our illustrative example. Our technique yields the necessary and sufficient conditions for a particular set of correlations (between the preparations and the measurements) to admit a noncontextual model. The inequalities thus derived are demonstrably robust to noise. We specify how experimental data must be processed in order to achieve a test of these inequalities. We also provide a criticism of prior proposals for experimental tests of noncontextuality based on the Peres-Mermin square.
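
    The algebraic fact driving the Peres-Mermin construction used here, that every row of the 3 x 3 array of commuting two-qubit observables multiplies to +I while the columns multiply to +I, +I, -I, so no context-independent assignment of deterministic ±1 outcomes exists, is easy to verify numerically; the check below is a standalone verification of the square's structure, not the paper's noise-robust inequality derivation.

        import numpy as np

        I2 = np.eye(2, dtype=complex)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)
        kron = np.kron

        # One standard arrangement of the Peres-Mermin square of two-qubit observables.
        square = [
            [kron(X, I2), kron(I2, X), kron(X, X)],
            [kron(I2, Z), kron(Z, I2), kron(Z, Z)],
            [kron(X, Z), kron(Z, X), kron(Y, Y)],
        ]

        def product_sign(ops):
            """Return +1 or -1 such that the ordered product of ops equals sign * identity."""
            prod = np.linalg.multi_dot(ops)
            for sign in (+1, -1):
                if np.allclose(prod, sign * np.eye(4)):
                    return sign
            raise ValueError("product is not proportional to the identity")

        rows = [product_sign(row) for row in square]
        cols = [product_sign([square[r][c] for r in range(3)]) for c in range(3)]
        print("row products   :", rows)   # expected: [1, 1, 1]
        print("column products:", cols)   # expected: [1, 1, -1]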

  6. Were the first springtails semi-aquatic? A phylogenetic approach by means of 28S rDNA and optimization alignment.

    PubMed Central

    D'Haese, Cyrille A

    2002-01-01

    Emergence from an aquatic environment to the land is one of the major evolutionary transitions within the arthropods. It is often considered that the first hexapods, and in particular the first springtails, were semi-aquatic and this assumption drives evolutionary models towards particular conclusions. To address the question of the ecological origin of the springtails, phylogenetic analyses by optimization alignment were performed on D1 and D2 regions of the 28S rDNA for 55 collembolan exemplars and eight outgroups. Relationships among the orders Symphypleona, Entomobryomorpha and Poduromorpha are inferred. More specifically, a robust hypothesis is provided for the subfamilial relationships within the order Poduromorpha. Contrary to previous statements, the semi-aquatic species Podura aquatica is not basal or 'primitive', but well nested in the Poduromorpha. The analyses performed for the 24 different weighting schemes yielded the same conclusion: semi-aquatic ecology is not ancestral for the springtails. It is a derived condition that evolved independently several times. The adaptation for semi-aquatic life is better interpreted as a step towards independence from land, rather than indication of an aquatic origin. PMID:12061958

  7. Buckets: Smart Objects for Digital Libraries

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.

    2001-01-01

    Current discussion of digital libraries (DLs) is often dominated by the merits of the respective storage, search and retrieval functionality of archives, repositories, search engines, search interfaces and database systems. While these technologies are necessary for information management, the information content is more important than the systems used for its storage and retrieval. Digital information should have the same long-term survivability prospects as traditional hardcopy information and should be protected to the extent possible from evolving search engine technologies and vendor vagaries in database management systems. Information content and information retrieval systems should progress on independent paths and make limited assumptions about the status or capabilities of the other. Digital information can achieve independence from archives and DL systems through the use of buckets. Buckets are an aggregative, intelligent construct for publishing in DLs. Buckets allow the decoupling of information content from information storage and retrieval. Buckets exist within the Smart Objects and Dumb Archives model for DLs in that many of the functionalities and responsibilities traditionally associated with archives are pushed down (making the archives dumber) into the buckets (making them smarter). Some of the responsibilities imbued to buckets are the enforcement of their terms and conditions, and maintenance and display of their contents.

  8. Tort Liability of School Districts, Officers, and Employees

    ERIC Educational Resources Information Center

    Knowles, Laurence

    1972-01-01

    Discusses the tactics of tort litigation in the public area, and outlines 1972 State and Federal court cases involving such tort liability issues as (1) governmental immunity, (2) negligence and foreseeability, (3) assumption of risk, (4) contributory negligence, and (5) independent intervening cause. (JF)

  9. MOVES2014: Heavy-duty Vehicle Emissions Report

    EPA Science Inventory

    This report updates MOVES methods for evaluating current HD diesel NOx emission rates based on comparisons to independent data from EPA’s IUVP and Houston drayage programs. The report also details methods/assumptions made for HD gasoline HC, CO and NOx emission rates using reduct...

  10. Can organizations benefit from worksite health promotion?

    PubMed Central

    Leviton, L C

    1989-01-01

    A decision-analytic model was developed to project the future effects of selected worksite health promotion activities on employees' likelihood of chronic disease and injury and on employer costs due to illness. The model employed a conservative set of assumptions and a limited five-year time frame. Under these assumptions, hypertension control and seat belt campaigns prevent a substantial amount of illness, injury, and death. Sensitivity analysis indicates that these two programs pay for themselves and under some conditions show a modest savings to the employer. Under some conditions, smoking cessation programs pay for themselves, preventing a modest amount of illness and death. Cholesterol reduction by behavioral means does not pay for itself under these assumptions. These findings imply priorities in prevention for employer and employee alike. PMID:2499556

  11. Zipf’s word frequency law in natural language: A critical review and future directions

    PubMed Central

    2014-01-01

    The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data. PMID:24664880

  12. Model-independent comparison of annual modulation and total rate with direct detection experiments

    NASA Astrophysics Data System (ADS)

    Kahlhoefer, Felix; Reindl, Florian; Schäffner, Karoline; Schmidt-Hoberg, Kai; Wild, Sebastian

    2018-05-01

    The relative sensitivity of different direct detection experiments depends sensitively on the astrophysical distribution and particle physics nature of dark matter, prohibiting a model-independent comparison. The situation changes fundamentally if two experiments employ the same target material. We show that in this case one can compare measurements of an annual modulation and exclusion bounds on the total rate while making no assumptions on astrophysics and no (or only very general) assumptions on particle physics. In particular, we show that the dark matter interpretation of the DAMA/LIBRA signal can be conclusively tested with COSINUS, a future experiment employing the same target material. We find that if COSINUS excludes a dark matter scattering rate of about 0.01 kg‑1 days‑1 with an energy threshold of 1.8 keV and resolution of 0.2 keV, it will rule out all explanations of DAMA/LIBRA in terms of dark matter scattering off sodium and/or iodine.

  13. Survival estimation and the effects of dependency among animals

    USGS Publications Warehouse

    Schmutz, Joel A.; Ward, David H.; Sedinger, James S.; Rexstad, Eric A.

    1995-01-01

    Survival models assume that fates of individuals are independent, yet the robustness of this assumption has been poorly quantified. We examine how empirically derived estimates of the variance of survival rates are affected by dependency in survival probability among individuals. We used Monte Carlo simulations to generate known amounts of dependency among pairs of individuals and analyzed these data with Kaplan-Meier and Cormack-Jolly-Seber models. Dependency significantly increased these empirical variances as compared to theoretically derived estimates of variance from the same populations. Using resighting data from 168 pairs of black brant, we used a resampling procedure and program RELEASE to estimate empirical and mean theoretical variances. We estimated that the relationship between paired individuals caused the empirical variance of the survival rate to be 155% larger than the empirical variance for unpaired individuals. Monte Carlo simulations and use of this resampling strategy can provide investigators with information on how robust their data are to this common assumption of independent survival probabilities.
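
    The following is a minimal Python sketch of the kind of Monte Carlo check described above; it is not the authors' code, and the survival rate, pairing correlation, and sample sizes are invented for illustration. Pairs share a fate with probability rho, and the empirical variance of the estimated survival rate across replicates is compared with the binomial variance that assumes independent fates.

        import numpy as np

        rng = np.random.default_rng(1)
        n_pairs, s, rho, n_rep = 100, 0.8, 0.5, 5000   # illustrative values only

        estimates = []
        for _ in range(n_rep):
            first = rng.random(n_pairs) < s                     # fate of the first member of each pair
            shared = rng.random(n_pairs) < rho                  # with probability rho the partner shares that fate
            second = np.where(shared, first, rng.random(n_pairs) < s)
            estimates.append(np.concatenate([first, second]).mean())

        empirical_var = np.var(estimates)                       # variance across simulated replicates
        theoretical_var = s * (1 - s) / (2 * n_pairs)           # variance if all 2 * n_pairs fates were independent
        print(empirical_var / theoretical_var)                  # ratios above 1 reflect dependency-inflated variance

    With rho = 0 the ratio stays near 1; increasing rho inflates it, which is the qualitative effect reported above for paired brant.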

  14. Spatial frequency dependence of target signature for infrared performance modeling

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd; Olson, Jeffrey

    2011-05-01

    The standard model used to describe the performance of infrared imagers is the U.S. Army imaging system target acquisition model, based on the targeting task performance metric. The model is characterized by the resolution and sensitivity of the sensor as well as the contrast and task difficulty of the target set. The contrast of the target is defined as a spatial average contrast. The model treats the contrast of the target set as spatially white, or constant, over the bandlimit of the sensor. Previous experiments have shown that this assumption is valid under normal conditions and typical target sets. However, outside of these conditions, the treatment of target signature can become the limiting factor affecting model performance accuracy. This paper examines target signature more carefully. The spatial frequency dependence of the standard U.S. Army RDECOM CERDEC Night Vision 12 and 8 tracked vehicle target sets is described. The results of human perception experiments are modeled and evaluated using both frequency dependent and independent target signature definitions. Finally the function of task difficulty and its relationship to a target set is discussed.

  15. RG-invariant sum rule in a generalization of anomaly-mediated SUSY-breaking models

    NASA Astrophysics Data System (ADS)

    Carena, Marcela; Huitu, Katri; Kobayashi, Tatsuo

    2001-01-01

    We study a generalization of anomaly-mediated supersymmetry-breaking (AMSB) scenarios, under the assumption that the effects of the high-scale theory do not completely decouple and that D-term type contributions can therefore be present. We investigate the effect of such possible D-term additional contributions to soft scalar masses by requiring that, for non-vanishing, renormalizable Yukawa couplings Y_ijk, the sum of squared soft supersymmetry breaking mass parameters, M²_ijk ≡ m_i² + m_j² + m_k², is RG-invariant, in the sense that it becomes independent of the specific ultraviolet boundary conditions as it occurs in the AMSB models. This type of model can avoid the problem of tachyonic solutions for the slepton mass spectrum present in AMSB scenarios. We implement the electroweak symmetry breaking condition and explore the sparticle spectrum associated with this framework. To show the possible diversity of the sparticle spectrum, we consider two examples, one in which the D-terms induce a common soft supersymmetry breaking mass term for all sfermion masses, and another one in which a light stop can be present in the spectrum.

  16. Communication Optimal Parallel Multiplication of Sparse Random Matrices

    DTIC Science & Technology

    2013-02-21

    Definition 2.1), and (2) the algorithm is sparsity-independent, where the computation is statically partitioned to processors independent of the sparsity...structure of the input matrices (see Definition 2.5). The second assumption applies to nearly all existing algorithms for general sparse matrix-matrix...where A and B are n × n ER(d) matrices: Definition 2.1 An ER(d) matrix is an adjacency matrix of an Erdős-Rényi graph with parameters n and d/n. That

  17. Is Seismically Determined Q an Intrinsic Material Property?

    NASA Astrophysics Data System (ADS)

    Langston, C. A.

    2003-12-01

    The seismic quality factor, Q, has a well-defined physical meaning as an intrinsic material property associated with a visco-elastic or a non-linear stress-strain constitutive relation for a material. Measurement of Q from seismic waves, however, involves interpreting seismic wave amplitude and phase as deviations from some ideal elastic wave propagation model. Thus, assumptions in the elastic wave propagation model become the basis for attributing anelastic properties to the earth continuum. Scientifically, the resulting Q model derived from seismic data is no more than a hypothesis that needs to be verified by other independent experiments concerning the continuum constitutive law and through careful examination of the truth of the assumptions in the wave propagation model. A case in point concerns the anelasticity of Mississippi embayment sediments in the central U.S. that has important implications for evaluation of earthquake strong ground motions. Previous body wave analyses using converted Sp phases have suggested that Qs is ~30 in the sediments based on simple ray theory assumptions. However, detailed modeling of 1D heterogeneity in the sediments shows that Qs cannot be resolved by the Sp data. An independent experiment concerning the amplitude decay of surface waves propagating in the sediments shows that Qs must be generally greater than 80 but is also subject to scattering attenuation. Apparent Q effects seen in direct P and S waves can also be produced by wave tunneling mechanisms in relatively simple 1D heterogeneity. Heterogeneity is a general geophysical attribute of the earth as shown by many high-resolution data sets and should be used as the first litmus test on assumptions made in seismic Q studies before a Q model can be interpreted as an intrinsic material property.

  18. DLVO Approximation Methods for Predicting the Attachment of Silver Nanoparticles to Ceramic Membranes.

    PubMed

    Mikelonis, Anne M; Youn, Sungmin; Lawler, Desmond F

    2016-02-23

    This article examines the influence of three common stabilizing agents (citrate, poly(vinylpyrrolidone) (PVP), and branched poly(ethylenimine) (BPEI)) on the attachment affinity of silver nanoparticles to ceramic water filters. Citrate-stabilized silver nanoparticles were found to have the highest attachment affinity (under conditions in which the surface potential was of opposite sign to the filter). This work demonstrates that the interaction between the electrical double layers plays a critical role in the attachment of nanoparticles to flat surfaces and, in particular, that predictions of double-layer interactions are sensitive to boundary condition assumptions (constant charge vs constant potential). The experimental deposition results can be explained when using different boundary condition assumptions for different stabilizing molecules but not when the same assumption was applied to all three types of particles. The integration of steric interactions can also explain the experimental deposition results. Particle size was demonstrated to have an effect on the predicted deposition for BPEI-stabilized particles but not for PVP.

  19. The role of cloud contamination, aerosol layer height and aerosol model in the assessment of the OMI near-UV retrievals over the ocean

    NASA Astrophysics Data System (ADS)

    Gassó, Santiago; Torres, Omar

    2016-07-01

    Retrievals of aerosol optical depth (AOD) at 388 nm over the ocean from the Ozone Monitoring Instrument (OMI) two-channel near-UV algorithm (OMAERUV) have been compared with independent AOD measurements. The analysis was carried out over the open ocean (OMI and MODerate-resolution Imaging Spectrometer (MODIS) AOD comparisons) and over coastal and island sites (OMI and AERONET, the AErosol RObotic NETwork). Additionally, a research version of the retrieval algorithm (using MODIS and CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) information as constraints) was utilized to evaluate the sensitivity of the retrieval to different assumed aerosol properties. Overall, the comparison resulted in differences (OMI minus independent measurements) within the expected levels of uncertainty for the OMI AOD retrievals (0.1 for AOD < 0.3, 30 % for AOD > 0.3). Using examples from case studies with outliers, the reasons that led to the observed differences were examined with the specific purpose of determining whether they are related to instrument limitations (i.e., pixel size, calibration) or algorithm assumptions (such as aerosol shape, aerosol height). The analysis confirms that OMAERUV does an adequate job at rejecting cloudy scenes within the instrument's capabilities. There is residual cloud contamination in OMI pixels with quality flag 0 (the best conditions for aerosol retrieval according to the algorithm), resulting in a bias towards high AODs in OMAERUV. This bias is more pronounced at low concentrations of absorbing aerosols (AOD at 388 nm ≲ 0.5). For higher aerosol loadings, the bias remains within OMI's AOD uncertainties. In pixels where OMAERUV assigned a dust aerosol model, a fraction of them (< 20 %) had retrieved AODs significantly lower than AERONET and MODIS AODs. In a case study, a detailed examination of the aerosol height from CALIOP and the AODs from MODIS, along with sensitivity tests, was carried out by varying the different assumed parameters in the retrieval (imaginary index of refraction, size distribution, aerosol height, particle shape). It was found that the spherical shape assumption for dust in the current retrieval is the main cause of the underestimate. In addition, an example demonstrates how an incorrect assumption about the aerosol height can lead to an underestimate. Nevertheless, this is not as significant as the effect of particle shape. These findings will be incorporated in a future version of the retrieval algorithm.

  20. Osmotic Transport across Cell Membranes in Nondilute Solutions: A New Nondilute Solute Transport Equation

    PubMed Central

    Elmoazzen, Heidi Y.; Elliott, Janet A.W.; McGann, Locksley E.

    2009-01-01

    The fundamental physical mechanisms of water and solute transport across cell membranes have long been studied in the field of cell membrane biophysics. Cryobiology is a discipline that requires an understanding of osmotic transport across cell membranes under nondilute solution conditions, yet many of the currently-used transport formalisms make limiting dilute solution assumptions. While dilute solution assumptions are often appropriate under physiological conditions, they are rarely appropriate in cryobiology. The first objective of this article is to review commonly-used transport equations, and the explicit and implicit assumptions made when using the two-parameter and the Kedem-Katchalsky formalisms. The second objective of this article is to describe a set of transport equations that do not make the previous dilute solution or near-equilibrium assumptions. Specifically, a new nondilute solute transport equation is presented. Such nondilute equations are applicable to many fields including cryobiology where dilute solution conditions are not often met. An illustrative example is provided. Utilizing suitable transport equations that fit for two permeability coefficients, fits were as good as with the previous three-parameter model (which includes the reflection coefficient, σ). There is less unexpected concentration dependence with the nondilute transport equations, suggesting that some of the unexpected concentration dependence of permeability is due to the use of inappropriate transport equations. PMID:19348741

  1. Ecohydrology of a resource-conserving semiarid woodland: Effects of scale and disturbance

    USGS Publications Warehouse

    Wilcox, B.P.; Breshears, D.D.; Allen, Craig D.

    2003-01-01

    In semiarid landscapes, the linkage between runoff and vegetation is a particularly close one. In this paper we report on the results of a long-term and multiple-scale study of interactions between runoff, erosion, and vegetation in a piñon–juniper woodland in New Mexico. We use our results to address three knowledge gaps: (1) the temporal scaling relationships between precipitation and runoff; (2) the effects of spatial scale on runoff and erosion, as influenced by vegetation; and (3) the influence of disturbance on these relationships. On the basis of our results, we tested three assumptions that represent current thinking in these areas (as evidenced, for example, by explicit or implicit assumptions embedded in commonly used models). The first assumption, that aggregated precipitation can be used as a surrogate for total runoff in semiarid environments, was not verified by our findings. We found that when runoff is generated mainly by overland flow in these systems, aggregated precipitation amounts alone (by year, season, or individual event) are a poor predictor of runoff amounts. The second assumption, that at the hillslope and smaller scales runoff and erosion are independent of spatial scale, was likewise not verified. We found that the redistribution of water and sediment within the hillslope was substantial and that there was a strong and nonlinear reduction in unit-area runoff and erosion with increasing scale (our scales were slope lengths ranging from 1 m to 105 m). The third assumption, that disturbance-related increases in runoff and erosion remain constant with time, was partially verified. We found that for low-slope-gradient sites, disturbance led to accelerated runoff and erosion, and these conditions may persist for a decade or longer. On the basis of our findings, we further suggest that (a) disturbance alters the effects of scale on runoff and erosion in a predictable way—scale relationships in degraded areas will be fundamentally different from those in nondegraded areas because more runoff will escape off site and erosion rates will be much higher; and (b) there exists a slope threshold, below which semiarid landscapes will eventually recover following disturbance and above which there will be no recovery without mitigation or remediation.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roldan, Omar; Quartin, Miguel; Notari, Alessio, E-mail: oaroldan@if.ufrj.br, E-mail: notari@ffn.ub.es, E-mail: mquartin@if.ufrj.br

    The aberration and Doppler coupling effects of the Cosmic Microwave Background (CMB) were recently measured by the Planck satellite. The most straightforward interpretation leads to a direct detection of our peculiar velocity β, consistent with the measurement of the well-known dipole. In this paper we discuss the assumptions behind such an interpretation. We show that Doppler-like couplings appear from two effects: our peculiar velocity and a second order large-scale effect due to the dipolar part of the gravitational potential. We find that the two effects are exactly degenerate but only if we assume second-order initial conditions from single-field Inflation. Thus, detecting a discrepancy in the value of β from the dipole and the Doppler couplings implies the presence of a primordial non-Gaussianity. We also show that aberration-like signals likewise arise from two independent effects: our peculiar velocity and lensing due to a first order large-scale dipolar gravitational potential, independently of the Gaussianity of the initial conditions. In general such effects are not degenerate and so a discrepancy between the measured β from the dipole and aberration could be accounted for by a dipolar gravitational potential. Only through a fine-tuning of the radial profile of the potential is it possible to have a complete degeneracy with a boost effect. Finally, we discuss additional expected signatures due to integrated second-order terms, which may be further used to disentangle this scenario from a simple boost.

  3. Conditional random matrix ensembles and the stability of dynamical systems

    NASA Astrophysics Data System (ADS)

    Kirk, Paul; Rolando, Delphine M. Y.; MacLean, Adam L.; Stumpf, Michael P. H.

    2015-08-01

    Random matrix theory (RMT) has found applications throughout physics and applied mathematics, in subject areas as diverse as communications networks, population dynamics, neuroscience, and models of the banking system. Many of these analyses exploit elegant analytical results, particularly the circular law and its extensions. In order to apply these results, assumptions must be made about the distribution of matrix elements. Here we demonstrate that the choice of matrix distribution is crucial. In particular, adopting an unrealistic matrix distribution for the sake of analytical tractability is liable to lead to misleading conclusions. We focus on the application of RMT to the long-standing, and at times fractious, ‘diversity-stability debate’, which is concerned with establishing whether large complex systems are likely to be stable. Early work (and subsequent elaborations) brought RMT to bear on the debate by modelling the entries of a system’s Jacobian matrix as independent and identically distributed (i.i.d.) random variables. These analyses were successful in yielding general results that were not tied to any specific system, but relied upon a restrictive i.i.d. assumption. Other studies took an opposing approach, seeking to elucidate general principles of stability through the analysis of specific systems. Here we develop a statistical framework that reconciles these two contrasting approaches. We use a range of illustrative dynamical systems examples to demonstrate that: (i) stability probability cannot be summarily deduced from any single property of the system (e.g. its diversity); and (ii) our assessment of stability depends on adequately capturing the details of the systems analysed. Failing to condition on the structure of dynamical systems will skew our analysis and can, even for very small systems, result in an unnecessarily pessimistic diagnosis of their stability.
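
    As a small illustration of the point about matrix distributions (ours, not the authors'; the sizes, variances, and the particular sign structure are arbitrary assumptions), the largest real part of a random Jacobian's eigenvalues, the usual local-stability criterion, can be compared for i.i.d. interactions and for interactions with a predator-prey-like sign structure of the same marginal variance:

        import numpy as np

        rng = np.random.default_rng(0)
        n, sigma, d = 200, 0.05, 1.0            # community size, interaction scale, self-regulation

        def max_real_part(J):
            """Largest real part of the eigenvalues; negative means locally stable."""
            return np.linalg.eigvals(J).real.max()

        # Case 1: i.i.d. Gaussian off-diagonal interactions (the classic circular-law setting).
        J_iid = sigma * rng.standard_normal((n, n))
        np.fill_diagonal(J_iid, -d)

        # Case 2: same interaction magnitudes, but J[i, j] and J[j, i] forced to opposite signs.
        upper = np.triu(np.abs(rng.standard_normal((n, n))), 1)
        lower = np.tril(np.abs(rng.standard_normal((n, n))), -1)
        J_pp = sigma * (upper - lower)
        np.fill_diagonal(J_pp, -d)

        print(max_real_part(J_iid), max_real_part(J_pp))   # the two stability verdicts need not agree

    The point is not which structure is "more stable" in general, but that the verdict depends on distributional assumptions that an i.i.d. ensemble cannot represent.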

  4. A comprehensive analysis of the evaporation of a liquid spherical drop.

    PubMed

    Sobac, B; Talbot, P; Haut, B; Rednikov, A; Colinet, P

    2015-01-15

    In this paper, a new comprehensive analysis of a suspended drop of a pure liquid evaporating into air is presented. Based on mass and energy conservation equations, a quasi-steady model is developed including diffusive and convective transports, and considering the non-isothermia of the gas phase. The main original feature of this simple analytical model lies in the consideration of the local dependence of the physico-chemical properties of the gas on the gas temperature, which has a significant influence on the evaporation process at high temperatures. The influence of the atmospheric conditions on the interfacial evaporation flux, molar fraction and temperature is investigated. Simplified versions of the model are developed to highlight the key mechanisms governing the evaporation process. For the conditions considered in this work, the convective transport appears to be opposed to the evaporation process, leading to a decrease of the evaporation flux. However, this effect is relatively limited, as the Péclet numbers turn out to be small. In addition, the gas isothermia assumption never appears to be valid here, even at room temperature, due to the large temperature gradient that develops in the gas phase. These two conclusions are explained by the fact that heat transfer from the gas to the liquid appears to be the step limiting the evaporation process. Despite the complexity of the developed model, and excluding extremely small droplets, the square of the drop radius decreases linearly over time (the R² law). The assumptions of the model are rigorously discussed and general criteria are established, independently of the liquid-gas couple considered. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Generalised filtering and stochastic DCM for fMRI.

    PubMed

    Li, Baojuan; Daunizeau, Jean; Stephan, Klaas E; Penny, Will; Hu, Dewen; Friston, Karl

    2011-09-15

    This paper is about the fitting or inversion of dynamic causal models (DCMs) of fMRI time series. It tries to establish the validity of stochastic DCMs that accommodate random fluctuations in hidden neuronal and physiological states. We compare and contrast deterministic and stochastic DCMs, which do and do not ignore random fluctuations or noise on hidden states. We then compare stochastic DCMs, which do and do not ignore conditional dependence between hidden states and model parameters (generalised filtering and dynamic expectation maximisation, respectively). We first characterise state-noise by comparing the log evidence of models with different a priori assumptions about its amplitude, form and smoothness. Face validity of the inversion scheme is then established using data simulated with and without state-noise to ensure that DCM can identify the parameters and model that generated the data. Finally, we address construct validity using real data from an fMRI study of internet addiction. Our analyses suggest the following. (i) The inversion of stochastic causal models is feasible, given typical fMRI data. (ii) State-noise has nontrivial amplitude and smoothness. (iii) Stochastic DCM has face validity, in the sense that Bayesian model comparison can distinguish between data that have been generated with high and low levels of physiological noise and model inversion provides veridical estimates of effective connectivity. (iv) Relaxing conditional independence assumptions can have greater construct validity, in terms of revealing group differences not disclosed by variational schemes. Finally, we note that the ability to model endogenous or random fluctuations on hidden neuronal (and physiological) states provides a new and possibly more plausible perspective on how regionally specific signals in fMRI are generated. Copyright © 2011. Published by Elsevier Inc.

  6. Large-Scale Experiments in Microbially Induced Calcite Precipitation (MICP): Reactive Transport Model Development and Prediction

    NASA Astrophysics Data System (ADS)

    Nassar, Mohamed K.; Gurung, Deviyani; Bastani, Mehrdad; Ginn, Timothy R.; Shafei, Babak; Gomez, Michael G.; Graddy, Charles M. R.; Nelson, Doug C.; DeJong, Jason T.

    2018-01-01

    Design of in situ microbially induced calcite precipitation (MICP) strategies relies on a predictive capability. To date much of the mathematical modeling of MICP has focused on small-scale experiments and/or one-dimensional flow in porous media, and successful parameterizations of models in these settings may not pertain to larger scales or to nonuniform, transient flows. Our objective in this article is to report on modeling to test our ability to predict behavior of MICP under controlled conditions in a meter-scale tank experiment with transient nonuniform transport in a natural soil, using independently determined parameters. Flow in the tank was controlled by three wells, via a complex cycle of injection/withdrawals followed by no-flow intervals. Different injection solution recipes were used in sequence for transport characterization, biostimulation, cementation, and groundwater rinse phases of the 17 day experiment. Reaction kinetics were calibrated using separate column experiments designed with a similar sequence of phases. This allowed for a parsimonious modeling approach with zero fitting parameters for the tank experiment. These experiments and data were simulated using PHT3-D, involving transient nonuniform flow, alternating low and high Damköhler reactive transport, and combined equilibrium and kinetically controlled biogeochemical reactions. The assumption that microbes mediating the reaction were exclusively sessile, and with constant activity, in conjunction with the foregoing treatment of the reaction network, provided for efficient and accurate modeling of the entire process leading to nonuniform calcite precipitation. This analysis suggests that under the biostimulation conditions applied here the assumption of steady state sessile biocatalyst suffices to describe the microbially mediated calcite precipitation.

  7. MIX: a computer program to evaluate interaction between chemicals

    Treesearch

    Jacqueline L. Robertson; Kimberly C. Smith

    1989-01-01

    A computer program, MIX, was designed to identify pairs of chemicals whose interaction results in a response that departs significantly from the model predicated on the assumption of independent, uncorrelated joint action. This report describes the MIX program, its statistical basis, and instructions for its use.
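
    For context (this is not the MIX program itself; the report describes its actual statistical basis), one common formulation of independent, uncorrelated joint action for two chemicals predicts a combined response probability of p12 = p1 + p2 - p1*p2, and an interaction is flagged when the observed joint response departs significantly from that expectation. A rough Python sketch with made-up numbers:

        from scipy.stats import binomtest

        p1, p2 = 0.30, 0.45               # observed single-chemical response proportions (illustrative)
        p_indep = p1 + p2 - p1 * p2       # expected joint response under independent action

        n_mix, k_mix = 200, 140           # subjects exposed to the mixture and number responding (made up)
        result = binomtest(k_mix, n_mix, p_indep)
        print(f"expected {p_indep:.3f}, observed {k_mix / n_mix:.3f}, p = {result.pvalue:.4f}")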

  8. Managing Clustered Data Using Hierarchical Linear Modeling

    ERIC Educational Resources Information Center

    Warne, Russell T.; Li, Yan; McKyer, E. Lisako J.; Condie, Rachel; Diep, Cassandra S.; Murano, Peter S.

    2012-01-01

    Researchers in nutrition research often use cluster or multistage sampling to gather participants for their studies. These sampling methods often produce violations of the assumption of data independence that most traditional statistics share. Hierarchical linear modeling is a statistical method that can overcome violations of the independence…

  9. The State-Conspiracy Model of Political Socialization.

    ERIC Educational Resources Information Center

    Homan, Roger

    1980-01-01

    This essay reviews the political socialization literature and the assumptions underlying the theory that political socialization, formal and informal, is sponsored by the state to reinforce its stability and social order. The author argues that family influence and enlightened self-interest are strong, independent factors in personal political…

  10. Network Framing of Pest Management Knowledge and Practice

    ERIC Educational Resources Information Center

    Moore, Keith M.

    2008-01-01

    Conventional technology transfer is based on the assumption that autonomous individuals independently make behavioral decisions. In contrast, Actor-Network Theory (ANT) suggests that people and technologies are interconnected in ways that reinforce and reproduce some types of knowledge and consequent behavioral practices, but not others. Research…

  11. Active Spread-Spectrum Steganalysis for Hidden Data Extraction

    DTIC Science & Technology

    2011-09-01

    steganalysis. In particular, we aim to blindly recover secret data hidden in image hosts via (multi-signature) direct-sequence SS embedding [18]-[25...access (CDMA) communication systems. Under the assumption that the embedded secret messages are independent identically distributed (i.i.d.) random

  12. General Nature of Multicollinearity in Multiple Regression Analysis.

    ERIC Educational Resources Information Center

    Liu, Richard

    1981-01-01

    Discusses multiple regression, a very popular statistical technique in the field of education. One of the basic assumptions in regression analysis requires that independent variables in the equation should not be highly correlated. The problem of multicollinearity and some of the solutions to it are discussed. (Author)

  13. Consistency tests for the extraction of the Boer-Mulders and Sivers functions

    NASA Astrophysics Data System (ADS)

    Christova, E.; Leader, E.; Stoilov, M.

    2018-03-01

    At present, the Boer-Mulders (BM) function for a given quark flavor is extracted from data on semi-inclusive deep inelastic scattering (SIDIS) using the simplifying assumption that it is proportional to the Sivers function for that flavor. In a recent paper, we suggested that the consistency of this assumption could be tested using information on so-called difference asymmetries, i.e. the difference between the asymmetries in the production of particles and their antiparticles. In this paper, using the SIDIS COMPASS deuteron data on the ⟨cos ϕ_h⟩, ⟨cos 2ϕ_h⟩ and Sivers difference asymmetries, we carry out two independent consistency tests of the assumption of proportionality, but here applied to the sum of the valence-quark contributions. We find that such an assumption is compatible with the data. We also show that the proportionality assumptions made in the existing parametrizations of the BM functions are not compatible with our analysis, which suggests that the published results for the Boer-Mulders functions for individual flavors are unreliable. The ⟨cos ϕ_h⟩ and ⟨cos 2ϕ_h⟩ asymmetries receive contributions also from the, in principle, calculable Cahn effect. We succeed in extracting the Cahn contributions from experiment (we believe for the first time) and compare with their calculated values, with interesting implications.

  14. Progressive approach to eruption at Campi Flegrei caldera in southern Italy

    NASA Astrophysics Data System (ADS)

    Kilburn, Christopher R. J.; de Natale, Giuseppe; Carlino, Stefano

    2017-05-01

    Unrest at large calderas rarely ends in eruption, encouraging vulnerable communities to perceive emergency warnings of volcanic activity as false alarms. A classic example is the Campi Flegrei caldera in southern Italy, where three episodes of major uplift since 1950 have raised its central district by about 3 m without an eruption. Individual episodes have conventionally been treated as independent events, so that only data from an ongoing episode are considered pertinent to evaluating eruptive potential. An implicit assumption is that the crust relaxes accumulated stress after each episode. Here we apply a new model of elastic-brittle failure to test the alternative view that successive episodes promote a long-term accumulation of stress in the crust. The results provide the first quantitative evidence that Campi Flegrei is evolving towards conditions more favourable to eruption and identify field tests for predictions on how the caldera will behave during future unrest.

  15. Treatment for preschool children with interpersonal sexual behavior problems: a pilot study.

    PubMed

    Silovsky, Jane F; Niec, Larissa; Bard, David; Hecht, Debra B

    2007-01-01

    This pilot study evaluated a 12-week group treatment program for preschool children with interpersonal sexual behavior problems (SBP; N = 85; 53 completed at least 8 sessions). Many children presented with co-occurring trauma symptoms and disruptive behaviors. In intent-to-treat analysis, a significant linear reduction in SBP due to number of treatment sessions attended was found, an effect that was independent of linear reductions affiliated with elapsed time. Under the assumption that treatment can have an incremental impact, more than one third of the variance was accounted for by treatment effects, with female and older children most favorably impacted. Caregivers reported increase in knowledge, satisfaction, and usefulness of treatment. In addition to replication, future research is needed to examine (a) effects of environment change and time on SBP, (b) stability of treatment effects, and (c) best practices to integrate evidence-based treatments for comorbid conditions.

  16. Progressive approach to eruption at Campi Flegrei caldera in southern Italy

    PubMed Central

    Kilburn, Christopher R.J.; De Natale, Giuseppe; Carlino, Stefano

    2017-01-01

    Unrest at large calderas rarely ends in eruption, encouraging vulnerable communities to perceive emergency warnings of volcanic activity as false alarms. A classic example is the Campi Flegrei caldera in southern Italy, where three episodes of major uplift since 1950 have raised its central district by about 3 m without an eruption. Individual episodes have conventionally been treated as independent events, so that only data from an ongoing episode are considered pertinent to evaluating eruptive potential. An implicit assumption is that the crust relaxes accumulated stress after each episode. Here we apply a new model of elastic-brittle failure to test the alternative view that successive episodes promote a long-term accumulation of stress in the crust. The results provide the first quantitative evidence that Campi Flegrei is evolving towards conditions more favourable to eruption and identify field tests for predictions on how the caldera will behave during future unrest. PMID:28504261

  17. Psychological needs and the facilitation of integrative processes.

    PubMed

    Ryan, R M

    1995-09-01

    The assumption that there are innate integrative or actualizing tendencies underlying personality and social development is reexamined. Rather than viewing such processes as either nonexistent or as automatic, I argue that they are dynamic and dependent upon social-contextual supports pertaining to basic human psychological needs. To develop this viewpoint, I conceptually link the notion of integrative tendencies to specific developmental processes, namely intrinsic motivation; internalization; and emotional integration. These processes are then shown to be facilitated by conditions that fulfill psychological needs for autonomy, competence, and relatedness, and forestalled within contexts that frustrate these needs. Interactions between psychological needs and contextual supports account, in part, for the domain and situational specificity of motivation, experience, and relative integration. The meaning of psychological needs (vs. wants) is directly considered, as are the relations between concepts of integration and autonomy and those of independence, individualism, efficacy, and cognitive models of "multiple selves."

  18. Progressive approach to eruption at Campi Flegrei caldera in southern Italy.

    PubMed

    Kilburn, Christopher R J; De Natale, Giuseppe; Carlino, Stefano

    2017-05-15

    Unrest at large calderas rarely ends in eruption, encouraging vulnerable communities to perceive emergency warnings of volcanic activity as false alarms. A classic example is the Campi Flegrei caldera in southern Italy, where three episodes of major uplift since 1950 have raised its central district by about 3 m without an eruption. Individual episodes have conventionally been treated as independent events, so that only data from an ongoing episode are considered pertinent to evaluating eruptive potential. An implicit assumption is that the crust relaxes accumulated stress after each episode. Here we apply a new model of elastic-brittle failure to test the alternative view that successive episodes promote a long-term accumulation of stress in the crust. The results provide the first quantitative evidence that Campi Flegrei is evolving towards conditions more favourable to eruption and identify field tests for predictions on how the caldera will behave during future unrest.

  19. Violations of Assumptions in School-Based Single-Case Data: Implications for the Selection and Interpretation of Effect Sizes.

    PubMed

    Solomon, Benjamin George

    2014-07-01

    A wide variety of effect sizes (ESs) has been used in the single-case design literature. Several researchers have "stress tested" these ESs by subjecting them to various degrees of problem data (e.g., autocorrelation, slope), resulting in the conditions by which different ESs can be considered valid. However, on the back end, few researchers have considered how prevalent and severe these problems are in extant data and as a result, how concerned applied researchers should be. The current study extracted and aggregated indicators of violations of normality and independence across four domains of educational study. Significant violations were found in total and across fields, including low levels of autocorrelation and moderate levels of absolute trend. These violations affect the selection and interpretation of ESs at the individual study level and for meta-analysis. Implications and recommendations are discussed. © The Author(s) 2013.

  20. The neural basis of attaining conscious awareness of sad mood.

    PubMed

    Smith, Ryan; Braden, B Blair; Chen, Kewei; Ponce, Francisco A; Lane, Richard D; Baxter, Leslie C

    2015-09-01

    The neural processes associated with becoming aware of sad mood are not fully understood. We examined the dynamic process of becoming aware of sad mood and recovery from sad mood. Sixteen healthy subjects underwent fMRI while participating in a sadness induction task designed to allow for variable mood induction times. Individualized regressors linearly modeled the time periods during the attainment of self-reported sad and baseline "neutral" mood states, and the validity of the linearity assumption was further tested using independent component analysis. During sadness induction the dorsomedial and ventrolateral prefrontal cortices, and anterior insula exhibited a linear increase in the blood oxygen level-dependent (BOLD) signal until subjects became aware of a sad mood and then a subsequent linear decrease as subjects transitioned from sadness back to the non-sadness baseline condition. These findings extend understanding of the neural basis of conscious emotional experience.

  1. Net community production in the bottom of first-year sea ice over the Arctic spring bloom

    NASA Astrophysics Data System (ADS)

    Campbell, K.; Mundy, C. J.; Gosselin, M.; Landy, J. C.; Delaforge, A.; Rysgaard, S.

    2017-09-01

    The balance of photosynthesis and respiration by organisms like algae and bacteria determines whether sea ice is net heterotrophic or autotrophic. In turn this clarifies the influence of microbes on atmosphere-ice-ocean gas fluxes and their contribution to the trophic system. In this study we define two phases of the spring bloom based on bottom ice net community production and algal growth. Phase I was characterized by limited algal accumulation and low productivity, which at times resulted in net heterotrophy. Greater productivity in Phase II drove rapid algal accumulation that consistently produced net autotrophic conditions. The different phases were associated with seasonal shifts in light availability and species dominance. Results from this study demonstrate the importance of community respiration on spring productivity, as respiration rates can maintain a heterotrophic state independent of algal growth. This challenges previous assumptions of a fully autotrophic sea ice community during the ice-covered spring.

  2. The impact of multiple endpoint dependency on Q and I² in meta-analysis.

    PubMed

    Thompson, Christopher Glen; Becker, Betsy Jane

    2014-09-01

    A common assumption in meta-analysis is that effect sizes are independent. When correlated effect sizes are analyzed using traditional univariate techniques, this assumption is violated. This research assesses the impact of dependence arising from treatment-control studies with multiple endpoints on homogeneity measures Q and I² in scenarios using the unbiased standardized-mean-difference effect size. Univariate and multivariate meta-analysis methods are examined. Conditions included different overall outcome effects, study sample sizes, numbers of studies, between-outcomes correlations, dependency structures, and ways of computing the correlation. The univariate approach used typical fixed-effects analyses whereas the multivariate approach used generalized least-squares (GLS) estimates of a fixed-effects model, weighted by the inverse variance-covariance matrix. Increased dependence among effect sizes led to increased Type I error rates from univariate models. When effect sizes were strongly dependent, error rates were drastically higher than nominal levels regardless of study sample size and number of studies. In contrast, using GLS estimation to account for multiple-endpoint dependency maintained error rates within nominal levels. Conversely, mean I² values were not greatly affected by increased amounts of dependency. Last, we point out that the between-outcomes correlation should be estimated as a pooled within-groups correlation rather than using a full-sample estimator that does not consider treatment/control group membership. Copyright © 2014 John Wiley & Sons, Ltd.
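
    A small numerical sketch of the multivariate step (ours, with invented effect sizes, variances, and correlation) shows how the GLS fixed-effects estimate uses the full variance-covariance matrix of the effect sizes instead of treating them as independent:

        import numpy as np

        # Three studies, each contributing two correlated endpoint effect sizes (invented values).
        y = np.array([0.30, 0.42, 0.10, 0.25, 0.55, 0.40])   # standardized mean differences
        v = np.array([0.04, 0.05, 0.03, 0.03, 0.06, 0.05])   # sampling variances
        r = 0.6                                              # assumed between-outcomes correlation

        V = np.diag(v)
        for i in range(0, len(y), 2):                        # within-study covariance for each pair
            V[i, i + 1] = V[i + 1, i] = r * np.sqrt(v[i] * v[i + 1])

        X = np.ones((len(y), 1))                             # intercept-only (common-effect) model
        Vinv = np.linalg.inv(V)
        XtVinvX = X.T @ Vinv @ X
        gls_mean = np.linalg.solve(XtVinvX, X.T @ Vinv @ y)[0]
        gls_se = np.sqrt(np.linalg.inv(XtVinvX))[0, 0]

        naive_mean = np.sum(y / v) / np.sum(1 / v)           # univariate inverse-variance estimate
        print(gls_mean, gls_se, naive_mean)

    Comparing the two estimates and their standard errors illustrates why treating correlated effect sizes as independent distorts inference.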

  3. The structured ancestral selection graph and the many-demes limit.

    PubMed

    Slade, Paul F; Wakeley, John

    2005-02-01

    We show that the unstructured ancestral selection graph applies to part of the history of a sample from a population structured by restricted migration among subpopulations, or demes. The result holds in the limit as the number of demes tends to infinity with proportionately weak selection, and we have also made the assumptions of island-type migration and that demes are equivalent in size. After an instantaneous sample-size adjustment, this structured ancestral selection graph converges to an unstructured ancestral selection graph with a mutation parameter that depends inversely on the migration rate. In contrast, the selection parameter for the population is independent of the migration rate and is identical to the selection parameter in an unstructured population. We show analytically that estimators of the migration rate, based on pairwise sequence differences, derived under the assumption of neutrality should perform equally well in the presence of weak selection. We also modify an algorithm for simulating genealogies conditional on the frequencies of two selected alleles in a sample. This permits efficient simulation of stronger selection than was previously possible. Using this new algorithm, we simulate gene genealogies under the many-demes ancestral selection graph and identify some situations in which migration has a strong effect on the time to the most recent common ancestor of the sample. We find that a similar effect also increases the sensitivity of the genealogy to selection.

  4. Moral reasoning and personality traits.

    PubMed

    Mudrack, Peter E

    2006-06-01

    Moral reasoning should not be clearly associated with measures of personality traits. Although this assumption pervades the moral reasoning literature, it may not always be true. This paper provides evidence that moral reasoning, as assessed with P scores of the Defining Issues Test, is indeed positively associated with five traits from the California Psychological Inventory: Achievement via Independence, Intellectual Efficiency, Tolerance, Responsibility, and Capacity for Status. Such relationships make conceptual sense, shed light on the meaning and implications of moral reasoning, call into question prevailing assumptions in the literature, and may encourage investigators to broaden the types of research questions asked in the context of moral reasoning.

  5. Response moderation models for conditional dependence between response time and response accuracy.

    PubMed

    Bolsinova, Maria; Tijmstra, Jesper; Molenaar, Dylan

    2017-05-01

    It is becoming more feasible and common to register response times in the application of psychometric tests. Researchers thus have the opportunity to jointly model response accuracy and response time, which provides users with more relevant information. The most common choice is to use the hierarchical model (van der Linden, 2007, Psychometrika, 72, 287), which assumes conditional independence between response time and accuracy, given a person's speed and ability. However, this assumption may be violated in practice if, for example, persons vary their speed or differ in their response strategies, leading to conditional dependence between response time and accuracy and confounding measurement. We propose six nested hierarchical models for response time and accuracy that allow for conditional dependence, and discuss their relationship to existing models. Unlike existing approaches, the proposed hierarchical models allow for various forms of conditional dependence in the model and allow the effect of continuous residual response time on response accuracy to be item-specific, person-specific, or both. Estimation procedures for the models are proposed, as well as two information criteria that can be used for model selection. Parameter recovery and usefulness of the information criteria are investigated using simulation, indicating that the procedure works well and is likely to select the appropriate model. Two empirical applications are discussed to illustrate the different types of conditional dependence that may occur in practice and how these can be captured using the proposed hierarchical models. © 2016 The British Psychological Society.
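
    For orientation (the notation here is ours), the conditional independence assumption of the hierarchical model referred to above can be written, for response time t_i and accuracy x_i on item i given a person's speed \tau and ability \theta, as

        f(t_i, x_i \mid \tau, \theta) \;=\; f(t_i \mid \tau)\, f(x_i \mid \theta),

    and the models proposed in the paper relax this factorization by letting the residual response time enter the model for accuracy, with item-specific and/or person-specific effects.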

  6. Modeling thermal infrared (2-14 micrometer) reflectance spectra of frost and snow

    NASA Technical Reports Server (NTRS)

    Wald, Andrew E.

    1994-01-01

    Existing theories of radiative transfer in close-packed media assume that each particle scatters independently of its neighbors. For opaque particles, such as are common in the thermal infrared, this assumption is not valid, and these radiative transfer theories will not be accurate. A new method is proposed, called 'diffraction subtraction', which modifies the scattering cross section of close-packed large, opaque spheres to account for the effect of close packing on the diffraction cross section of a scattering particle. This method predicts the thermal infrared reflectance of coarse (greater than 50 micrometers radius), disaggregated granular snow. However, such coarse snow is typically old and metamorphosed, with adjacent grains welded together. The reflectance of such a welded block can be described as partly Fresnel in nature and cannot be predicted using Mie inputs to radiative transfer theory. Owing to the high absorption coefficient of ice in the thermal infrared, a rough surface reflectance model can be used to calculate reflectance from such a block. For very small (less than 50 micrometers), disaggregated particles, it is incorrect in principle to treat diffraction independently of reflection and refraction, and the theory fails. However, for particles larger than 50 micrometers, independent scattering is a valid assumption, and standard radiative transfer theory works.

  7. Robust Tests for Additive Gene-Environment Interaction in Case-Control Studies Using Gene-Environment Independence.

    PubMed

    Liu, Gang; Mukherjee, Bhramar; Lee, Seunggeun; Lee, Alice W; Wu, Anna H; Bandera, Elisa V; Jensen, Allan; Rossing, Mary Anne; Moysich, Kirsten B; Chang-Claude, Jenny; Doherty, Jennifer A; Gentry-Maharaj, Aleksandra; Kiemeney, Lambertus; Gayther, Simon A; Modugno, Francesmary; Massuger, Leon; Goode, Ellen L; Fridley, Brooke L; Terry, Kathryn L; Cramer, Daniel W; Ramus, Susan J; Anton-Culver, Hoda; Ziogas, Argyrios; Tyrer, Jonathan P; Schildkraut, Joellen M; Kjaer, Susanne K; Webb, Penelope M; Ness, Roberta B; Menon, Usha; Berchuck, Andrew; Pharoah, Paul D; Risch, Harvey; Pearce, Celeste Leigh

    2018-02-01

    There have been recent proposals advocating the use of additive gene-environment interaction instead of the widely used multiplicative scale, as a more relevant public health measure. Using gene-environment independence enhances statistical power for testing multiplicative interaction in case-control studies. However, under departure from this assumption, substantial bias in the estimates and inflated type I error in the corresponding tests can occur. In this paper, we extend the empirical Bayes (EB) approach previously developed for multiplicative interaction, which trades off between bias and efficiency in a data-adaptive way, to the additive scale. An EB estimator of the relative excess risk due to interaction is derived, and the corresponding Wald test is proposed with a general regression setting under a retrospective likelihood framework. We study the impact of gene-environment association on the resultant test with case-control data. Our simulation studies suggest that the EB approach uses the gene-environment independence assumption in a data-adaptive way and provides a gain in power compared with the standard logistic regression analysis and better control of type I error when compared with the analysis assuming gene-environment independence. We illustrate the methods with data from the Ovarian Cancer Association Consortium. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects

    ERIC Educational Resources Information Center

    Ho, Andrew D.; Yu, Carol C.

    2015-01-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological…

  9. "Touch Me, Like Me": Testing an Encounter Group Assumption

    ERIC Educational Resources Information Center

    Boderman, Alvin; And Others

    1972-01-01

    An experiment to test an encounter group assumption that touching increases interpersonal attraction was conducted. College women were randomly assigned to a touch or no-touch condition. A comparison of total evaluation scores verified the hypothesis: subjects who touched the accomplice perceived her as a more attractive person than those who did…

  10. Simulation-Extrapolation for Estimating Means and Causal Effects with Mismeasured Covariates

    ERIC Educational Resources Information Center

    Lockwood, J. R.; McCaffrey, Daniel F.

    2015-01-01

    Regression, weighting and related approaches to estimating a population mean from a sample with nonrandom missing data often rely on the assumption that conditional on covariates, observed samples can be treated as random. Standard methods using this assumption generally will fail to yield consistent estimators when covariates are measured with…

  11. The Discrepancy-Induced Source Comprehension (D-ISC) Model: Basic Assumptions and Preliminary Evidence

    ERIC Educational Resources Information Center

    Braasch, Jason L. G.; Bråten, Ivar

    2017-01-01

    Despite the importance of source attention and evaluation for learning from texts, little is known about the particular conditions that encourage sourcing during reading. In this article, basic assumptions of the discrepancy-induced source comprehension (D-ISC) model are presented, which describes the moment-by-moment cognitive processes that…

  12. Links between causal effects and causal association for surrogacy evaluation in a gaussian setting.

    PubMed

    Conlon, Anna; Taylor, Jeremy; Li, Yun; Diaz-Ordaz, Karla; Elliott, Michael

    2017-11-30

    Two paradigms for the evaluation of surrogate markers in randomized clinical trials have been proposed: the causal effects paradigm and the causal association paradigm. Each of these paradigms rely on assumptions that must be made to proceed with estimation and to validate a candidate surrogate marker (S) for the true outcome of interest (T). We consider the setting in which S and T are Gaussian and are generated from structural models that include an unobserved confounder. Under the assumed structural models, we relate the quantities used to evaluate surrogacy within both the causal effects and causal association frameworks. We review some of the common assumptions made to aid in estimating these quantities and show that assumptions made within one framework can imply strong assumptions within the alternative framework. We demonstrate that there is a similarity, but not exact correspondence between the quantities used to evaluate surrogacy within each framework, and show that the conditions for identifiability of the surrogacy parameters are different from the conditions, which lead to a correspondence of these quantities. Copyright © 2017 John Wiley & Sons, Ltd.

  13. Models for the propensity score that contemplate the positivity assumption and their application to missing data and causality.

    PubMed

    Molina, J; Sued, M; Valdora, M

    2018-06-05

    Generalized linear models are often assumed to fit propensity scores, which are used to compute inverse probability weighted (IPW) estimators. To derive the asymptotic properties of IPW estimators, the propensity score is supposed to be bounded away from zero. This condition is known in the literature as strict positivity (or positivity assumption), and, in practice, when it does not hold, IPW estimators are very unstable and have a large variability. Although strict positivity is often assumed, it is not upheld when some of the covariates are unbounded. In real data sets, a data-generating process that violates the positivity assumption may lead to wrong inference because of the inaccuracy in the estimations. In this work, we attempt to conciliate between the strict positivity condition and the theory of generalized linear models by incorporating an extra parameter, which results in an explicit lower bound for the propensity score. An additional parameter is added to fulfil the overlap assumption in the causal framework. Copyright © 2018 John Wiley & Sons, Ltd.
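
    A minimal sketch (not the authors' model; the bound, coefficients, and data below are invented for illustration) of a propensity score with an explicit lower bound, which keeps the inverse probability weights from exploding even when a covariate is unbounded:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 1000
        x = rng.standard_normal(n)                              # an unbounded covariate

        def bounded_ps(x, beta0, beta1, delta):
            """Propensity score constrained to [delta, 1): delta + (1 - delta) * expit(linear predictor)."""
            return delta + (1 - delta) / (1 + np.exp(-(beta0 + beta1 * x)))

        ps = bounded_ps(x, beta0=-0.5, beta1=1.0, delta=0.05)
        a = (rng.random(n) < ps).astype(float)                  # treatment assignment
        y = 2.0 + 1.5 * a + 0.5 * x + rng.standard_normal(n)    # outcome

        ipw_mean_treated = np.sum(a * y / ps) / np.sum(a / ps)  # Hajek-type IPW estimate
        print(ipw_mean_treated, (1 / ps).max())                 # weights cannot exceed 1 / delta = 20

    Estimating the bound together with the regression coefficients, rather than fixing it, is the extra-parameter idea described in the abstract.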

  14. The incompressibility assumption in computational simulations of nasal airflow.

    PubMed

    Cal, Ismael R; Cercos-Pita, Jose Luis; Duque, Daniel

    2017-06-01

    Most of the computational works on nasal airflow up to date have assumed incompressibility, given the low Mach number of these flows. However, for high temperature gradients, the incompressibility assumption could lead to a loss of accuracy, due to the temperature dependence of air density and viscosity. In this article we aim to shed some light on the influence of this assumption in a model of calm breathing in an Asian nasal cavity, by solving the fluid flow equations in compressible and incompressible formulation for different ambient air temperatures using the OpenFOAM package. At low flow rates and warm climatological conditions, similar results were obtained from both approaches, showing that density variations need not be taken into account to obtain a good prediction of all flow features, at least for usual breathing conditions. This agrees with most of the simulations previously reported, at least as far as the incompressibility assumption is concerned. However, parameters like nasal resistance and wall shear stress distribution differ for air temperatures below [Formula: see text]C approximately. Therefore, density variations should be considered for simulations at such low temperatures.

  15. Avoided economic impacts of energy demand changes by 1.5 and 2 °C climate stabilization

    NASA Astrophysics Data System (ADS)

    Park, Chan; Fujimori, Shinichiro; Hasegawa, Tomoko; Takakura, Jun’ya; Takahashi, Kiyoshi; Hijioka, Yasuaki

    2018-04-01

    Energy demand associated with space heating and cooling is expected to be affected by climate change. There are several global projections of space heating and cooling use that take into consideration climate change, but the comprehensive uncertainty arising from socioeconomic and climate conditions, including a 1.5 °C global mean temperature change, has never been assessed. This paper shows the economic impact of changes in energy demand for space heating and cooling under multiple socioeconomic and climatic conditions. We use three shared socioeconomic pathways as socioeconomic conditions. For climate conditions, we use two representative concentration pathways that correspond to 4.0 °C and 2.0 °C scenarios, and a 1.5 °C scenario derived from the 2.0 °C scenario under additional assumptions, in conjunction with five general circulation models. We find that the economic impacts of climate change are largely affected by socioeconomic assumptions, and global GDP change rates range from +0.21% to ‑2.01% in 2100 under the 4.0 °C scenario, depending on the socioeconomic condition. Sensitivity analysis that differentiates the thresholds of heating and cooling degree days clarifies that the threshold is a strong factor that generates these differences. Meanwhile, the impact under the 1.5 °C scenario is small regardless of socioeconomic assumptions (‑0.02% to ‑0.06%). The economic loss caused by differences in socioeconomic assumption under the 1.5 °C scenario is much smaller than that under the 2 °C scenario, which implies that stringent climate mitigation can work as a risk hedge to socioeconomic development diversity.

  16. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable.

    PubMed

    Austin, Peter C; Steyerberg, Ewout W

    2012-06-20

    When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
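
    A brief numerical check of the equal-variance binormal relationship described above, in which the c-statistic equals the standard normal CDF evaluated at the standardized difference divided by the square root of two; the parameter values in the sketch are illustrative:

    ```python
    # Compare the empirical c-statistic with the closed-form value under
    # equal-variance binormality: c = Phi(d / sqrt(2)), where d is the
    # standardized difference of the covariate between cases and non-cases.
    import numpy as np
    from scipy.stats import norm
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n, d, sigma = 100_000, 1.0, 1.0                  # standardized difference d = 1
    y = rng.binomial(1, 0.3, size=n)                 # condition status
    x = rng.normal(loc=d * sigma * y, scale=sigma)   # binormal covariate

    print("empirical c:", roc_auc_score(y, x))
    print("predicted c:", norm.cdf(d / np.sqrt(2)))  # about 0.76 for d = 1
    ```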

  17. Polybrominated Diphenyl Ether (DE-71)Interferes with Thyroid Hormone Action Independent Of Effects On Circulating Levels of Thyroid Hormone in Male Rats

    EPA Science Inventory

    Polybrominated diphenyl ethers (PBDEs) are routinely found in human tissues including cord blood and breast milk. PBDEs may interfere with thyroid hormone (TH) during development, which could produce neurobehavioral deficits. An assumption in experimental and epidemiological stud...

  18. Influences of Tone on Vowel Articulation in Mandarin Chinese

    ERIC Educational Resources Information Center

    Shaw, Jason A.; Chen, Wei-rong; Proctor, Michael I.; Derrick, Donald

    2016-01-01

    Purpose: Models of speech production often abstract away from shared physiology in pitch control and lingual articulation, positing independent control of tone and vowel units. We assess the validity of this assumption in Mandarin Chinese by evaluating the stability of lingual articulation for vowels across variation in tone. Method:…

  19. Radarclinometry

    USGS Publications Warehouse

    Wildey, R.L.

    1986-01-01

    A mathematical theory and a corresponding algorithm have been developed to derive topographic maps from radar images as photometric arrays. Thus, as radargrammetry is to photogrammetry, so radarclinometry is to photoclinometry. Photoclinometry is endowed with a fundamental indeterminacy principle even for terrain homogeneous in normal albedo. This arises from the fact that the geometric locus of orientations of the local surface normal that is consistent with a given reflected specific-intensity of radiation is more complicated than a fixed line in space. For a radar image, the locus is a cone whose half-angle is the incidence angle and whose axis contains the radar. The indeterminacy is removed throughout a region if one possesses a control profile as a boundary-condition. In the absence of such ground-truth, a point-boundary-condition will suffice only in conjunction with a heuristic assumption, such as that the strike-line runs perpendicularly to the line-of-sight. In the present study I have implemented a more reasonable assumption which I call 'the hypothesis of local cylindricity'. Firstly, a general theory is derived, based solely on the implicit mathematical determinacy. This theory would be directly indicative of procedure if images were completely devoid of systematic error and noise. The theory produces topography by an area integration of radar brightness, starting from a control profile, without need of additional idealistic assumptions. But we have also theorized separately a method of forming this control profile, which method does require an additional assumption about the terrain. That assumption is that the curvature properties of the terrain are locally those of a cylinder of inferable orientation, within a second-order mathematical neighborhood of every point of the terrain. While local strike-and-dip completely determine the radar brightness itself, the terrain curvature determines the brightness-gradient in the radar image. Therefore, the control profile is formed as a line integration of brightness and its local gradient starting from a single point of the terrain where the local orientation of the strike-line is estimated by eye. Secondly, and independently, the calibration curve for pixel brightness versus incidence-angle is produced. I assume that an applicable curve can be found from the literature or elsewhere so that our problem is condensed to that of properly scaling the brightness-axis of the calibration curve. A first estimate is found by equating the average image brightness to the point on the brightness axis corresponding to the complement of the effective radar depression-angle, an angle assumed given. A statistical analysis is then used to correct, on the one hand, for the fact that the average brightness is not the brightness that corresponds to the average incidence angle, as a result of the non-linearity of the calibration curve; and on the other hand, we correct for the fact that the average incidence angle is not the same for a rough surface as it is for a flat surface (and therefore not the complement of the depression angle). Lastly, the practical modifications that were interactively evolved to produce an operational algorithm for treating real data are developed. They are by no means considered optimized at present. Such a possibility is thus far precluded by excessive computer-time. Most noteworthy in this respect is the abandonment of area integration away from a control profile. 
    Instead, the topography is produced as a set of independent line integrations down each of the parallel range lines of the image, using the theory for control-profile formation. An adaptive technique, which now appears excessive, was also employed so that SEASAT images of sand dunes could be processed. In this, the radiometric calibration was iterated to force the endpoints of each profile to zero elevation. A secondary algorithm then employed line-averages of appropriate quantities to adjust the mean t

  20. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance

    USGS Publications Warehouse

    Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.

    2017-01-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.

  1. Accounting for Non-Gaussian Sources of Spatial Correlation in Parametric Functional Magnetic Resonance Imaging Paradigms II: A Method to Obtain First-Level Analysis Residuals with Uniform and Gaussian Spatial Autocorrelation Function and Independent and Identically Distributed Time-Series.

    PubMed

    Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Lacey, Simon; Sathian, K

    2018-02-01

    In a recent study Eklund et al. have shown that cluster-wise family-wise error (FWE) rate-corrected inferences made in parametric statistical method-based functional magnetic resonance imaging (fMRI) studies over the past couple of decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001, principally because the spatial autocorrelation functions (sACFs) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogeneously Gaussian sACF. Here we propose a method based on the assumption that heterogeneity and non-Gaussianity of the sACF of the first-level GLM analysis residuals, as well as temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes as well as motion and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA), and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, nearly Gaussian sACF and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences based on the assumption of Gaussian spatial noise. We further show that application of this method renders the voxel time-series of first-level GLM residuals independent and identically distributed across time (which is a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored noise models. Furthermore, the detection power of individual subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM analysis inferences.
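
    A rough sketch, under simplifying assumptions, of the kind of PCA-based clean-up described: the leading principal components of the first-level residuals (time by voxels) are estimated and subtracted. The number of removed components and the synthetic data are illustrative and do not reproduce the paper's selection criterion:

    ```python
    # Remove the leading principal components from a matrix of first-level
    # GLM residuals (n_timepoints x n_voxels). Illustrative only.
    import numpy as np

    def remove_leading_pcs(residuals, n_components=5):
        """residuals: array of shape (n_timepoints, n_voxels)."""
        r = residuals - residuals.mean(axis=0, keepdims=True)
        # SVD of the centered residuals; components are rows of vt
        u, s, vt = np.linalg.svd(r, full_matrices=False)
        # Reconstruct and subtract the leading components
        leading = (u[:, :n_components] * s[:n_components]) @ vt[:n_components, :]
        return r - leading

    rng = np.random.default_rng(0)
    shared = np.outer(np.sin(np.linspace(0, 20, 200)), rng.normal(size=1000))
    resid = rng.normal(size=(200, 1000)) + shared     # residuals with a shared component
    cleaned = remove_leading_pcs(resid, n_components=1)
    print(cleaned.shape)
    ```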

  2. Capture-Recapture Estimators in Epidemiology with Applications to Pertussis and Pneumococcal Invasive Disease Surveillance

    PubMed Central

    Braeye, Toon; Verheagen, Jan; Mignon, Annick; Flipse, Wim; Pierard, Denis; Huygen, Kris; Schirvel, Carole; Hens, Niel

    2016-01-01

    Introduction: Surveillance networks are often neither exhaustive nor completely complementary. In such situations, capture-recapture methods can be used for incidence estimation. The choice of estimator and their robustness with respect to the homogeneity and independence assumptions are however not well documented. Methods: We investigated the performance of five different capture-recapture estimators in a simulation study. Eight different scenarios were used to detect and combine case-information. The scenarios increasingly violated assumptions of independence of samples and homogeneity of detection probabilities. Belgian datasets on invasive pneumococcal disease (IPD) and pertussis provided motivating examples. Results: No estimator was unbiased in all scenarios. Performance of the parametric estimators depended on how much of the dependency and heterogeneity was correctly modelled. Model building was limited by parameter estimability, availability of additional information (e.g. covariates) and the possibilities inherent to the method. In the most complex scenario, methods that allowed for detection probabilities conditional on previous detections estimated the total population size within a 20–30% error range. Parametric estimators remained stable if individual data sources lost up to 50% of their data. The investigated non-parametric methods were more susceptible to data loss and their performance was linked to the dependence between samples: overestimating in scenarios with little dependence, underestimating in others. Issues with parameter estimability made it impossible to model all suggested relations between samples for the IPD and pertussis datasets. For IPD, the estimates of the Belgian incidence for cases aged 50 years and older ranged from 44 to 58 per 100,000 in 2010. The estimates for pertussis (all ages, Belgium, 2014) ranged from 24.2 to 30.8 per 100,000. Conclusion: We encourage the use of capture-recapture methods, but epidemiologists should preferably include datasets for which the underlying dependency structure is not too complex, a priori investigate this structure, compensate for it within the model and interpret the results with the remaining unmodelled heterogeneity in mind. PMID:27529167
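
    For orientation, the simplest two-source capture-recapture estimator (the Chapman-corrected Lincoln-Petersen estimator) makes exactly the independence and homogeneity assumptions whose violation is studied here. A minimal sketch with made-up counts, not one of the five estimators evaluated in the paper:

    ```python
    # Chapman-corrected Lincoln-Petersen estimator for two sources, assuming
    # independent sources and homogeneous detection probabilities.
    def chapman_estimate(n1, n2, m):
        """n1, n2: cases detected by source 1 and source 2; m: detected by both."""
        n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
        var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
        return n_hat, var

    n_hat, var = chapman_estimate(n1=320, n2=410, m=150)
    print(round(n_hat), round(var ** 0.5))   # population estimate and its standard error
    ```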

  3. Unintended Consequences or Testing the Integrity of Teachers and Students.

    ERIC Educational Resources Information Center

    Kimmel, Ernest W.

    Large-scale testing programs are generally based on the assumptions that the test-takers experience standard conditions for taking the test and that everyone will do his or her own work without having prior knowledge of specific questions. These assumptions are not necessarily true. The ways students and educators use to get around standardizing…

  4. Reliability of Children's Testimony in the Era of Developmental Reversals

    ERIC Educational Resources Information Center

    Brainerd, C. J.; Reyna, V. F.

    2012-01-01

    A hoary assumption of the law is that children are more prone to false-memory reports than adults, and hence, their testimony is less reliable than adults'. Since the 1980s, that assumption has been buttressed by numerous studies that detected declines in false memory between early childhood and young adulthood under controlled conditions.…

  5. Maximizing Research and Development Resources: Identifying and Testing "Load-Bearing Conditions" for Educational Technology Innovations

    ERIC Educational Resources Information Center

    Iriti, Jennifer; Bickel, William; Schunn, Christian; Stein, Mary Kay

    2016-01-01

    Education innovations often have a complicated set of assumptions about the contexts in which they are implemented, which may not be explicit. Education technology innovations in particular may have additional technical and cultural assumptions. As a result, education technology research and development efforts as well as scaling efforts can be…

  6. Using Random Forest Models to Predict Organizational Violence

    NASA Technical Reports Server (NTRS)

    Levine, Burton; Bobashev, Georgly

    2012-01-01

    We present a methodology to assess the proclivity of an organization to commit violence against nongovernment personnel. We fitted a Random Forest model using the Minority at Risk Organizational Behavior (MAROS) dataset. The MAROS data are longitudinal, so individual observations are not independent. We propose a modification to the standard Random Forest methodology to account for the violation of the independence assumption. We present the results of the model fit, an example of predicting violence for an organization, and a summary of the forest in a "meta-tree."
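
    The record's modification of the Random Forest itself is not reproduced here; as a hedged illustration of why the independence violation matters, the sketch below contrasts ordinary K-fold cross-validation with group-aware cross-validation when observations are clustered within organizations (all data are synthetic):

    ```python
    # Ignoring within-organization dependence inflates apparent accuracy.
    # GroupKFold keeps all observations from one organization in the same fold.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import KFold, GroupKFold, cross_val_score

    rng = np.random.default_rng(0)
    n_orgs, per_org = 60, 10
    groups = np.repeat(np.arange(n_orgs), per_org)
    org_effect = rng.normal(size=n_orgs)
    X = rng.normal(size=(n_orgs * per_org, 5)) + org_effect[groups, None]
    y = (org_effect[groups] + 0.5 * rng.normal(size=n_orgs * per_org)) > 0

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("plain KFold:", cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0)).mean())
    print("GroupKFold :", cross_val_score(clf, X, y, cv=GroupKFold(5), groups=groups).mean())
    ```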

  7. Upper and lower bounds for semi-Markov reliability models of reconfigurable systems

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.

  8. Impact buckling of thin bars in the elastic range for any end condition

    NASA Technical Reports Server (NTRS)

    Taub, Josef

    1934-01-01

    Following a qualitative discussion of the complicated process involved when a short-period longitudinal force is applied to an originally not quite straight bar, the actual process is replaced by an idealized process for the purpose of analytical treatment. The simplifications are: the assumption of an infinitely high rate of propagation of the elastic longitudinal waves in the bar, limitation to slender bars, disregard of material damping and of rotatory inertia, the assumption of consistently small elastic deformations, the assumption of cross-sectional dimensions constant along the bar axis, the assumption of a shock load constant in time, and the assumption of eccentricities in one plane. Then follow the mathematical principles for solving the differential equation of the simplified problem, particularly the expansion of arbitrary functions having continuous first and second derivatives and piecewise continuous third and fourth derivatives into a convergent series in the natural functions (eigenfunctions) of the homogeneous differential equation.

  9. Ozone chemical equilibrium in the extended mesopause under the nighttime conditions

    NASA Astrophysics Data System (ADS)

    Belikovich, M. V.; Kulikov, M. Yu.; Grygalashvyly, M.; Sonnemann, G. R.; Ermakova, T. S.; Nechaev, A. A.; Feigin, A. M.

    2018-01-01

    For the retrieval of atomic oxygen and atomic hydrogen from ozone observations in the extended mesopause region (∼70-100 km) under nighttime conditions, an assumption of photochemical equilibrium of ozone is often used. In this work, the assumption of chemical equilibrium of ozone near the mesopause region during nighttime is examined. We analyze annual calculations of a 3D chemistry-transport model (CTM) and determine the ratio between the correct (modeled) distributions of the O3 density and its equilibrium values as a function of altitude, latitude, and season. The results show that retrieving atomic oxygen and atomic hydrogen distributions under the assumption of ozone chemical equilibrium may lead to large errors below ∼81-87 km. We give a simple and clear semi-empirical criterion for locating, in practice, the lower boundary of the region of ozone chemical equilibrium near the mesopause.

  10. The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model

    PubMed Central

    Fritz, Matthew S.; Kenny, David A.; MacKinnon, David P.

    2016-01-01

    Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator to outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. In order to explore the combined effect of measurement error and omitted confounders in the same model, the impact of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect. PMID:27739903

  11. Using Classroom Data to Teach Students about Data Cleaning and Testing Assumptions

    PubMed Central

    Cummiskey, Kevin; Kuiper, Shonda; Sturdivant, Rodney

    2012-01-01

    This paper discusses the influence that decisions about data cleaning and violations of statistical assumptions can have on drawing valid conclusions to research studies. The datasets provided in this paper were collected as part of a National Science Foundation grant to design online games and associated labs for use in undergraduate and graduate statistics courses that can effectively illustrate issues not always addressed in traditional instruction. Students play the role of a researcher by selecting from a wide variety of independent variables to explain why some students complete games faster than others. Typical project data sets are “messy,” with many outliers (usually from some students taking much longer than others) and distributions that do not appear normal. Classroom testing of the games over several semesters has produced evidence of their efficacy in statistics education. The projects tend to be engaging for students and they make the impact of data cleaning and violations of model assumptions more relevant. We discuss the use of one of the games and associated guided lab in introducing students to issues prevalent in real data and the challenges involved in data cleaning and dangers when model assumptions are violated. PMID:23055992

  12. Fault and event tree analyses for process systems risk analysis: uncertainty handling formulations.

    PubMed

    Ferdous, Refaul; Khan, Faisal; Sadiq, Rehan; Amyotte, Paul; Veitch, Brian

    2011-01-01

    Quantitative risk analysis (QRA) is a systematic approach for evaluating likelihood, consequences, and risk of adverse events. QRA based on event tree analysis (ETA) and fault tree analysis (FTA) employs two basic assumptions. The first assumption relates to the likelihood values of input events, and the second concerns interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; however, to deal with uncertainties, the probability distributions of input event likelihoods are assumed. These probability distributions are often hard to come by and, even if available, they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework of a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods. A method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed. © 2010 Society for Risk Analysis.
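
    As a simple illustration of why the independence assumption between basic events matters (this is not the paper's dependency-coefficient or fuzzy/evidence-theory formulation), the sketch below contrasts gate probabilities computed under independence with distribution-free Fréchet bounds that hold for any dependence between two events:

    ```python
    # AND/OR gate probabilities under independence versus Frechet bounds that
    # make no assumption about the dependence between the two basic events.
    def and_gate(p1, p2):
        independent = p1 * p2
        bounds = (max(0.0, p1 + p2 - 1.0), min(p1, p2))
        return independent, bounds

    def or_gate(p1, p2):
        independent = p1 + p2 - p1 * p2
        bounds = (max(p1, p2), min(1.0, p1 + p2))
        return independent, bounds

    print(and_gate(0.1, 0.2))   # (0.02, (0.0, 0.1))
    print(or_gate(0.1, 0.2))    # (0.28, (0.2, 0.3))
    ```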

  13. Interpreting findings from Mendelian randomization using the MR-Egger method.

    PubMed

    Burgess, Stephen; Thompson, Simon G

    2017-05-01

    Mendelian randomization-Egger (MR-Egger) is an analysis method for Mendelian randomization using summarized genetic data. MR-Egger consists of three parts: (1) a test for directional pleiotropy, (2) a test for a causal effect, and (3) an estimate of the causal effect. While conventional analysis methods for Mendelian randomization assume that all genetic variants satisfy the instrumental variable assumptions, the MR-Egger method is able to assess whether genetic variants have pleiotropic effects on the outcome that differ on average from zero (directional pleiotropy), as well as to provide a consistent estimate of the causal effect, under a weaker assumption, the InSIDE (INstrument Strength Independent of Direct Effect) assumption. In this paper, we provide a critical assessment of the MR-Egger method with regard to its implementation and interpretation. While the MR-Egger method is a worthwhile sensitivity analysis for detecting violations of the instrumental variable assumptions, there are several reasons why causal estimates from the MR-Egger method may be biased and have inflated Type 1 error rates in practice, including violations of the InSIDE assumption and the influence of outlying variants. The issues raised in this paper have potentially serious consequences for causal inferences from the MR-Egger approach. We give examples of scenarios in which the estimates from conventional Mendelian randomization methods and MR-Egger differ, and discuss how to interpret findings in such cases.
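
    A minimal sketch of the MR-Egger regression described here, applied to simulated summarized data: variant-outcome associations are regressed on variant-exposure associations with an unconstrained intercept, the intercept estimating average directional pleiotropy and the slope the causal effect. All numbers below are illustrative:

    ```python
    # MR-Egger on simulated summarized data: weighted regression of
    # variant-outcome associations on variant-exposure associations.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    k = 30
    beta_x = rng.uniform(0.05, 0.3, size=k)          # variant-exposure associations
    se_y = rng.uniform(0.01, 0.03, size=k)           # SEs of variant-outcome associations
    pleiotropy = 0.02                                 # average directional pleiotropic effect
    beta_y = 0.5 * beta_x + pleiotropy + rng.normal(0, se_y)  # true causal effect 0.5

    X = sm.add_constant(beta_x)
    fit = sm.WLS(beta_y, X, weights=1 / se_y**2).fit()
    print("intercept (directional pleiotropy):", fit.params[0])
    print("slope (causal effect estimate):   ", fit.params[1])
    ```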

  14. Towards a sampling strategy for the assessment of forest condition at European level: combining country estimates.

    PubMed

    Travaglini, Davide; Fattorini, Lorenzo; Barbati, Anna; Bottalico, Francesca; Corona, Piermaria; Ferretti, Marco; Chirici, Gherardo

    2013-04-01

    A correct characterization of the status and trend of forest condition is essential to support reporting processes at national and international level. An international forest condition monitoring has been implemented in Europe since 1987 under the auspices of the International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests). The monitoring is based on harmonized methodologies, with individual countries being responsible for its implementation. Due to inconsistencies and problems in sampling design, however, the ICP Forests network is not able to produce reliable quantitative estimates of forest condition at European and sometimes at country level. This paper proposes (1) a set of requirements for status and change assessment and (2) a harmonized sampling strategy able to provide unbiased and consistent estimators of forest condition parameters and of their changes at both country and European level. Under the assumption that a common definition of forest holds among European countries, monitoring objectives, parameters of concern and accuracy indexes are stated. On the basis of fixed-area plot sampling performed independently in each country, an unbiased and consistent estimator of forest defoliation indexes is obtained at both country and European level, together with conservative estimators of their sampling variance and power in the detection of changes. The strategy adopts a probabilistic sampling scheme based on fixed-area plots selected by means of systematic or stratified schemes. Operative guidelines for its application are provided.

  15. Sources of Bias in the Goodman-Kruskal Gamma Coefficient Measure of Association: Implications for Studies of Metacognitive Processes

    ERIC Educational Resources Information Center

    Masson, Michael E. J.; Rotello, Caren M.

    2009-01-01

    In many cognitive, metacognitive, and perceptual tasks, measurement of performance or prediction accuracy may be influenced by response bias. Signal detection theory provides a means of assessing discrimination accuracy independent of such bias, but its application crucially depends on distributional assumptions. The Goodman-Kruskal gamma…

  16. Revisiting Key Assumptions of the Reading Level Framework

    ERIC Educational Resources Information Center

    Halladay, Juliet L.

    2012-01-01

    Since Emmett Betts first devised a framework of independent, instructional, and frustration reading levels in the 1940s, these levels have played a large role in classroom assessment and instruction. It is important for teachers to have a deep understanding of the research that supports the reading level framework. This article identifies four key…

  17. Omnibus Tests for Interactions in Repeated Measures Designs with Dichotomous Dependent Variables.

    ERIC Educational Resources Information Center

    Serlin, Ronald C.; Marascuilo, Leonard A.

    When examining a repeated measures design with independent groups for a significant group by trial interaction, classical analysis of variance or multivariate procedures can be used if the assumptions underlying the tests are met. Neither procedure may be justified for designs with small sample sizes and dichotomous dependent variables. An omnibus…

  18. Advanced Categorical Statistics: Issues and Applications in Communication Research.

    ERIC Educational Resources Information Center

    Denham, Bryan E.

    2002-01-01

    Discusses not only the procedures, assumptions, and applications of advanced categorical statistics, but also covers some common misapplications, from which a great deal can be learned. Addresses the use and limitations of cross-tabulation and chi-square analysis, as well as issues such as observation independence and artificial inflation of a…

  19. The Pedagogy of Subversion in History Education in Conflict-Ridden Areas

    ERIC Educational Resources Information Center

    Yogev, Esther

    2013-01-01

    The article proposes an approach that is based on the assumption that the fostering of the political-critical dimension in the study of history can develop an effective historical consciousness among young students of history that will strengthen their independent informed thinking, reflective skills, and the ability to show empathy. First, I…

  20. On Information Retrieval (IR) Systems: Revisiting Their Development, Evaluation Methodologies, and Assumptions (SIGs LAN, ED).

    ERIC Educational Resources Information Center

    Stirling, Keith

    2000-01-01

    Describes a session on information retrieval systems that planned to discuss relevance measures with Web-based information retrieval; retrieval system performance and evaluation; probabilistic independence of index terms; vector-based models; metalanguages and digital objects; how users assess the reliability, timeliness and bias of information;…

  1. Regional Expertise and Culture Proficiency

    DTIC Science & Technology

    2012-09-01

    Tool for Planners; Developing and Testing REC Rating…rated each competency and each behavior as more important than those respondents who had been deployed. We computed an independent-sample t-test to…each group; however, it was likely that the homogeneity of variance assumption of the t-test was violated. Therefore, we pooled the variance across

  2. A Study of Two Instructional Sequences Informed by Alternative Learning Progressions in Genetics

    ERIC Educational Resources Information Center

    Duncan, Ravit Golan; Choi, Jinnie; Castro-Faix, Moraima; Cavera, Veronica L.

    2017-01-01

    Learning progressions (LPs) are hypothetical models of how learning in a domain develops over time with appropriate instruction. In the domain of genetics, there are two independently developed alternative LPs. The main difference between the two progressions hinges on their assumptions regarding the accessibility of classical (Mendelian) versus…

  3. A Standardized Generalized Dimensionality Discrepancy Measure and a Standardized Model-Based Covariance for Dimensionality Assessment for Multidimensional Models

    ERIC Educational Resources Information Center

    Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka

    2015-01-01

    The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…

  4. Affect and Cognition: An Examination of Zajonc's Views.

    ERIC Educational Resources Information Center

    Thompson, Anne E.

    In a recent controversial article, "Feeling and Thinking: Preferences Need No Inferences" (1980), R. B. Zajonc argues in support of the independence of affect and cognition. Examination of the structure and assumptions of Zajonc's arguments suggests that they do not support the view that affect is non-cognitive. Zajonc appears to leap…

  5. Naïve Bayes classification in R.

    PubMed

    Zhang, Zhongheng

    2016-06-01

    Naïve Bayes classification is a simple probabilistic classification method based on Bayes' theorem with the assumption of independence between features. The model is trained on a training dataset and then makes predictions with the predict() function. This article introduces the two functions naiveBayes() and train() for performing Naïve Bayes classification.
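
    The article works in R with naiveBayes(), train(), and predict(); a rough Python analogue (an assumption on my part, not taken from the article) using scikit-learn's GaussianNB illustrates the same class-conditional independence assumption:

    ```python
    # Python analogue of a naive Bayes workflow: each feature is modeled
    # independently within each class (the class-conditional independence assumption).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GaussianNB().fit(X_train, y_train)   # train on the training dataset
    print(model.predict(X_test[:5]))             # predictions for new samples
    print(model.score(X_test, y_test))           # held-out accuracy
    ```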

  6. Diversity and Community Can Coexist.

    PubMed

    Stivala, Alex; Robins, Garry; Kashima, Yoshihisa; Kirley, Michael

    2016-03-01

    We examine the (in)compatibility of diversity and sense of community by means of agent-based models based on the well-known Schelling model of residential segregation and Axelrod model of cultural dissemination. We find that diversity and highly clustered social networks, on the assumptions of social tie formation based on spatial proximity and homophily, are incompatible when agent features are immutable, and this holds even for multiple independent features. We include both mutable and immutable features into a model that integrates Schelling and Axelrod models, and we find that even for multiple independent features, diversity and highly clustered social networks can be incompatible on the assumptions of social tie formation based on spatial proximity and homophily. However, this incompatibility breaks down when cultural diversity can be sufficiently large, at which point diversity and clustering need not be negatively correlated. This implies that segregation based on immutable characteristics such as race can possibly be overcome by sufficient similarity on mutable characteristics based on culture, which are subject to a process of social influence, provided a sufficiently large "scope of cultural possibilities" exists. © Society for Community Research and Action 2016.

  7. Robust Measurement via A Fused Latent and Graphical Item Response Theory Model.

    PubMed

    Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Ying, Zhiliang

    2018-03-12

    Item response theory (IRT) plays an important role in psychological and educational measurement. Unlike classical test theory, IRT models aggregate item-level information, yielding more accurate measurements. Most IRT models assume local independence, an assumption not likely to be satisfied in practice, especially when the number of items is large. Results in the literature and simulation studies in this paper reveal that misspecifying the local independence assumption may result in inaccurate measurements and differential item functioning. To provide more robust measurements, we propose an integrated approach that adds a graphical component to a multidimensional IRT model to offset the effect of unknown local dependence. The new model contains a confirmatory latent variable component, which measures the targeted latent traits, and a graphical component, which captures the local dependence. An efficient proximal algorithm is proposed for the parameter estimation and structure learning of the local dependence. This approach can substantially improve the measurement, given no prior information on the local dependence structure. The model can be applied to measure both a unidimensional latent trait and multidimensional latent traits.

  8. Completely device-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Aguilar, Edgar A.; Ramanathan, Ravishankar; Kofler, Johannes; Pawłowski, Marcin

    2016-08-01

    Quantum key distribution (QKD) is a provably secure way for two distant parties to establish a common secret key, which then can be used in a classical cryptographic scheme. Using quantum entanglement, one can reduce the necessary assumptions that the parties have to make about their devices, giving rise to device-independent QKD (DIQKD). However, in all existing protocols to date the parties need to have an initial (at least partially) random seed as a resource. In this work, we show that this requirement can be dropped. Using recent advances in the fields of randomness amplification and randomness expansion, we demonstrate that it is sufficient for the message the parties want to communicate to be (partially) unknown to the adversaries—an assumption without which any type of cryptography would be pointless to begin with. One party can use her secret message to locally generate a secret sequence of bits, which can then be openly used by herself and the other party in a DIQKD protocol. Hence our work reduces the requirements needed to perform secure DIQKD and establish safe communication.

  9. Statistical foundations of liquid-crystal theory

    PubMed Central

    Seguin, Brian; Fried, Eliot

    2013-01-01

    We develop a mechanical theory for systems of rod-like particles. Central to our approach is the assumption that the external power expenditure for any subsystem of rods is independent of the underlying frame of reference. This assumption is used to derive the basic balance laws for forces and torques. By considering inertial forces on par with other forces, these laws hold relative to any frame of reference, inertial or noninertial. Finally, we introduce a simple set of constitutive relations to govern the interactions between rods and find restrictions necessary and sufficient for these laws to be consistent with thermodynamics. Our framework provides a foundation for a statistical mechanical derivation of the macroscopic balance laws governing liquid crystals. PMID:23772091

  10. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    PubMed

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

    RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
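
    As one concrete example of an assumption-laden between-sample method, a sketch of DESeq-style median-of-ratios size factors follows; its key assumption is that most genes are not differentially expressed across samples. This is only one of the methods discussed in this literature, not a summary of the article's comparison:

    ```python
    # Median-of-ratios size factors (DESeq-style). Assumes most genes are not
    # differentially expressed, so the median ratio reflects sequencing depth.
    import numpy as np

    def median_of_ratios_size_factors(counts):
        """counts: array of shape (n_genes, n_samples) of raw read counts."""
        counts = np.asarray(counts, dtype=float)
        log_counts = np.log(counts)
        log_geo_means = log_counts.mean(axis=1)            # per-gene geometric mean (log scale)
        keep = np.isfinite(log_geo_means)                  # drop genes with any zero count
        ratios = log_counts[keep] - log_geo_means[keep, None]
        return np.exp(np.median(ratios, axis=0))           # one size factor per sample

    counts = np.array([[100, 200], [50, 100], [30, 60], [500, 400]])
    print(median_of_ratios_size_factors(counts))
    ```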

  11. Comparison of Factor Simplicity Indices for Dichotomous Data: DETECT R, Bentler's Simplicity Index, and the Loading Simplicity Index

    ERIC Educational Resources Information Center

    Finch, Holmes; Stage, Alan Kirk; Monahan, Patrick

    2008-01-01

    A primary assumption underlying several of the common methods for modeling item response data is unidimensionality, that is, test items tap into only one latent trait. This assumption can be assessed several ways, using nonlinear factor analysis and DETECT, a method based on the item conditional covariances. When multidimensionality is identified,…

  12. Empirical Benchmarks of Hidden Bias in Educational Research: Implication for Assessing How well Propensity Score Methods Approximate Experiments and Conducting Sensitivity Analysis

    ERIC Educational Resources Information Center

    Dong, Nianbo; Lipsey, Mark

    2014-01-01

    When randomized control trials (RCT) are not feasible, researchers seek other methods to make causal inference, e.g., propensity score methods. One of the underlying assumptions for the propensity score methods to obtain unbiased treatment effect estimates is the ignorability assumption, that is, conditional on the propensity score, treatment…

  13. Acting Smart: An Investigation of Assumptions and Principles Which Underpin Training and Assessment Within One Civil Construction Company.

    ERIC Educational Resources Information Center

    Searle, Jean; Kelly, Ann

    This report focuses on an investigation of assumptions and principles underpinning training and assessment programs offered by a civil construction company in Australia. Section 1 is an introduction. To conceptualize the new workplace and consider the role of literacy and numeracy in it, Section 2 addresses changing work conditions, features of a…

  14. Facilitative Dimensions in Interpersonal Relations: Verifying the Theoretical Assumptions of Carl Rogers in School, Family Education, Client-Centered Therapy, and Encounter Groups

    ERIC Educational Resources Information Center

    Tausch, Reinhard

    1978-01-01

    Summarizes numerous projects that investigated assumptions made by Carl Rogers about the necessary and sufficient conditions for significant positive change in person-to-person contact. Findings agree with Rogers' views about the importance of empathy, genuineness, and respect. Presented at the Thirtieth Congress of the Deutsche Gesellschaft für…

  15. Sign Tracking, but Not Goal Tracking, is Resistant to Outcome Devaluation

    PubMed Central

    Morrison, Sara E.; Bamkole, Michael A.; Nicola, Saleem M.

    2015-01-01

    During Pavlovian conditioning, a conditioned stimulus (CS) may act as a predictor of a reward to be delivered in another location. Individuals vary widely in their propensity to engage with the CS (sign tracking) or with the site of eventual reward (goal tracking). It is often assumed that sign tracking involves the association of the CS with the motivational value of the reward, resulting in the CS acquiring incentive value independent of the outcome. However, experimental evidence for this assumption is lacking. In order to test the hypothesis that sign tracking behavior does not rely on a neural representation of the outcome, we employed a reward devaluation procedure. We trained rats on a classic Pavlovian paradigm in which a lever CS was paired with a sucrose reward, then devalued the reward by pairing sucrose with illness in the absence of the CS. We found that sign tracking behavior was enhanced, rather than diminished, following reward devaluation; thus, sign tracking is clearly independent of a representation of the outcome. In contrast, goal tracking behavior was decreased by reward devaluation. Furthermore, when we divided rats into those with high propensity to engage with the lever (sign trackers) and low propensity to engage with the lever (goal trackers), we found that nearly all of the effects of devaluation could be attributed to the goal trackers. These results show that sign tracking and goal tracking behavior may be the output of different associative structures in the brain, providing insight into the mechanisms by which reward-associated stimuli—such as drug cues—come to exert control over behavior in some individuals. PMID:26733783

  16. Randomness Amplification under Minimal Fundamental Assumptions on the Devices

    NASA Astrophysics Data System (ADS)

    Ramanathan, Ravishankar; Brandão, Fernando G. S. L.; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Wojewódka, Hanna

    2016-12-01

    Recently, a physically realistic protocol for amplifying the randomness of Santha-Vazirani sources, producing cryptographically secure random bits, was proposed; however, for reasons of practical relevance, the crucial question remained open of whether this can be accomplished under the minimal conditions necessary for the task. Namely, is it possible to achieve randomness amplification using only two no-signaling components and in a situation where the violation of a Bell inequality only guarantees that some outcomes of the device for specific inputs exhibit randomness? Here, we solve this question and present a device-independent protocol for randomness amplification of Santha-Vazirani sources using a device consisting of two nonsignaling components. We show that the protocol can amplify any such source that is not fully deterministic into a fully random source while tolerating a constant noise rate, and we prove the composable security of the protocol against general no-signaling adversaries. Our main innovation is the proof that even the partial randomness certified by the two-party Bell test [a single input-output pair (u*, x*) for which the conditional probability P(x*|u*) is bounded away from 1 for all no-signaling strategies that optimally violate the Bell inequality] can be used for amplification. We introduce the methodology of a partial tomographic procedure on the empirical statistics obtained in the Bell test that ensures that the outputs constitute a linear min-entropy source of randomness. As a technical novelty that may be of independent interest, we prove that the Santha-Vazirani source satisfies an exponential concentration property given by a recently discovered generalized Chernoff bound.

  17. Critical porosity of gas enclosure in polar firn independent of climate

    NASA Astrophysics Data System (ADS)

    Florian Schaller, Christoph; Freitag, Johannes; Eisen, Olaf

    2017-11-01

    In order to interpret the paleoclimatic record stored in the air enclosed in polar ice cores, it is crucial to understand the fundamental lock-in process. Within the porous firn, bubbles are sealed continuously until the respective horizontal layer reaches a critical porosity. Present-day firn air models use a postulated temperature dependence of this value as the only parameter to adjust to the surrounding conditions of individual sites. However, no direct measurements of the firn microstructure could confirm these assumptions. Here we show that the critical porosity is a climate-independent constant by providing an extensive data set of micrometer-resolution 3-D X-ray computer tomographic measurements for ice cores representing different extremes of the temperature and accumulation ranges. We demonstrate why indirect measurements suggest a climatic dependence and substantiate our observations by applying percolation theory as a theoretical framework for bubble trapping. The incorporation of our results significantly influences the dating of trace gas records, changing gas-age-ice-age differences by up to more than 1000 years. This may further help resolve inconsistencies, such as differences between East Antarctic δ15N records (as a proxy for firn height) and model results. We expect our findings to be the basis for improved firn air and densification models, leading to lower dating uncertainties. The reduced coupling of proxies and surrounding conditions may allow for more sophisticated reinterpretations of trace gas records in terms of paleoclimatic changes and will benefit the development of new proxies, such as the air content as a marker of local insolation.

  18. A Repeated Trajectory Class Model for Intensive Longitudinal Categorical Outcome

    PubMed Central

    Lin, Haiqun; Han, Ling; Peduzzi, Peter N.; Murphy, Terrence E.; Gill, Thomas M.; Allore, Heather G.

    2014-01-01

    This paper presents a novel repeated latent class model for a longitudinal response that is frequently measured as in our prospective study of older adults with monthly data on activities of daily living (ADL) for more than ten years. The proposed method is especially useful when the longitudinal response is measured much more frequently than other relevant covariates. The repeated trajectory classes represent distinct temporal patterns of the longitudinal response wherein an individual’s membership in the trajectory classes may renew or change over time. Within a trajectory class, the longitudinal response is modeled by a class-specific generalized linear mixed model. Effectively, an individual may remain in a trajectory class or switch to another as the class membership predictors are updated periodically over time. The identification of a common set of trajectory classes allows changes among the temporal patterns to be distinguished from local fluctuations in the response. An informative event such as death is jointly modeled by class-specific probability of the event through shared random effects. We do not impose the conditional independence assumption given the classes. The method is illustrated by analyzing the change over time in ADL trajectory class among 754 older adults with 70500 person-months of follow-up in the Precipitating Events Project. We also investigate the impact of jointly modeling the class-specific probability of the event on the parameter estimates in a simulation study. The primary contribution of our paper is the periodic updating of trajectory classes for a longitudinal categorical response without assuming conditional independence. PMID:24519416

  19. Detecting Unsteady Blade Row Interaction in a Francis Turbine using a Phase-Lag Boundary Condition

    NASA Astrophysics Data System (ADS)

    Wouden, Alex; Cimbala, John; Lewis, Bryan

    2013-11-01

    For CFD simulations in turbomachinery, methods are typically used to reduce the computational cost. For example, the standard periodic assumption reduces the underlying mesh to a single blade passage in axisymmetric applications. If the simulation includes only a single array of blades with a uniform inlet condition, this assumption is adequate. However, to compute the interaction between successive blade rows of differing periodicity in an unsteady simulation, the periodic assumption breaks down and may produce inaccurate results. As a viable alternative, the phase-lag boundary condition assumes that the periodicity includes a temporal component which, if considered, allows a single passage to be modeled per blade row irrespective of differing periodicity. Prominently used in compressible CFD codes for the analysis of gas turbines and compressors, the phase-lag boundary condition is adapted to analyze the interaction between the guide vanes and rotor blades in an incompressible simulation of the 1989 GAMM Workshop Francis turbine using OpenFOAM. The implementation is based on the "direct-storage" method proposed in 1977 by Erdos and Alzner. The phase-lag simulation is compared with available data from the GAMM workshop as well as a full-wheel simulation. Funding provided by DOE Award number: DE-EE0002667.

  20. Accounting for imperfect detection of groups and individuals when estimating abundance.

    PubMed

    Clement, Matthew J; Converse, Sarah J; Royle, J Andrew

    2017-09-01

    If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.

  1. Geodesy of Amalthea and the Galilean Satellites of Jupiter

    NASA Astrophysics Data System (ADS)

    Schubert, G.; Anderson, J. D.; Jacobson, R. A.; Lau, E. L.; Moore, W. B.; Palguta, J.

    2003-12-01

    An important scientific legacy of the Galileo mission is the determination of the masses and quadrupole components of the gravitational fields of the Galilean satellites. A final report of the mission results is given including values of GM (G is the universal gravitational constant, M is satellite mass), the gravitational coefficients J2 and C22, and the correlation coefficient μ between J2 and C22. The values of J2 and C22 are deduced using the a priori assumption J2 = (10/3)C22. The least squares method for fitting the Doppler residuals does not fix this ratio, but allows J2 and C22 to vary independently and determines the correlation between them. The a priori assumption is consistent with the hydrostatic equilibrium of a satellite, but it does not require hydrostaticity. Values of μ show that J2 and C22 are independently determined only for Io; the ratio of J2 and C22 is consistent with a hydrostatic Io. J2 and C22 are not independently determined for Ganymede even though there are both equatorial and polar flybys of the satellite. A quadrupole field is insufficient to fit the Ganymede data to the noise level. The additional signal is interpreted in terms of mascon anomalies at the surface of Ganymede. The gravitational coefficients, together with the assumption that the degree-2 gravitational fields of the satellites derive from their hydrostatic distortions due to rotation and the Jovian tidal force, are used to infer the moments of inertia of the satellites and their internal structures. The mass and closest approach distance for Amalthea can be determined from Doppler data from the Galileo encounter of 5 November 2002. The final results indicate a density that is significantly smaller than the approximate 1000 kg m-3 density of water ice. The quadrupole components of Amalthea's gravitational field are undetectable in the encounter Doppler data.

  2. Accounting for imperfect detection of groups and individuals when estimating abundance

    USGS Publications Warehouse

    Clement, Matthew J.; Converse, Sarah J.; Royle, J. Andrew

    2017-01-01

    If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.
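
    As a rough illustration of the group-size component described above, the sketch below treats each observer's count of a detected group as an independent binomial undercount of the true group size and marginalizes over that size. It is a minimal sketch under assumed distributions (a Poisson prior on group size, a common individual detection probability) with hypothetical names; it is not the authors' implementation.

      import numpy as np
      from scipy.stats import binom, poisson

      def group_size_loglik(c1, c2, p_ind, lam, n_max=200):
          """Marginal log-likelihood of two observers' counts (c1, c2) of one detected group,
          assuming each count is a binomial undercount of the true size N with individual
          detection probability p_ind, and N ~ Poisson(lam) as an illustrative prior."""
          n = np.arange(max(c1, c2), n_max + 1)   # true size cannot be below either count
          lik = (binom.pmf(c1, n, p_ind) *
                 binom.pmf(c2, n, p_ind) *
                 poisson.pmf(n, lam))
          return np.log(lik.sum())

      # Example: both observers likely undercounted a group of unknown size
      print(group_size_loglik(c1=7, c2=9, p_ind=0.8, lam=10.0))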

  3. Rambling and trembling in response to body loading.

    PubMed

    Tahayor, Behdad; Riley, Zachary A; Mahmoudian, Armaghan; Koceja, David M; Hong, Siang Lee

    2012-04-01

    Various studies have suggested that postural sway is controlled by at least two subsystems. Rambling-Trembling analysis is a widely accepted methodology to dissociate the signals generated by these two hypothetical subsystems. The core assumption of this method is based on the equilibrium point hypothesis which suggests that the central nervous system preserves upright standing by transiently shifting the center of pressure (COP) from one equilibrium point to another. The trajectory generated by this shifting is referred to as rambling and its difference from the original COP signal is referred to as trembling. In this study we showed that these two components of COP are differentially affected when standing with external loads. Using Detrended Fluctuation analysis, we compared the pattern of these two signals in different configurations of body loading. Our findings suggest that by applying an external load, the dynamics of the trembling component is altered independently of the area of postural sway and also independently of the rambling component. The dynamics of rambling changed only during the backloading condition in which the postural sway area also substantially increased. It can be suggested that during loaded standing, the trembling mechanism (which is suggested to be activated by peripheral mechanisms and reflexes) is altered without affecting the central influence on the shifts of the equilibrium point.

  4. Radiation force on absorbing targets and power measurements of a high intensity focused ultrasound (HIFU) source

    NASA Astrophysics Data System (ADS)

    Qian, Zuwen; Zhu, Zhemin; Ye, Shigong; Jiang, Wenhua; Zhu, Houqing; Yu, Jinshen

    2010-10-01

    Based on the analytic expressions for the radiated field of a circular concave piston given by Hasegawa et al., an integral for calculation of the radiation force on a plane absorbing target in a spherically focused field is derived. A general relation between acoustic power P and normal radiation force Fn is obtained under the condition of kr ≫ 1. Numerical computation is carried out by using the symbolic computation program for practically focused sources and absorbing circular targets. The results show that, for a given source, there is a range of target positions where the radiation force is independent of the target’s position under the assumption that the contribution of the acoustic field behind the target to the radiation force can be neglected. The experiments are carried out and confirm that there is a range of target positions where the measured radiation force is basically independent of the target’s position even at high acoustic power (up to 700 W). It is believed that when the radiation force method is used to measure the acoustic power radiated from a focused source, the size of the target must be selected in such a way that no observable sound can be found in the region behind the target.
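
    For orientation, the textbook plane-wave limit gives a simple baseline relation for a totally absorbing target, P = c * F (with c the speed of sound in the medium); the integral relation derived in the paper generalizes this to focused fields under kr >> 1. The snippet below only evaluates that idealized baseline with made-up numbers, not the paper's focused-field expression.

      # Plane-wave, fully absorbing target baseline (idealization): P = c * F
      c_water = 1482.0   # m/s, approximate sound speed in water near room temperature
      force_n = 0.47     # N, hypothetical measured radiation force on the target
      power_w = c_water * force_n
      print(f"Estimated acoustic power: {power_w:.0f} W")   # roughly 700 W for ~0.47 N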

  5. Surface Uplift Rate Constrained by Multiple Terrestrial Cosmogenic Nuclides: Theory and Application from the Central Andean Plateau

    NASA Astrophysics Data System (ADS)

    McPhillips, D. F.; Hoke, G. D.; Niedermann, S.; Wittmann, H.

    2015-12-01

    There is widespread interest in quantifying the growth and decay of topography. However, prominent methods for quantitative determinations of paleoelevation rely on assumptions that are often difficult to test. For example, stable isotope paleoaltimetry relies on the knowledge of past lapse rates and moisture sources. Here, we demonstrate how cosmogenic 10Be - 21Ne and/or 10Be - 26Al sample pairs can be applied to provide independent estimates of surface uplift rate using both published data and new data from the Atacama Desert. Our approach requires a priori knowledge of the maximum age of exposure of the sampled surface. Ignimbrite surfaces provide practical sampling targets. When erosion is very slow (roughly, ≤1 m/Ma), it is often possible to constrain paleo surface uplift rate with precision comparable to that of stable isotopic methods (approximately ±50%). The likelihood of a successful measurement is increased by taking n samples from a landscape surface and solving for one regional paleo surface uplift rate and n local erosion rates. In northern Chile, we solve for surface uplift and erosion rates using three sample groups from the literature (Kober et al., 2007). In the two lower elevation groups, we calculate surface uplift rates of 110 (+60/-12) m/Myr and 160 (+120/-6) m/Myr and estimate uncertainties with a bootstrap approach. The rates agree with independent estimates derived from stream profile analyses nearby (Hoke et al., 2007). Our calculated uplift rates correspond to total uplift of 1200 and 850 m, respectively, when integrated over appropriate timescales. Erosion rates were too high to reliably calculate the uplift rate in the third, high elevation group. New cosmogenic nuclide analyses from the Atacama Desert are in progress, and preliminary results are encouraging. In particular, a replicate sample in the vicinity of the first Kober et al. (2007) group independently yields a surface uplift rate of 110 m/Myr. Compared to stable isotope proxies, cosmogenic nuclides potentially provide better constraints on surface uplift in places where assumptions about paleo-atmospheric conditions are hard to constrain and justify. F. S. Kober et al. (2007), Geomorphology, 83, 97-110. G. D. Hoke et al. (2007), Tectonics, 26, doi:10.1029/2006TC002082.

  6. High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei

    2018-01-01

    Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, without the independent identical distribution assumption, we consider the worst case scenario that the adversary launches the most powerful attacks against the quantum adversary. After considering statistical fluctuations and applying an 80 Gb × 45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s, with a failure probability less than 10^-5. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.
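
    The Toeplitz-matrix hashing step mentioned above can be illustrated with a toy extractor: a binary Toeplitz matrix generated from a random seed multiplies the raw bit string over GF(2) to produce a shorter, more uniform output. The dimensions and seed below are tiny and hypothetical, nothing like the 80 Gb × 45.6 Mb matrix used in the experiment.

      import numpy as np
      from scipy.linalg import toeplitz

      def toeplitz_hash(raw_bits, out_len, seed=0):
          """Toy Toeplitz-hashing extractor: output = T @ raw_bits (mod 2)."""
          rng = np.random.default_rng(seed)
          n = len(raw_bits)
          # A binary Toeplitz matrix is fully defined by its first column and first row.
          col = rng.integers(0, 2, out_len)
          row = rng.integers(0, 2, n)
          row[0] = col[0]
          T = toeplitz(col, row)
          return T.dot(np.asarray(raw_bits)) % 2

      raw = np.random.randint(0, 2, 64)        # stand-in for raw Bell-test output bits
      print(toeplitz_hash(raw, out_len=16))    # 16 extracted bits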

  7. Decoy-state quantum key distribution with a leaky source

    NASA Astrophysics Data System (ADS)

    Tamaki, Kiyoshi; Curty, Marcos; Lucamarini, Marco

    2016-06-01

    In recent years, there has been a great effort to prove the security of quantum key distribution (QKD) with a minimum number of assumptions. Besides its intrinsic theoretical interest, this would allow for larger tolerance against device imperfections in the actual implementations. However, even in this device-independent scenario, one assumption seems unavoidable, that is, the presence of a protected space devoid of any unwanted information leakage in which the legitimate parties can privately generate, process and store their classical data. In this paper we relax this unrealistic and hardly feasible assumption and introduce a general formalism to tackle the information leakage problem in most of existing QKD systems. More specifically, we prove the security of optical QKD systems using phase and intensity modulators in their transmitters, which leak the setting information in an arbitrary manner. We apply our security proof to cases of practical interest and show key rates similar to those obtained in a perfectly shielded environment. Our work constitutes a fundamental step forward in guaranteeing implementation security of quantum communication systems.

  8. Weak annihilation and new physics in charmless [Formula: see text] decays.

    PubMed

    Bobeth, Christoph; Gorbahn, Martin; Vickers, Stefan

    We use currently available data of nonleptonic charmless 2-body [Formula: see text] decays ([Formula: see text]) that are mediated by [Formula: see text] QCD- and QED-penguin operators to study weak annihilation and new-physics effects in the framework of QCD factorization. In particular we introduce one weak-annihilation parameter for decays related by [Formula: see text] quark interchange and test this universality assumption. Within the standard model, the data supports this assumption with the only exceptions in the [Formula: see text] system, which exhibits the well-known "[Formula: see text] puzzle", and some tensions in [Formula: see text]. Beyond the standard model, we simultaneously determine weak-annihilation and new-physics parameters from data, employing model-independent scenarios that address the "[Formula: see text] puzzle", such as QED-penguins and [Formula: see text] current-current operators. We discuss also possibilities that allow further tests of our assumption once improved measurements from LHCb and Belle II become available.

  9. Bell violation using entangled photons without the fair-sampling assumption.

    PubMed

    Giustina, Marissa; Mech, Alexandra; Ramelow, Sven; Wittmann, Bernhard; Kofler, Johannes; Beyer, Jörn; Lita, Adriana; Calkins, Brice; Gerrits, Thomas; Nam, Sae Woo; Ursin, Rupert; Zeilinger, Anton

    2013-05-09

    The violation of a Bell inequality is an experimental observation that forces the abandonment of a local realistic viewpoint--namely, one in which physical properties are (probabilistically) defined before and independently of measurement, and in which no physical influence can propagate faster than the speed of light. All such experimental violations require additional assumptions depending on their specific construction, making them vulnerable to so-called loopholes. Here we use entangled photons to violate a Bell inequality while closing the fair-sampling loophole, that is, without assuming that the sample of measured photons accurately represents the entire ensemble. To do this, we use the Eberhard form of Bell's inequality, which is not vulnerable to the fair-sampling assumption and which allows a lower collection efficiency than other forms. Technical improvements of the photon source and high-efficiency transition-edge sensors were crucial for achieving a sufficiently high collection efficiency. Our experiment makes the photon the first physical system for which each of the main loopholes has been closed, albeit in different experiments.

  10. THE ROLES OF COMBAT EXPOSURE, PERSONAL VULNERABILITY, AND INVOLVEMENT IN HARM TO CIVILIANS OR PRISONERS IN VIETNAM WAR-RELATED POSTTRAUMATIC STRESS DISORDER

    PubMed Central

    Dohrenwend, Bruce P.; Yager, Thomas J.; Wall, Melanie M.; Adams, Ben G.

    2014-01-01

    The diagnosis, Posttraumatic Stress Disorder, was introduced in 1980 amidst debate about the psychiatric toll of the Vietnam War. There is controversy, however, about its central assumption that potentially traumatic stressors are more important than personal vulnerability in causing the disorder. We tested this assumption with data from a rigorously diagnosed male subsample (n = 260) from the National Vietnam Veterans Readjustment Study. Combat exposure, pre-war vulnerability, and involvement in harming civilians or prisoners were examined, with only combat exposure proving necessary for disorder onset. While none of the three factors proved sufficient, estimated onset reached 97% for veterans high on all three, with harm to civilians or prisoners showing the largest independent contribution. Severity of combat exposure proved more important than pre-war vulnerability in onset; pre-war vulnerability at least as important in long-term persistence. Implications for the primacy of the stressor assumption, further research, and policy are discussed. PMID:25309830

  11. A novel measurand independent of the distance between the source and detector for continuous wave near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Kiguchi, Masashi; Funane, Tsukasa; Sato, Hiroki

    2017-06-01

    A new measurand is proposed for use in continuous wave near-infrared spectroscopy (cw-NIRS). The conventional measurand of cw-NIRS is l△c, which is the product of the change in the hemoglobin concentration (△c) and the partial path length (l), which depends on the source-detector (SD) distance (d). The SD distance must remain constant during cw-NIRS measurements, and we cannot compare the l△c value with that obtained using a different SD distance. In addition, the conventional measurand obtained using the standard measurement style sometimes includes a contribution from the human scalp. The SD distance independent (SID) measurand obtained using multi-SD distances is proportional to the product of the change in hemoglobin concentration and the derivative of the partial path length for the deep region with no scalp contribution under the assumption of a layer model. The principle of SID was validated by the layered phantom study. In order to check the limitations of this assumption, a human study was conducted. The value of the SID measurand for the left side of the forehead during a working memory task was approximately independent of the SD distance between 16 and 32 mm. The SID measurand and the standardized optode arrangement using flexible SD distances in a head coordinate system should be helpful for comparing data in a population study.
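
    A minimal numerical sketch of the idea, under the layer-model assumption stated above and with made-up values: measure the conventional product l(d)·△c at several SD distances and differentiate it with respect to d, so that a scalp layer contributing a distance-independent offset drops out.

      import numpy as np

      # Hypothetical multi-distance values of the conventional measurand l(d) * delta_c
      # (arbitrary units) at several source-detector distances d (mm); the constant part
      # stands in for a scalp contribution.
      d = np.array([16.0, 20.0, 24.0, 28.0, 32.0])
      ldc = np.array([0.041, 0.056, 0.070, 0.085, 0.099])

      # SID-style quantity: derivative of l * delta_c with respect to d.
      # A d-independent scalp offset cancels in the derivative.
      sid = np.gradient(ldc, d)
      print(sid)   # roughly constant across distances if the layer model holds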

  12. On Nomological Validity and Auxiliary Assumptions: The Importance of Simultaneously Testing Effects in Social Cognitive Theories Applied to Health Behavior and Some Guidelines

    PubMed Central

    Hagger, Martin S.; Gucciardi, Daniel F.; Chatzisarantis, Nikos L. D.

    2017-01-01

    Tests of social cognitive theories provide informative data on the factors that relate to health behavior, and the processes and mechanisms involved. In the present article, we contend that tests of social cognitive theories should adhere to the principles of nomological validity, defined as the degree to which predictions in a formal theoretical network are confirmed. We highlight the importance of nomological validity tests to ensure theory predictions can be disconfirmed through observation. We argue that researchers should be explicit on the conditions that lead to theory disconfirmation, and identify any auxiliary assumptions on which theory effects may be conditional. We contend that few researchers formally test the nomological validity of theories, or outline conditions that lead to model rejection and the auxiliary assumptions that may explain findings that run counter to hypotheses, raising potential for ‘falsification evasion.’ We present a brief analysis of studies (k = 122) testing four key social cognitive theories in health behavior to illustrate deficiencies in reporting theory tests and evaluations of nomological validity. Our analysis revealed that few articles report explicit statements suggesting that their findings support or reject the hypotheses of the theories tested, even when findings point to rejection. We illustrate the importance of explicit a priori specification of fundamental theory hypotheses and associated auxiliary assumptions, and identification of the conditions which would lead to rejection of theory predictions. We also demonstrate the value of confirmatory analytic techniques, meta-analytic structural equation modeling, and Bayesian analyses in providing robust converging evidence for nomological validity. We provide a set of guidelines for researchers on how to adopt and apply the nomological validity approach to testing health behavior models. PMID:29163307

  13. Statistical Trajectory Estimation Program (STEP) implementation for BLDT post flight trajectory simulation

    NASA Technical Reports Server (NTRS)

    Shields, W. E.

    1973-01-01

    Tests were conducted to provide flight conditions for qualifying the Viking Decelerator System in a simulated Mars environment. A balloon launched decelerator test (BLDT) vehicle which has an external shape similar to the actual Mars Viking Lander Capsule was used so that the decelerator would be deployed in the wake of a blunt body. An effort was made to simulate the BLDT vehicle flights from the time they were dropped from the balloon, through decelerator deployment, until stable decelerator conditions were reached. The procedure used to simulate these flights using the Statistical Trajectory Estimation Program (STEP) is discussed. Using primarily ground-based position radar and vehicle onboard rate gyro and accelerometer data, the STEP produces a minimum variance solution of the vehicle trajectory and calculates vehicle attitude histories. Using film from cameras in the vehicle along with a computer program, attitude histories for portions of the flight before and after decelerator deployment were calculated independent of the STEP simulation. With the assumption that the vehicle motions derived from camera data are accurate, a comparison reveals that STEP was able to simulate vehicle motions for all flights both before and after decelerator deployment.

  14. Estimating Aquifer Properties Using Sinusoidal Pumping Tests

    NASA Astrophysics Data System (ADS)

    Rasmussen, T. C.; Haborak, K. G.; Young, M. H.

    2001-12-01

    We develop the theoretical and applied framework for using sinusoidal pumping tests to estimate aquifer properties for confined, leaky, and partially penetrating conditions. The framework 1) derives analytical solutions for three boundary conditions suitable for many practical applications, 2) validates the analytical solutions against a finite element model, 3) establishes a protocol for conducting sinusoidal pumping tests, and 4) estimates aquifer hydraulic parameters based on the analytical solutions. The analytical solutions to sinusoidal stimuli in radial coordinates are derived for boundary value problems that are analogous to the Theis (1935) confined aquifer solution, the Hantush and Jacob (1955) leaky aquifer solution, and the Hantush (1964) partially penetrated confined aquifer solution. The analytical solutions compare favorably to a finite-element solution of a simulated flow domain, except in the region immediately adjacent to the pumping well where the implicit assumption of zero borehole radius is violated. The procedure is demonstrated in one unconfined and two confined aquifer units near the General Separations Area at the Savannah River Site, a federal nuclear facility located in South Carolina. Aquifer hydraulic parameters estimated using this framework provide independent confirmation of parameters obtained from conventional aquifer tests. The sinusoidal approach also resulted in the elimination of investigation-derived wastes.
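
    In practice, the observation-well response to a sinusoidal stimulus is usually summarized by its amplitude attenuation and phase lag at the known pumping frequency, which can be extracted with an ordinary least-squares sinusoid fit before the analytical solutions are inverted for aquifer parameters. The sketch below shows only that fitting step on synthetic data; names and values are hypothetical.

      import numpy as np

      def fit_sinusoid(t, h, omega):
          """Least-squares fit h(t) ~ a*cos(omega*t) + b*sin(omega*t) + c.
          Returns the amplitude and phase lag of the response at angular frequency omega."""
          X = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
          (a, b, c), *_ = np.linalg.lstsq(X, h, rcond=None)
          return np.hypot(a, b), np.arctan2(b, a)

      omega = 2 * np.pi / 3600.0                    # one pumping cycle per hour
      t = np.arange(0.0, 6 * 3600.0, 10.0)          # six hours of 10 s samples
      h = 0.12 * np.cos(omega * t - 0.8) + 0.01 * np.random.randn(t.size)
      print(fit_sinusoid(t, h, omega))              # approximately (0.12, 0.8)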

  15. A generalized population dynamics model for reproductive interference with absolute density dependence.

    PubMed

    Kyogoku, Daisuke; Sota, Teiji

    2017-05-17

    Interspecific mating interactions, or reproductive interference, can affect population dynamics, species distribution and abundance. Previous population dynamics models have assumed that the impact of frequency-dependent reproductive interference depends on the relative abundances of species. However, this assumption could be an oversimplification inappropriate for making quantitative predictions. Therefore, a more general model to forecast population dynamics in the presence of reproductive interference is required. Here we developed a population dynamics model to describe the absolute density dependence of reproductive interference, which appears likely when encounter rate between individuals is important. Our model (i) can produce diverse shapes of isoclines depending on parameter values and (ii) predicts weaker reproductive interference when absolute density is low. These novel characteristics can create conditions where coexistence is stable and independent from the initial conditions. We assessed the utility of our model in an empirical study using an experimental pair of seed beetle species, Callosobruchus maculatus and Callosobruchus chinensis. Reproductive interference became stronger with increasing total beetle density even when the frequencies of the two species were kept constant. Our model described the effects of absolute density and showed a better fit to the empirical data than the existing model overall.

  16. How Can Clients Improve the Quality of Transport Infrastructure Projects? The Role of Knowledge Management and Incentives

    PubMed Central

    Warsame, Abukar; Borg, Lena; Lind, Hans

    2013-01-01

    The aim of this paper is to argue for a number of statements about what is important for a client to do in order to improve quality in new infrastructure projects, with a focus on procurement and organizational issues. The paper synthesizes theoretical and empirical results concerning organizational performance, especially the role of the client for the quality of a project. The theoretical framework used is contract theory and transaction cost theory, where assumptions about rationality and self-interest are made and where incentive problems, asymmetric information, and moral hazard are central concepts. It is argued that choice of procurement type will not be a crucial factor. There is no procurement method that guarantees a better quality than another. We argue that given the right conditions all procurement methods can give good results, and given the wrong conditions, all of them can lead to low quality. What is crucial is how the client organization manages knowledge and the incentives for the members of the organization. This can be summarized as “organizational culture.” One way to improve knowledge and create incentives is to use independent second opinions in a systematic way. PMID:24250274

  17. The Emperor's sham - wrong assumption that sham needling is sham.

    PubMed

    Lundeberg, Thomas; Lund, Iréne; Näslund, Jan; Thomas, Moolamanil

    2008-12-01

    During the last five years a large number of randomised controlled clinical trials (RCTs) have been published on the efficacy of acupuncture in different conditions. In most of these studies verum is compared with sham acupuncture. In general both verum and sham have been found to be effective, and often with little reported difference in outcome. This has repeatedly led to the conclusion that acupuncture is no more effective than placebo treatment. However, this conclusion is based on the assumption that sham acupuncture is inert. Since sham acupuncture evidently is merely another form of acupuncture from the physiological perspective, the assumption that sham is sham is incorrect and conclusions based on this assumption are therefore invalid. Clinical guidelines based on such conclusions may therefore exclude suffering patients from valuable treatments.

  18. How to validate similarity in linear transform models of event-related potentials between experimental conditions?

    PubMed

    Cong, Fengyu; Lin, Qiu-Hua; Astikainen, Piia; Ristaniemi, Tapani

    2014-10-30

    It is well-known that data of event-related potentials (ERPs) conform to the linear transform model (LTM). For group-level ERP data processing using principal/independent component analysis (PCA/ICA), ERP data of different experimental conditions and different participants are often concatenated. It is theoretically assumed that different experimental conditions and different participants possess the same LTM. However, how to validate this assumption has seldom been reported in terms of signal processing methods. When ICA decomposition is globally optimized for ERP data of one stimulus, we gain the ratio between two coefficients mapping a source in the brain to two points along the scalp. Based on such a ratio, we defined a relative mapping coefficient (RMC). If RMCs between two conditions for an ERP are not significantly different in practice, mapping coefficients of this ERP between the two conditions are statistically identical. We examined whether the same LTM of ERP data could be applied for two different stimulus types of fearful and happy facial expressions. They were used in an ignore oddball paradigm in adult human participants. We found no significant difference in LTMs (based on ICASSO) of N170 responses to the fearful and the happy faces in terms of RMCs of N170. We found no existing methods for straightforward comparison. The proposed RMC in light of ICA decomposition is an effective approach for validating the similarity of LTMs of ERPs between experimental conditions. This is fundamental for applying group-level PCA/ICA to ERP data. Copyright © 2014 Elsevier B.V. All rights reserved.
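
    A toy version of the ratio idea, under the assumptions stated above: fit ICA separately to two conditions generated by the same linear transform model, then compare the ratio of one component's mixing weights at two scalp channels. sklearn's FastICA is used purely for illustration, the channel and component indices are arbitrary, and the crucial step of matching components across conditions is omitted here.

      import numpy as np
      from sklearn.decomposition import FastICA

      def rmc(X, comp=0, ch_a=0, ch_b=1, seed=0):
          """Relative mapping coefficient: ratio of one ICA component's mixing weights
          at two channels, from data shaped (samples x channels)."""
          ica = FastICA(n_components=X.shape[1], random_state=seed, max_iter=1000)
          ica.fit(X)
          A = ica.mixing_                    # channels x components
          return A[ch_a, comp] / A[ch_b, comp]

      rng = np.random.default_rng(0)
      mixing = rng.normal(size=(8, 8))                  # shared linear transform model
      cond1 = rng.laplace(size=(2000, 8)) @ mixing.T    # condition 1 data
      cond2 = rng.laplace(size=(2000, 8)) @ mixing.T    # condition 2 data
      # In real use, components must first be matched across the two fits
      # (e.g., by correlating scalp maps) before their RMCs are compared.
      print(rmc(cond1), rmc(cond2))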

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Mark S.; Son Wonmin; Heaney, Libby

    Recently, it was demonstrated by Son et al., Phys. Rev. Lett. 102, 110404 (2009), that a separable bipartite continuous-variable quantum system can violate the Clauser-Horne-Shimony-Holt (CHSH) inequality via operationally local transformations. Operationally local transformations are parametrized only by local variables; however, in order to allow violation of the CHSH inequality, a maximally entangled ancilla was necessary. The use of the entangled ancilla in this scheme caused the state under test to become dependent on the measurement choice one uses to calculate the CHSH inequality, thus violating one of the assumptions used in deriving a Bell inequality, namely, the free will or statistical independence assumption. The novelty in this scheme however is that the measurement settings can be external free parameters. In this paper, we generalize these operationally local transformations for multipartite Bell inequalities (with dichotomic observables) and provide necessary and sufficient conditions for violation within this scheme. Namely, a violation of a multipartite Bell inequality in this setting is contingent on whether an ancillary system admits any realistic local hidden variable model (i.e., whether the ancilla violates the given Bell inequality). These results indicate that violation of a Bell inequality performed on a system does not necessarily imply that the system is nonlocal. In fact, the system under test may be completely classical. However, nonlocality must have resided somewhere; this may have been in the environment, the physical variables used to manipulate the system, or the detectors themselves, provided the measurement settings are external free variables.

  20. A New Method for Partial Correction of Residual Confounding in Time-Series and Other Observational Studies.

    PubMed

    Flanders, W Dana; Strickland, Matthew J; Klein, Mitchel

    2017-05-15

    Methods exist to detect residual confounding in epidemiologic studies. One requires a negative control exposure with 2 key properties: 1) conditional independence of the negative control and the outcome (given modeled variables) absent confounding and other model misspecification, and 2) associations of the negative control with uncontrolled confounders and the outcome. We present a new method to partially correct for residual confounding: When confounding is present and our assumptions hold, we argue that estimators from models that include a negative control exposure with these 2 properties tend to be less biased than those from models without it. Using regression theory, we provide theoretical arguments that support our claims. In simulations, we empirically evaluated the approach using a time-series study of ozone effects on asthma emergency department visits. In simulations, effect estimators from models that included the negative control exposure (ozone concentrations 1 day after the emergency department visit) had slightly or modestly less residual confounding than those from models without it. Theory and simulations show that including the negative control can reduce residual confounding, if our assumptions hold. Our method differs from available methods because it uses a regression approach involving an exposure-based indicator rather than a negative control outcome to partially correct for confounding. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
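
    A hedged sketch of the approach with synthetic data and made-up variable names (not the authors' ozone-asthma dataset): fit the outcome model with and without a negative control exposure that shares the unmeasured confounder, and compare the coefficient of the exposure of interest.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 1000
      confounder = rng.normal(size=n)                        # unmeasured confounder
      ozone = 0.8 * confounder + rng.normal(size=n)          # exposure of interest
      ozone_lead = 0.8 * confounder + rng.normal(size=n)     # negative control: "exposure" after the outcome
      visits = rng.poisson(np.exp(0.1 + 0.0 * ozone + 0.5 * confounder))   # true ozone effect = 0

      df = pd.DataFrame({"visits": visits, "ozone": ozone, "ozone_lead": ozone_lead})
      m_naive = sm.GLM(df["visits"], sm.add_constant(df[["ozone"]]),
                       family=sm.families.Poisson()).fit()
      m_nc = sm.GLM(df["visits"], sm.add_constant(df[["ozone", "ozone_lead"]]),
                    family=sm.families.Poisson()).fit()

      # With residual confounding, the naive ozone coefficient is biased away from zero;
      # adding the negative control typically pulls it part of the way back.
      print(m_naive.params["ozone"], m_nc.params["ozone"])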

  1. A Minimalist Analysis of English Topicalization: A Phase-Based Cartographic Complementizer Phrase (CP) Perspective.

    PubMed

    Tanaka, Hiroyoshi

    Under the basic tenet that syntactic derivation offers an optimal solution to both phonological realization and semantic interpretation of linguistic expression, the recent minimalist framework of syntactic theory claims that the basic unit for the derivation is equivalent to a syntactic propositional element, which is called a phase. In this analysis, syntactic derivation is assumed to proceed at phasal projections that include Complementizer Phrases (CP). However, some empirical problems have been pointed out with respect to the failure of multiple occurrences of discourse-related elements in the CP domain. This problem can be easily overcome if the alternative approach in the recent minimalist perspective, which is called Cartographic CP analysis, is adopted, but this may raise a theoretical issue about the tension between phasality and the four kinds of functional projections assumed in this analysis (Force Phrase (ForceP), Finite Phrase (FinP), Topic Phrase (TopP) and Focus Phrase (FocP)). This paper argues that a hybrid analysis with these two influential approaches can be proposed by adopting the reasonable assumption that syntactically requisite projections (i.e., ForceP and FinP) are phases and independently constitute a phasehood with relevant heads in the derivation. This then enables us to capture various syntactic properties of the Topicalization construction in English. Our proposed analysis, coupled with some additional assumptions and observations in recent minimalist studies, can be extended to incorporate peculiar properties in temporal/conditional adverbials and imperatives.

  2. Technical Consultation of the Hubble Space Telescope (HST) Nickel Hydrogen (NiH2) Battery Charge Capacity Prediction. Version 1.0

    NASA Technical Reports Server (NTRS)

    Gentz, Steven J.; Pandipati, Radha; Ling, Jerri; Miller, Thomas; Jeevarajan, Judith; Halpert, Gerald; Zimmerman, Albert

    2005-01-01

    The purpose of the GSFC position paper is to identify critical HST milestone dates for continued science studies followed by the attachment of a re-entry module or a robotic servicing mission. The paper examines the viability of the HST with respect to the continued NiH2 battery charge capacity. In the course of the assessment, it was recognized that the HST battery thermal control system has an average heat dissipation limitation of 30 W per bay per orbit cycle. This thermal constraint will continue to govern options for battery capacity maintenance. In addition, the HST usage represents the longest exposure of NiH2 batteries to Low Earth Orbit (LEO) at the current level of Depth of Discharge (DOD). Finally, the current battery life is at the limit predicted by the manufacturer, Eaglepicher. Therefore, given these factors, the potential exists that the HST battery capacities could radically degrade at any point. Given this caveat on any life extrapolations, the conservative model proposed in the GSFC position paper was viewed by the NESC as having several technical assumptions such as limited utilization of flight battery capacity data, the susceptibility of the proposed prediction method to large variations when supplemented with additional information, and the failure to qualitatively or quantitatively assess life prediction sensitivities. The NESC conducted an independent evaluation of the supporting information and assumptions to generate the predictions for battery capacity loss and practicality of on-orbit battery conditioning.

  3. Practical statistics in pain research.

    PubMed

    Kim, Tae Kyun

    2017-10-01

    Pain is subjective, while statistics related to pain research are objective. This review was written to help researchers involved in pain research make statistical decisions. The main issues are related with the level of scales that are often used in pain research, the choice of statistical methods between parametric or nonparametric statistics, and problems which arise from repeated measurements. In the field of pain research, parametric statistics used to be applied in an erroneous way. This is closely related with the scales of data and repeated measurements. The level of scales includes nominal, ordinal, interval, and ratio scales. The level of scales affects the choice of statistics between parametric or non-parametric methods. In the field of pain research, the most frequently used pain assessment scale is the ordinal scale, which would include the visual analogue scale (VAS). There used to be another view, however, which considered the VAS to be an interval or ratio scale, so that the usage of parametric statistics would be accepted practically in some cases. Repeated measurements of the same subjects always complicates statistics. It means that measurements inevitably have correlations between each other, and would preclude the application of one-way ANOVA in which independence between the measurements is necessary. Repeated measures of ANOVA (RMANOVA), however, would permit the comparison between the correlated measurements as long as the condition of sphericity assumption is satisfied. Conclusively, parametric statistical methods should be used only when the assumptions of parametric statistics, such as normality and sphericity, are established.
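
    The repeated-measures point can be illustrated with a small synthetic example: the same subjects are measured at three time points, so a one-way ANOVA that assumes independent groups is inappropriate, while a repeated-measures-friendly procedure (here the nonparametric Friedman test, which also suits ordinal VAS-type scores) respects the within-subject structure. The data and effect sizes below are invented for illustration.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_subjects = 20
      subject_effect = rng.normal(0.0, 2.0, n_subjects)          # induces within-subject correlation
      t0 = subject_effect + rng.normal(5.0, 1.0, n_subjects)     # e.g., VAS at baseline
      t1 = subject_effect + rng.normal(4.2, 1.0, n_subjects)     # after one week
      t2 = subject_effect + rng.normal(3.5, 1.0, n_subjects)     # after two weeks

      # Inappropriate here: one-way ANOVA treats the three columns as independent groups.
      print(stats.f_oneway(t0, t1, t2))

      # A repeated-measures-friendly nonparametric alternative for ordinal scores:
      print(stats.friedmanchisquare(t0, t1, t2))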

  4. The Effect of Multicollinearity and the Violation of the Assumption of Normality on the Testing of Hypotheses in Regression Analysis.

    ERIC Educational Resources Information Center

    Vasu, Ellen S.; Elmore, Patricia B.

    The effects of the violation of the assumption of normality coupled with the condition of multicollinearity upon the outcome of testing the hypothesis Beta equals zero in the two-predictor regression equation are investigated. A Monte Carlo approach was utilized in which three different distributions were sampled for two sample sizes over…

  5. The Role of Cloud Contamination, Aerosol Layer Height and Aerosol Model in the Assessment of the OMI Near-UV Retrievals Over the Ocean

    NASA Technical Reports Server (NTRS)

    Gasso, Santiago; Torres, Omar

    2016-01-01

    Retrievals of aerosol optical depth (AOD) at 388 nm over the ocean from the Ozone Monitoring Instrument (OMI) two-channel near-UV algorithm (OMAERUV) have been compared with independent AOD measurements. The analysis was carried out over the open ocean (OMI and MODerate-resolution Imaging Spectrometer (MODIS) AOD comparisons) and over coastal and island sites (OMI and AERONET, the AErosol RObotic NETwork). Additionally, a research version of the retrieval algorithm (using MODIS and CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) information as constraints) was utilized to evaluate the sensitivity of the retrieval to different assumed aerosol properties. Overall, the comparison resulted in differences (OMI minus independent measurements) within the expected levels of uncertainty for the OMI AOD retrievals (0.1 for AOD less than 0.3, 30% for AOD greater than 0.3). Using examples from case studies with outliers, the reasons that led to the observed differences were examined with specific purpose to determine whether they are related to instrument limitations (i.e., pixel size, calibration) or algorithm assumptions (such as aerosol shape, aerosol height). The analysis confirms that OMAERUV does an adequate job at rejecting cloudy scenes within the instrument's capabilities. There is a residual cloud contamination in OMI pixels with quality flag 0 (the best conditions for aerosol retrieval according to the algorithm), resulting in a bias towards high AODs in OMAERUV. This bias is more pronounced at low concentrations of absorbing aerosols (AOD 388 nm approximately less than 0.5). For higher aerosol loadings, the bias remains within OMI's AOD uncertainties. In pixels where OMAERUV assigned a dust aerosol model, a fraction of them (less than 20 %) had retrieved AODs significantly lower than AERONET and MODIS AODs. In a case study, a detailed examination of the aerosol height from CALIOP and the AODs from MODIS, along with sensitivity tests, was carried out by varying the different assumed parameters in the retrieval (imaginary index of refraction, size distribution, aerosol height, particle shape). It was found that the spherical shape assumption for dust in the current retrieval is the main cause of the underestimate. In addition, it is demonstrated in an example how an incorrect assumption of the aerosol height can lead to an underestimate. Nevertheless, this is not as significant as the effect of particle shape. These findings will be incorporated in a future version of the retrieval algorithm.

  6. The Performance of Local Dependence Measures with Psychological Data

    ERIC Educational Resources Information Center

    Houts, Carrie R.; Edwards, Michael C.

    2013-01-01

    The violation of the assumption of local independence when applying item response theory (IRT) models has been shown to have a negative impact on all estimates obtained from the given model. Numerous indices and statistics have been proposed to aid analysts in the detection of local dependence (LD). A Monte Carlo study was conducted to evaluate…

  7. Using Human Givens Therapy to Support the Well-Being of Adolescents: A Case Example

    ERIC Educational Resources Information Center

    Yates, Yvonne; Atkinson, Cathy

    2011-01-01

    This article outlines the use of Human Givens (HG) therapy with adolescents reporting poor subjective well-being. HG therapy is based on the assumption that human beings have innate needs, which, if unmet, lead to emotional distress and mental health problems. Hitherto, there has been no independently published empirical research into the efficacy…

  8. Robust Means Modeling: An Alternative for Hypothesis Testing of Independent Means under Variance Heterogeneity and Nonnormality

    ERIC Educational Resources Information Center

    Fan, Weihua; Hancock, Gregory R.

    2012-01-01

    This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…

  9. The Effect of Program Context on Childrens' Learning from a Television News Story.

    ERIC Educational Resources Information Center

    Drew, Dan; Reeves, Byron

    To test the assumptions that children's perceptions of a news program affect their learning and that their perceptions are in turn affected by age and program context, a study was undertaken involving 435 third through seventh graders. The specific independent and perceptual variables examined were: believability, liking the story and program,…

  10. Reconsidering the Simple View of Reading in an Intriguing Case of Equivalent Models: Commentary on Tunmer and Chapman (2012)

    ERIC Educational Resources Information Center

    Wagner, Richard K.; Herrera, Sarah K.; Spencer, Mercedes; Quinn, Jamie M.

    2015-01-01

    Recently, Tunmer and Chapman provided an alternative model of how decoding and listening comprehension affect reading comprehension that challenges the simple view of reading. They questioned the simple view's fundamental assumption that oral language comprehension and decoding make independent contributions to reading comprehension by arguing…

  11. Effect of faulting on ground-water movement in the Death Valley Region, Nevada and California

    USGS Publications Warehouse

    Faunt, Claudia C.

    1997-01-01

    The current crustal stress field was combined with fault orientations to predict potential effects of faults on the regional groundwater flow regime. Numerous examples of fault-controlled ground-water flow exist within the study area. Hydrologic data provided an independent method for checking some of the assumptions concerning preferential flow paths.

  12. A Common Capacity Limitation for Response and Item Selection in Working Memory

    ERIC Educational Resources Information Center

    Janczyk, Markus

    2017-01-01

    Successful completion of any cognitive task requires selecting a particular action and the object the action is applied to. Oberauer (2009) suggested a working memory (WM) model comprising a declarative and a procedural part with analogous structures. One important assumption of this model is that both parts work independently of each other, and…

  13. Leisure Worlds: Situations, Motivations and Young People's Encounters with Offending and Victimization

    ERIC Educational Resources Information Center

    Tanner, Julian; Asbridge, Mark; Wortley, Scot

    2015-01-01

    With information supplied by a large (n = 3393) sample of high school students from Toronto, this paper tests the assumption that three forms of leisure activity--peer, risky, and self-improving leisure--have a relatively independent impact upon patterns of offending and victimization. Although we find significant support for this proposition, we…

  14. Modeling Differential Item Functioning Using a Generalization of the Multiple-Group Bifactor Model

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rijmen, Frank; Rabe-Hesketh, Sophia

    2013-01-01

    The authors present a generalization of the multiple-group bifactor model that extends the classical bifactor model for categorical outcomes by relaxing the typical assumption of independence of the specific dimensions. In addition to the means and variances of all dimensions, the correlations among the specific dimensions are allowed to differ…

  15. When Language of Instruction and Language of Application Differ: Cognitive Costs of Bilingual Mathematics Learning

    ERIC Educational Resources Information Center

    Saalbach, Henrik; Eckstein, Doris; Andri, Nicoletta; Hobi, Reto; Grabner, Roland H.

    2013-01-01

    Bilingual education programs implicitly assume that the acquired knowledge is represented in a language-independent way. This assumption, however, stands in strong contrast to research findings showing that information may be represented in a way closely tied to the specific language of instruction and learning. The present study aims to examine…

  16. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable

    PubMed Central

    2012-01-01

    Background: When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods: An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Results: Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. Conclusions: The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
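
    The equal-variance binormal result quoted above can be checked numerically: with a normal covariate of common standard deviation sigma in both outcome classes, the c-statistic equals the standard normal CDF evaluated at beta * sigma / sqrt(2), where beta is the log-odds ratio per unit of the covariate. The sketch below is a hedged verification with illustrative values, not the authors' simulation code.

      import numpy as np
      from scipy.stats import norm, mannwhitneyu

      rng = np.random.default_rng(0)
      mu0, mu1, sigma, n = 0.0, 1.0, 1.5, 20_000
      x0 = rng.normal(mu0, sigma, n)     # covariate among those without the condition
      x1 = rng.normal(mu1, sigma, n)     # covariate among those with the condition

      # Empirical c-statistic via the Mann-Whitney U relation: AUC = U / (n0 * n1)
      u, _ = mannwhitneyu(x1, x0)
      c_emp = u / (n * n)

      beta = (mu1 - mu0) / sigma**2      # log-odds ratio per unit under equal-variance binormality
      c_theory = norm.cdf(beta * sigma / np.sqrt(2))
      print(round(c_emp, 4), round(c_theory, 4))   # the two values should agree closely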

  17. Determinism, independence, and objectivity are incompatible.

    PubMed

    Ionicioiu, Radu; Mann, Robert B; Terno, Daniel R

    2015-02-13

    Hidden-variable models aim to reproduce the results of quantum theory and to satisfy our classical intuition. Their refutation is usually based on deriving predictions that are different from those of quantum mechanics. Here instead we study the mutual compatibility of apparently reasonable classical assumptions. We analyze a version of the delayed-choice experiment which ostensibly combines determinism, independence of hidden variables on the conducted experiments, and wave-particle objectivity (the assertion that quantum systems are, at any moment, either particles or waves, but not both). These three ideas are incompatible with any theory, not only with quantum mechanics.

  18. Forward Osmosis Membranes under Null-Pressure Condition: Do Hydraulic and Osmotic Pressures Have Identical Nature?

    PubMed

    Kook, Seungho; Swetha, Chivukula D; Lee, Jangho; Lee, Chulmin; Fane, Tony; Kim, In S

    2018-03-20

    Forward osmosis (FO) membranes fall into the category of nonporous membranes, based on the assumption that water and solute transport occur solely by diffusion. The solution-diffusion (S-D) model has been widely used in predicting their performance in the coexistence of hydraulic and osmotic driving forces, a model that postulates that the hydraulic and osmotic driving forces have an identical nature. It has been suggested, however, that such membranes may have pores and that mass transport could occur both by convection (i.e., volumetric flow) and by diffusion, assuming that the dense active layer of the membranes is composed of a nonporous structure with defects which induce volumetric flow through the membranes. In addition, the positron annihilation technique has revealed that the active layers can involve relatively uniform porous structures. As such, the assumption of a nonporous active layer in association with hydraulic pressure is questionable. To examine this assumption, we have tested FO membranes under conditions where hydraulic and osmotic pressures are equivalent yet in opposite directions for water transport, namely the null-pressure condition. We have also established a practically valid characterization method which quantifies the vulnerability of the FO membranes to hydraulic pressure.

  19. Preschool Children Learn about Causal Structure from Conditional Interventions

    ERIC Educational Resources Information Center

    Schulz, Laura E.; Gopnik, Alison; Glymour, Clark

    2007-01-01

    The conditional intervention principle is a formal principle that relates patterns of interventions and outcomes to causal structure. It is a central assumption of experimental design and the causal Bayes net formalism. Two studies suggest that preschoolers can use the conditional intervention principle to distinguish causal chains, common cause…

  20. Cognition is … Fundamentally Cultural.

    PubMed

    Bender, Andrea; Beller, Sieghard

    2013-03-01

    A prevailing concept of cognition in psychology is inspired by the computer metaphor. Its focus on mental states that are generated and altered by information input, processing, storage and transmission invites a disregard for the cultural dimension of cognition, based on three (implicit) assumptions: cognition is internal, processing can be distinguished from content, and processing is independent of cultural background. Arguing against each of these assumptions, we point out how culture may affect cognitive processes in various ways, drawing on instances from numerical cognition, ethnobiological reasoning, and theory of mind. Given the pervasive cultural modulation of cognition - on all of Marr's levels of description - we conclude that cognition is indeed fundamentally cultural, and that consideration of its cultural dimension is essential for a comprehensive understanding.

  1. Statistical foundations of liquid-crystal theory: I. Discrete systems of rod-like molecules.

    PubMed

    Seguin, Brian; Fried, Eliot

    2012-12-01

    We develop a mechanical theory for systems of rod-like particles. Central to our approach is the assumption that the external power expenditure for any subsystem of rods is independent of the underlying frame of reference. This assumption is used to derive the basic balance laws for forces and torques. By considering inertial forces on par with other forces, these laws hold relative to any frame of reference, inertial or noninertial. Finally, we introduce a simple set of constitutive relations to govern the interactions between rods and find restrictions necessary and sufficient for these laws to be consistent with thermodynamics. Our framework provides a foundation for a statistical mechanical derivation of the macroscopic balance laws governing liquid crystals.

  2. The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Finelli, George B.

    1991-01-01

    This paper affirms that the quantification of life-critical software reliability is infeasible using statistical methods whether applied to standard software or fault-tolerant software. The classical methods of estimating reliability are shown to lead to exorbitant amounts of testing when applied to life-critical software. Reliability growth models are examined and also shown to be incapable of overcoming the need for excessive amounts of testing. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultrareliability region and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multiversion software experiments support this affirmation.
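
    The scale of the testing problem can be made concrete with a back-of-the-envelope calculation: under an exponential failure-time assumption, demonstrating a failure rate no worse than lambda with zero observed failures at confidence 1 - alpha requires roughly t = -ln(alpha) / lambda system-hours. The figures below are illustrative only.

      import math

      def demo_hours(lam_per_hour, alpha=0.10):
          """Hours of failure-free testing needed to show the failure rate is below
          lam_per_hour at confidence 1 - alpha (exponential failure-time assumption)."""
          return -math.log(alpha) / lam_per_hour

      hours = demo_hours(1e-9)   # a typical ultrareliability target of 1e-9 failures/hour
      print(f"{hours:.2e} hours, about {hours / 8760:.0f} years of continuous testing")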

  3. Linking parasite populations in hosts to parasite populations in space through Taylor's law and the negative binomial distribution

    PubMed Central

    Poulin, Robert; Lagrue, Clément

    2017-01-01

    The spatial distribution of individuals of any species is a basic concern of ecology. The spatial distribution of parasites matters to control and conservation of parasites that affect human and nonhuman populations. This paper develops a quantitative theory to predict the spatial distribution of parasites based on the distribution of parasites in hosts and the spatial distribution of hosts. Four models are tested against observations of metazoan hosts and their parasites in littoral zones of four lakes in Otago, New Zealand. These models differ in two dichotomous assumptions, constituting a 2 × 2 theoretical design. One assumption specifies whether the variance function of the number of parasites per host individual is described by Taylor's law (TL) or the negative binomial distribution (NBD). The other assumption specifies whether the numbers of parasite individuals within each host in a square meter of habitat are independent or perfectly correlated among host individuals. We find empirically that the variance–mean relationship of the numbers of parasites per square meter is very well described by TL but is not well described by NBD. Two models that posit perfect correlation of the parasite loads of hosts in a square meter of habitat approximate observations much better than two models that posit independence of parasite loads of hosts in a square meter, regardless of whether the variance–mean relationship of parasites per host individual obeys TL or NBD. We infer that high local interhost correlations in parasite load strongly influence the spatial distribution of parasites. Local hotspots could influence control and conservation of parasites. PMID:27994156
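
    The two variance functions being contrasted can be written down side by side: Taylor's law posits variance = a * mean^b, a straight line on log-log axes fitted by ordinary regression, while the negative binomial gives variance = mean + mean^2 / k for aggregation parameter k. The sketch below fits TL to synthetic mean-variance pairs; the data are stand-ins, not the Otago lake samples.

      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic (mean, variance) pairs for parasites per host across samples,
      # generated to follow Taylor's law with a = 2 and b = 1.8 plus noise.
      means = np.logspace(-1, 2, 30)
      variances = 2.0 * means**1.8 * np.exp(rng.normal(0.0, 0.1, means.size))

      # Taylor's law fit: log V = log a + b * log M
      b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
      print(f"TL fit: a = {np.exp(log_a):.2f}, b = {b:.2f}")

      # Negative binomial variance function with an assumed k, for comparison:
      k = 1.5
      print((means + means**2 / k)[:3])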

  4. High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole.

    PubMed

    Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei

    2018-01-05

    Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, without the independent identical distribution assumption, we consider the worst case scenario that the adversary launches the most powerful attacks against the quantum adversary. After considering statistical fluctuations and applying an 80  Gb×45.6  Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114  bits/s, with a failure probability less than 10^{-5}. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.

  5. CO Component Estimation Based on the Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
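
    A minimal, hypothetical sketch of the kurtosis-based ranking described above (not the authors' pipeline; the mock components, mixing matrix, and statistics are invented for illustration):

```python
# Hypothetical sketch: separate a strongly non-Gaussian component from mock
# multi-frequency maps with FastICA, then rank components by kurtosis.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_pix = 50_000

# Mock "sky" components per pixel (illustrative statistics only):
cmb = rng.normal(0.0, 1.0, n_pix)                               # nearly Gaussian
dust = rng.lognormal(0.0, 0.5, n_pix)                           # mildly non-Gaussian
co = rng.exponential(0.3, n_pix) * (rng.random(n_pix) < 0.05)   # sparse, very non-Gaussian

# Mix into three mock frequency maps (assumed mixing coefficients).
mixing = np.array([[1.0, 0.8, 0.9],
                   [1.0, 1.2, 0.1],
                   [1.0, 2.0, 0.6]])
maps = np.stack([cmb, dust, co], axis=1) @ mixing.T

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(maps)            # shape (n_pix, 3)

# The CO-like component should show the largest kurtosis (non-Gaussianity).
kurt = kurtosis(components, axis=0)
print("component kurtosis:", np.round(kurt, 2))
print("most non-Gaussian component index:", int(np.argmax(np.abs(kurt))))
```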

  6. Essays on parametric and nonparametric modeling and estimation with applications to energy economics

    NASA Astrophysics Data System (ADS)

    Gao, Weiyu

    My dissertation research is composed of two parts: a theoretical part on semiparametric efficient estimation and an applied part in energy economics under different dynamic settings. The essays are related in terms of their applications as well as the way in which models are constructed and estimated. In the first essay, efficient estimation of the partially linear model is studied. We work out the efficient score functions and efficiency bounds under four stochastic restrictions: independence, conditional symmetry, conditional zero mean, and partially conditional zero mean. A feasible efficient estimation method for the linear part of the model is developed based on the efficient score. A battery of specification tests that allow for choosing among the alternative assumptions is provided. A Monte Carlo simulation is also conducted. The second essay presents a dynamic optimization model for a stylized oilfield resembling the largest developed light oil field in Saudi Arabia, Ghawar. We use data from different sources to estimate the oil production cost function and the revenue function. We pay particular attention to the dynamic aspect of oil production by employing petroleum-engineering software to simulate the interaction between control variables and reservoir state variables. Optimal solutions are studied under different scenarios to account for possible changes in the exogenous variables and the uncertainty about the forecasts. The third essay examines the effect of oil price volatility on the level of innovation displayed by the U.S. economy. A measure of innovation is calculated by decomposing an output-based Malmquist index. We also construct a nonparametric measure for oil price volatility. Technical change and oil price volatility are then placed in a VAR system with oil price and a variable indicative of monetary policy. The system is estimated and analyzed for significant relationships. We find that oil price volatility has a significant negative effect on innovation. A key point of this analysis lies in the fact that we impose no functional forms for technologies, and the methods employed keep technical assumptions to a minimum.

  7. Latent class instrumental variables: A clinical and biostatistical perspective

    PubMed Central

    Baker, Stuart G.; Kramer, Barnett S.; Lindeman, Karen S.

    2015-01-01

    In some two-arm randomized trials, some participants receive the treatment assigned to the other arm as a result of technical problems, refusal of a treatment invitation, or a choice of treatment in an encouragement design. In some before-and-after studies, the availability of a new treatment changes from one time period to the next. Under assumptions that are often reasonable, the latent class instrumental variable (IV) method estimates the effect of treatment received in the aforementioned scenarios involving all-or-none compliance and all-or-none availability. Key aspects are four initial latent classes (sometimes called principal strata) based on the treatment that would be received in each randomization group or time period, the exclusion restriction assumption (in which randomization group or time period is an instrumental variable), the monotonicity assumption (which drops an implausible latent class from the analysis), and the estimated effect of receiving treatment in one latent class (sometimes called efficacy, the local average treatment effect, or the complier average causal effect). Since its independent formulations in the biostatistics and econometrics literatures, the latent class IV method (which has no well-established name) has gained increasing popularity. We review the latent class IV method from a clinical and biostatistical perspective, focusing on underlying assumptions, methodological extensions, and applications in our fields of obstetrics and cancer research. PMID:26239275
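
    The core estimator described above can be illustrated with a small simulation. The sketch below (hypothetical data, not from the review) computes the complier average causal effect as the intention-to-treat effect divided by the difference in treatment receipt, under the exclusion restriction and monotonicity assumptions:

```python
# Hypothetical illustration: the simplest latent class IV estimate of the effect
# of treatment received among compliers (CACE / local average treatment effect).
# Z = randomization arm, D = treatment actually received, Y = outcome.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
Z = rng.integers(0, 2, n)                     # randomized assignment
complier = rng.random(n) < 0.6                # latent class (unobserved in practice)
D = np.where(complier, Z, 0)                  # all-or-none compliance, no always-takers
true_effect = 2.0
Y = 1.0 + true_effect * D + rng.normal(0, 1, n)

# Wald estimator: intention-to-treat effect divided by the compliance difference.
itt = Y[Z == 1].mean() - Y[Z == 0].mean()
compliance = D[Z == 1].mean() - D[Z == 0].mean()
cace = itt / compliance
print(f"ITT = {itt:.2f}, compliance difference = {compliance:.2f}, CACE = {cace:.2f}")
```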

  8. Early Warning Signals for Regime Transition in the Stable Boundary Layer: A Model Study

    NASA Astrophysics Data System (ADS)

    van Hooijdonk, I. G. S.; Moene, A. F.; Scheffer, M.; Clercx, H. J. H.; van de Wiel, B. J. H.

    2017-02-01

    The evening transition is investigated in an idealized model for the nocturnal boundary layer. From earlier studies it is known that the nocturnal boundary layer may manifest itself in two distinct regimes, depending on the ambient synoptic conditions: strong-wind or overcast conditions typically lead to weakly stable, turbulent nights; clear-sky and weak-wind conditions, on the other hand, lead to very stable, weakly turbulent conditions. Previously, the dynamical behaviour near the transition between these regimes was investigated in an idealized setting, relying on Monin-Obukhov (MO) similarity to describe turbulent transport. Here, we investigate a similar set-up, using direct numerical simulation; in contrast to MO-based models, this type of simulation does not need to rely on turbulence closure assumptions. We show that previous predictions are verified, but now independent of turbulence parametrizations. Also, it appears that a regime shift to the very stable state is signaled in advance by specific changes in the dynamics of the turbulent boundary layer. Here, we show how these changes may be used to infer a quantitative estimate of the transition point from the weakly stable boundary layer to the very stable boundary layer. In addition, it is shown that the idealized, nocturnal boundary-layer system shares important similarities with generic non-linear dynamical systems that exhibit critical transitions. Therefore, the presence of other, generic early warning signals is tested as well. Indeed, indications are found that such signals are present in stably stratified turbulent flows.

  9. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance.

    PubMed

    Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S

    2017-10-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.

  10. Using independent component analysis for electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Yan, Peimin; Mo, Yulong

    2004-05-01

    Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations of the electric conductivity of the human body. Because there are variations of the conductivity distributions inside the body, EIT presents multi-channel data. In order to obtain all the information contained in different locations of tissue, it is necessary to image the individual conductivity distribution. In this paper we apply ICA to EIT on the signal subspace (individual conductivity distribution). Using ICA, the signal subspace is then decomposed into statistically independent components. The individual conductivity distribution can then be reconstructed using the sensitivity theorem. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained by this method.

  11. Unsaturation of vapour pressure inside leaves of two conifer species

    DOE PAGES

    Cernusak, Lucas A.; Ubierna, Nerea; Jenkins, Michael W.; ...

    2018-05-16

    Stomatal conductance (g_s) impacts both photosynthesis and transpiration, and is therefore fundamental to the global carbon and water cycles, food production, and ecosystem services. Mathematical models provide the primary means of analysing this important leaf gas exchange parameter. A nearly universal assumption in such models is that the vapour pressure inside leaves (e_i) remains saturated under all conditions. The validity of this assumption has not been well tested, because so far e_i cannot be measured directly. Here, we test this assumption using a novel technique, based on coupled measurements of leaf gas exchange and the stable isotope compositions of CO2 and water vapour passing over the leaf. We applied this technique to mature individuals of two semiarid conifer species. In both species, e_i routinely dropped below saturation when leaves were exposed to moderate to high air vapour pressure deficits. Typical values of relative humidity in the intercellular air spaces were as low as 0.9 in Juniperus monosperma and 0.8 in Pinus edulis. These departures of e_i from saturation caused significant biases in calculations of g_s and the intercellular CO2 concentration. Thus, our results refute the longstanding assumption of saturated vapour pressure in plant leaves under all conditions.

  12. Unsaturation of vapour pressure inside leaves of two conifer species

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cernusak, Lucas A.; Ubierna, Nerea; Jenkins, Michael W.

    Stomatal conductance (g_s) impacts both photosynthesis and transpiration, and is therefore fundamental to the global carbon and water cycles, food production, and ecosystem services. Mathematical models provide the primary means of analysing this important leaf gas exchange parameter. A nearly universal assumption in such models is that the vapour pressure inside leaves (e_i) remains saturated under all conditions. The validity of this assumption has not been well tested, because so far e_i cannot be measured directly. Here, we test this assumption using a novel technique, based on coupled measurements of leaf gas exchange and the stable isotope compositions of CO2 and water vapour passing over the leaf. We applied this technique to mature individuals of two semiarid conifer species. In both species, e_i routinely dropped below saturation when leaves were exposed to moderate to high air vapour pressure deficits. Typical values of relative humidity in the intercellular air spaces were as low as 0.9 in Juniperus monosperma and 0.8 in Pinus edulis. These departures of e_i from saturation caused significant biases in calculations of g_s and the intercellular CO2 concentration. Thus, our results refute the longstanding assumption of saturated vapour pressure in plant leaves under all conditions.

  13. Inseparability of Go and Stop in Inhibitory Control: Go Stimulus Discriminability Affects Stopping Behavior.

    PubMed

    Ma, Ning; Yu, Angela J

    2016-01-01

    Inhibitory control, the ability to stop or modify preplanned actions under changing task conditions, is an important component of cognitive functions. Two lines of models of inhibitory control have previously been proposed for human response in the classical stop-signal task, in which subjects must inhibit a default go response upon presentation of an infrequent stop signal: (1) the race model, which posits two independent go and stop processes that race to determine the behavioral outcome, go or stop; and (2) an optimal decision-making model, which posits that the observer decides whether and when to go based on continually (Bayesian) updated information about both the go and stop stimuli. In this work, we probe the relationship between go and stop processing by explicitly manipulating the discrimination difficulty of the go stimulus. While the race model assumes the go and stop processes are independent, and therefore that go stimulus discriminability should not affect stop stimulus processing, we simulate the optimal model to show that it predicts that harder go discrimination should result in longer go reaction time (RT), lower stop error rate, and faster stop-signal RT. We then present novel behavioral data that validate these model predictions. The results thus favor a fundamentally inseparable account of go and stop processing, in a manner consistent with the optimal model and contradicting the independence assumption of the race model. More broadly, our findings contribute to the growing evidence that the computations underlying inhibitory control are systematically modulated by cognitive influences in a Bayes-optimal manner, thus opening new avenues for interpreting neural responses underlying inhibitory control.

  14. Cultural variation in the motivational standards of self-enhancement and self-criticism among bicultural Asian American and Anglo American students.

    PubMed

    Zusho, Akane

    2008-10-01

    Recent work on biculturalism has made theoretical and methodological inroads into our understanding of the relation of cultural processes with psychological functioning. Through the use of cultural priming methodologies, investigators have demonstrated that biculturals, or individuals who have experienced and identify with more than one culture, can switch between various "cultural frames of reference" in response to corresponding social cues (Hong, Morris, Chiu, & Benet-Martinez, 2000). Drawing on this work on the cognitive implications of biculturalism, the purpose of the present study was to examine the assumption that independent and interdependent self-construals are associated with the motivational standards of self-enhancement and self-criticism, respectively. More specifically, the effects of differential primes of self on ratings of self-enhancement were investigated in a sample of bicultural Asian American (N = 42) and Anglo American (N = 60) college students; overall, more similarities than differences were noted between the two groups. It was hypothesized that Anglo American students would display marked tendencies toward self-enhancement. However, this hypothesis was not supported. Nevertheless, consistent prime effects were observed for a selected number of ratings related to academic virtues, with those who received an independent-self prime often exhibiting greater self-enhancing tendencies than those who received an interdependent-self prime. For example, participants in the independent-self condition reported on average significantly higher ratings for self-discipline and initiative, as well as the degree to which they perceived themselves to be hard working. Implications for the work on self-representations, motivation, and acculturation are discussed.

  15. Partitioning uncertainty in streamflow projections under nonstationary model conditions

    NASA Astrophysics Data System (ADS)

    Chawla, Ila; Mujumdar, P. P.

    2018-02-01

    Assessing the impacts of Land Use (LU) and climate change on future streamflow projections is necessary for efficient management of water resources. However, model projections are burdened with significant uncertainty arising from various sources. Most of the previous studies have considered climate models and scenarios as major sources of uncertainty, but uncertainties introduced by land use change and hydrologic model assumptions are rarely investigated. In this paper an attempt is made to segregate the contributions from (i) general circulation models (GCMs), (ii) emission scenarios, (iii) land use scenarios, (iv) the stationarity assumption of the hydrologic model, and (v) internal variability of the processes, to the overall uncertainty in streamflow projections using an analysis of variance (ANOVA) approach. Generally, most impact assessment studies are carried out with hydrologic model parameters held unchanged in the future. It is, however, necessary to address the nonstationarity in model parameters with changing land use and climate. In this paper, a regression-based methodology is presented to obtain the hydrologic model parameters under changing land use and climate scenarios in the future. The Upper Ganga Basin (UGB) in India is used as a case study to demonstrate the methodology. The semi-distributed Variable Infiltration Capacity (VIC) model is set up over the basin under nonstationary conditions. Results indicate that model parameters vary with time, thereby invalidating the often-used assumption of model stationarity. The streamflow in the UGB under the nonstationary model condition is found to decrease in the future. The flows are also found to be sensitive to changes in land use. Segregation results suggest that the model stationarity assumption and GCMs, along with their interactions with emission scenarios, act as dominant sources of uncertainty. This paper provides a generalized framework for hydrologists to examine the stationarity assumption of models before considering them for future streamflow projections and to segregate the contribution of various sources to the uncertainty.
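
    The ANOVA-based segregation of uncertainty can be illustrated with a toy two-factor example. The sketch below (illustrative projection values, not the paper's results) partitions ensemble variance between GCM and emission-scenario factors plus their interaction:

```python
# Hypothetical sketch: partition the variance of an ensemble of streamflow
# projections between two factors (GCM and emission scenario) with a simple
# two-way ANOVA decomposition. Projection values are made up.
import numpy as np

# proj[i, j] = projected mean streamflow for GCM i under emission scenario j
proj = np.array([[420., 455., 470.],
                 [390., 430., 450.],
                 [405., 415., 445.],
                 [380., 410., 420.]])

grand = proj.mean()
gcm_means = proj.mean(axis=1)            # average over scenarios
scen_means = proj.mean(axis=0)           # average over GCMs
n_gcm, n_scen = proj.shape

ss_total = np.sum((proj - grand) ** 2)
ss_gcm = n_scen * np.sum((gcm_means - grand) ** 2)
ss_scen = n_gcm * np.sum((scen_means - grand) ** 2)
ss_inter = ss_total - ss_gcm - ss_scen   # GCM x scenario interaction (residual)

for name, ss in [("GCM", ss_gcm), ("scenario", ss_scen), ("interaction", ss_inter)]:
    print(f"{name:<12s} fraction of variance: {ss / ss_total:.2f}")
```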

  16. A solution to Schroder's equation in several variables

    DOE PAGES

    Bridges, Robert A.

    2016-03-04

    For this paper, let φ be an analytic self-map of the n-ball, having 0 as the attracting fixed point and having full rank near 0. We consider the generalized Schroder's equation, F∘φ = φ'(0)^k F with k a positive integer, and prove there is always a solution F with linearly independent component functions, but that such an F cannot have full rank except possibly when k=1. Furthermore, when k=1 (Schroder's equation), necessary and sufficient conditions on φ are given to ensure F has full rank near 0, without the added assumption of diagonalizability as needed in the 2003 Cowen/MacCluer paper. In response to Enoch's 2007 paper, it is proven that any formal power series solution indeed represents an analytic function on the whole unit ball. Finally, how exactly resonance can lead to an obstruction of a full-rank solution is discussed, as well as some consequences of having solutions to Schroder's equation.

  17. Additional extensions to the NASCAP computer code, volume 3

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Cooke, D. L.

    1981-01-01

    The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.

  18. The “2T” ion-electron semi-analytic shock solution for code-comparison with xRAGE: A report for FY16

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, Jim Michael

    2016-10-05

    This report documents an effort to generate the semi-analytic "2T" ion-electron shock solution developed in the paper by Masser, Wohlbier, and Lowrie, and the initial attempts to understand how to use this solution as a code-verification tool for one of LANL's ASC codes, xRAGE. Most of the work so far has gone into generating the semi-analytic solution. Considerable effort will go into understanding how to write the xRAGE input deck that both matches the boundary conditions imposed by the solution, and also what physics models must be implemented within the semi-analytic solution itself to match the model assumptions inherent within xRAGE. Therefore, most of this report focuses on deriving the equations for the semi-analytic 1D-planar time-independent "2T" ion-electron shock solution, and is written in a style that is intended to provide clear guidance for anyone writing their own solver.

  19. n l -> n' l' transition rates in electron and proton - Rydberg atom collision

    NASA Astrophysics Data System (ADS)

    Vrinceanu, Daniel

    2017-04-01

    Electrons and protons drive the recombination dynamics of highly excited Rydberg atoms in cold rarefied plasmas found in astrophysical conditions such as primordial recombination or star formation in H-II clouds. It has been recognized that collisions induce both energy and angular momentum transitions in Rydberg atoms, although in different proportions, depending on the initial state, the temperature, and the species considered in the collision (electron or proton). Most studies focused on one collision type at a time, under the assumption that collision types are independent or that their effects are not competing. The classical Monte-Carlo trajectory simulations presented in this work calculate the rates for both energy and angular momentum transfers and show their interdependence. For example, energy transfers with small angular momentum change are more efficient for target states with initially large angular momentum. The author acknowledges support received from the National Science Foundation through a Grant for the Center for Research on Complex Networks (HRD-1137732).

  20. Local kernel nonparametric discriminant analysis for adaptive extraction of complex structures

    NASA Astrophysics Data System (ADS)

    Li, Quanbao; Wei, Fajie; Zhou, Shenghan

    2017-05-01

    Linear discriminant analysis (LDA) is one of the most popular methods for linear feature extraction. It usually performs well when the global data structure is consistent with the local data structure. Other frequently used approaches to feature extraction usually require linearity, independence, or large-sample conditions. However, in real-world applications, these assumptions are not always satisfied or cannot be tested. In this paper, we introduce an adaptive method, local kernel nonparametric discriminant analysis (LKNDA), which integrates conventional discriminant analysis with nonparametric statistics. LKNDA is adept at identifying both complex nonlinear structures and ad hoc rules. Six simulation cases demonstrate that LKNDA has the advantages of both parametric and nonparametric algorithms and higher classification accuracy. A quartic unilateral kernel function may provide better robustness of prediction than other functions. LKNDA offers an alternative solution for discriminant cases involving complex nonlinear or unknown feature extraction. Finally, the application of LKNDA to complex feature extraction in financial market activities is proposed.

  1. Improving Naive Bayes with Online Feature Selection for Quick Adaptation to Evolving Feature Usefulness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pon, R K; Cardenas, A F; Buttler, D J

    The definition of what makes an article interesting varies from user to user and continually evolves even for a single user. As a result, for news recommendation systems, useless document features cannot be determined a priori and all features are usually considered for interestingness classification. Consequently, the presence of currently useless features degrades classification performance [1], particularly over the initial set of news articles being classified. The initial set of documents is critical for a user when considering which particular news recommendation system to adopt. To address these problems, we introduce an improved version of the naive Bayes classifier with online feature selection. We use correlation to determine the utility of each feature and take advantage of the conditional independence assumption used by naive Bayes for online feature selection and classification. The augmented naive Bayes classifier performs 28% better than the traditional naive Bayes classifier in recommending news articles from the Yahoo! RSS feeds.
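
    A minimal, hypothetical sketch of the idea (not the report's implementation): per-feature counts are updated online, a correlation-style score ranks feature utility from those same counts, and the conditional independence assumption lets the selected features be combined in the usual naive Bayes way. The class name and parameters below are invented for illustration.

```python
# Hypothetical sketch: Bernoulli naive Bayes with online updates and
# correlation-based feature selection. Not the report's code.
import numpy as np

class OnlineSelectiveNB:
    def __init__(self, n_features, top_k=50, alpha=1.0):
        self.alpha = alpha                              # Laplace smoothing
        self.top_k = top_k                              # number of features kept
        self.class_counts = np.zeros(2)
        self.feat_counts = np.zeros((2, n_features))    # counts of feature=1 per class
        self.feat_totals = np.zeros(n_features)

    def update(self, x, y):
        """x: binary feature vector, y: 0/1 label (uninteresting/interesting)."""
        self.class_counts[y] += 1
        self.feat_counts[y] += x
        self.feat_totals += x

    def _useful_features(self):
        # Correlation between each binary feature and the label, computed from
        # the same sufficient statistics that naive Bayes already maintains.
        n = self.class_counts.sum()
        p_y = self.class_counts[1] / n
        p_x = self.feat_totals / n
        p_xy = self.feat_counts[1] / n
        denom = np.sqrt(p_x * (1 - p_x) * p_y * (1 - p_y)) + 1e-12
        corr = np.abs(p_xy - p_x * p_y) / denom
        return np.argsort(corr)[-self.top_k:]

    def predict(self, x):
        keep = self._useful_features()
        log_post = np.log(self.class_counts + self.alpha)
        for c in (0, 1):
            p = (self.feat_counts[c, keep] + self.alpha) / (self.class_counts[c] + 2 * self.alpha)
            # Conditional independence: sum per-feature log-likelihoods.
            log_post[c] += np.sum(x[keep] * np.log(p) + (1 - x[keep]) * np.log(1 - p))
        return int(np.argmax(log_post))
```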

  2. Uses and misuses of Bayes' rule and Bayesian classifiers in cybersecurity

    NASA Astrophysics Data System (ADS)

    Bard, Gregory V.

    2017-12-01

    This paper will discuss the applications of Bayes' rule and Bayesian classifiers in cybersecurity. While the most elementary form of Bayes' rule occurs in undergraduate coursework, there are more complicated forms as well. As an extended example, Bayesian spam filtering is explored, and it is in many ways the most triumphant accomplishment of Bayesian reasoning in computer science, as nearly everyone with an email address has a spam folder. Bayesian classifiers have also been responsible for significant cybersecurity research results; yet, because they are not part of the standard curriculum, few in the mathematics or information-technology communities have seen the exact definitions, requirements, and proofs that comprise the subject. Moreover, numerous errors have been made by researchers (described in this paper), due to mathematical misunderstandings dealing with conditional independence, or other badly chosen assumptions. Finally, to provide instructors and researchers with real-world examples, 25 published cybersecurity papers that use Bayesian reasoning are given, with 2-4 sentence summaries of the focus and contributions of each paper.
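
    A minimal worked example of Bayes' rule under the naive conditional independence assumption, with invented word probabilities (not from the paper):

```python
# Illustrative numbers only: Bayes' rule for spam filtering, assuming the
# word occurrences are conditionally independent given the class.
p_spam = 0.4                                    # prior P(spam)
p_word_given_spam = {"free": 0.30, "invoice": 0.05}
p_word_given_ham = {"free": 0.02, "invoice": 0.20}

words = ["free", "invoice"]
like_spam = p_spam
like_ham = 1 - p_spam
for w in words:                                 # multiply per-word likelihoods
    like_spam *= p_word_given_spam[w]
    like_ham *= p_word_given_ham[w]

posterior = like_spam / (like_spam + like_ham)  # P(spam | words)
print(f"P(spam | {words}) = {posterior:.3f}")
```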

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Adam J.; Fox, Patrick J.; Kahn, Yonatan

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the m_χ-σ_n plane. Recently, methods which are independent of the DM halo velocity distribution have been developed which present results in the v_min-g-tilde plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from v_min to nuclear recoil momentum (p_R), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call h-tilde(p_R). The entire family of conventional halo-independent g-tilde(v_min) plots for all DM masses is directly found from the single h-tilde(p_R) plot through a simple rescaling of axes. By considering results in h-tilde(p_R) space, one can determine whether two experiments are inconsistent for all masses and all physically possible halos, or for what range of dark matter masses the results are inconsistent for all halos, without the necessity of multiple g-tilde(v_min) plots for different DM masses. We conduct a sample analysis comparing the CDMS II Si events to the null results from LUX, XENON10, and SuperCDMS using our method and discuss how the results can be strengthened by imposing the physically reasonable requirement of a finite halo escape velocity.

  4. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark; Bacon, John

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk that reentering objects pose to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine many of these theoretical assumptions, including the mathematical basis for the hazard calculations, and outlines the conditions under which the simplifying assumptions hold. This study also employs empirical and theoretical information to test these assumptions, and makes recommendations on how to improve the accuracy of these calculations in the future.

  5. A pseudo-sequential choice model for valuing multi-attribute environmental policies or programs in contingent valuation applications

    Treesearch

    Dmitriy Volinskiy; John C Bergstrom; Christopher M Cornwell; Thomas P Holmes

    2010-01-01

    The assumption of independence of irrelevant alternatives in a sequential contingent valuation format should be questioned. Statistically, most valuation studies treat nonindependence as a consequence of unobserved individual effects. Another approach is to consider an inferential process in which any particular choice is part of a general choosing strategy of a survey...

  6. Teachers Who Initiate Changes with an Ebook-Integrated Curriculum: Revisiting the Developmental Assumptions of Stages of Concerns in the Concerns-Based Adoption Model

    ERIC Educational Resources Information Center

    Min, Mina

    2017-01-01

    Understanding teachers' concerns about integrating new technology to their curricular practices is essential for the improvement of pedagogical practices. Using in-depth interviews, this study aims (1) to explore and describe concerns of teachers who attempted to initiate ebook-integrated curriculum independently in a higher education setting, and…

  7. Latent Trait Theory in the Affective Domain--Applications of the Rasch Model.

    ERIC Educational Resources Information Center

    Curry, Allen R.; Riegel, N. Blyth

    The Rasch model of test theory is described in general terms, compared with latent trait theory, and shown to have interesting applications for the measurement of affective as well as cognitive traits. Three assumptions of the Rasch model are stated to support the conclusion that calibration of the items and tests is independent of the examinee…

  8. A Hedonic Approach to Estimating Software Cost Using Ordinary Least Squares Regression and Nominal Attribute Variables

    DTIC Science & Technology

    2006-03-01

    …included zero, there is insufficient evidence to indicate that the error mean is not zero. The Breusch-Pagan test was used to test the constant variance assumption. … Multicollinearity … Testing OLS Assumptions … programming styles used by developers (Stamelos and others, 2003:733). Kemerer tested to see how models utilizing SLOC as an independent variable…

  9. Modern Media and Instructional Technology in Vocational Education: Some Experiences of the Diffusion of New Technology in the Adult Education Institutions.

    ERIC Educational Resources Information Center

    Maki-Komsi, Saija; Ropo, Eero

    A study investigated teachers' experiences of educational and cultural change in adult education institutions. The initial assumption of the study was that the institutional shift from traditional teacher-dominated, face-to-face teaching to networked teamwork that emphasizes students' independence is not only a methodological or technological…

  10. Convergence between Measures of Work-to-Family and Family-to-Work Conflict: A Meta-Analytic Examination

    ERIC Educational Resources Information Center

    Mesmer-Magnus, Jessica R.; Viswesvaran, Chockalingam

    2005-01-01

    The overlap between measures of work-to-family (WFC) and family-to-work conflict (FWC) was meta-analytically investigated. Researchers have assumed WFC and FWC to be distinct; however, this assumption requires empirical verification. Across 25 independent samples (total N=9079) the sample size weighted mean observed correlation was .38 and the…

  11. "I'm Not Going to Be a Girl": Masculinity and Emotions in Boys' Friendships and Peer Groups

    ERIC Educational Resources Information Center

    Oransky, Matthew; Marecek, Jeanne

    2009-01-01

    This study examines the peer relations and emotion practices of adolescent boys in light of their expectations and assumptions about masculinity. We carried out semistructured interviews with middle-class and upper-middle-class boys from an independent high school. The boys reported that they assiduously avoided displays of emotional or physical…

  12. Inferences about unobserved causes in human contingency learning.

    PubMed

    Hagmayer, York; Waldmann, Michael R

    2007-03-01

    Estimates of the causal efficacy of an event need to take into account the possible presence and influence of other unobserved causes that might have contributed to the occurrence of the effect. Current theoretical approaches deal differently with this problem. Associative theories assume that at least one unobserved cause is always present. In contrast, causal Bayes net theories (including Power PC theory) hypothesize that unobserved causes may be present or absent. These theories generally assume independence of different causes of the same event, which greatly simplifies modelling learning and inference. In two experiments participants were requested to learn about the causal relation between a single cause and an effect by observing their co-occurrence (Experiment 1) or by actively intervening in the cause (Experiment 2). Participants' assumptions about the presence of an unobserved cause were assessed either after each learning trial or at the end of the learning phase. The results show an interesting dissociation. Whereas there was a tendency to assume interdependence of the causes in the online judgements during learning, the final judgements tended to be more in the direction of an independence assumption. Possible explanations and implications of these findings are discussed.

  13. Cosmic transparency and acceleration

    NASA Astrophysics Data System (ADS)

    Holanda, R. F. L.; Pereira, S. H.; Jain, Deepak

    2018-01-01

    In this paper, by considering an absorption probability independent of photon wavelength, we show that current type Ia supernovae (SNe Ia) and gamma-ray burst (GRB) observations plus high-redshift measurements of the cosmic microwave background (CMB) radiation temperature support cosmic acceleration regardless of the transparent-universe assumption. Two flat scenarios are considered in our analyses: the ΛCDM model and a kinematic model. We consider τ(z) = 2ε ln(1+z), where τ(z) denotes the opacity between an observer at z = 0 and a source at z. This choice is equivalent to deforming the cosmic distance duality relation as D_L D_A^{-1} = (1+z)^{2+ε} and, if the absorption probability is independent of photon wavelength, the CMB temperature evolution law is T_CMB(z) = T_0 (1+z)^{1+2ε/3}. By marginalizing over the ε parameter, our analyses rule out a decelerating universe at 99.99% C.L. for all scenarios considered. Interestingly, by considering only SNe Ia and GRB observations, we obtain that a decelerated universe, indicated by Ω_Λ ≤ 0.33 and q_0 > 0, is ruled out at around 1.5σ C.L. and 2σ C.L., respectively, regardless of the transparent-universe assumption.

  14. The kappa statistic in rehabilitation research: an examination.

    PubMed

    Tooth, Leigh R; Ottenbacher, Kenneth J

    2004-08-01

    The number and sophistication of statistical procedures reported in medical rehabilitation research is increasing. Application of the principles and methods associated with evidence-based practice has contributed to the need for rehabilitation practitioners to understand quantitative methods in published articles. Outcomes measurement and determination of reliability are areas that have experienced rapid change during the past decade. In this study, distinctions between reliability and agreement are examined. Information is presented on analytical approaches for addressing reliability and agreement with the focus on the application of the kappa statistic. The following assumptions are discussed: (1) kappa should be used with data measured on a categorical scale, (2) the patients or objects categorized should be independent, and (3) the observers or raters must make their measurement decisions and judgments independently. Several issues related to using kappa in measurement studies are described, including use of weighted kappa, methods of reporting kappa, the effect of bias and prevalence on kappa, and sample size and power requirements for kappa. The kappa statistic is useful for assessing agreement among raters, and it is being used more frequently in rehabilitation research. Correct interpretation of the kappa statistic depends on meeting the required assumptions and accurate reporting.
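
    The unweighted kappa statistic itself is easy to compute from observed and chance agreement, as in the hypothetical sketch below (made-up ratings; the sklearn call is included only as a cross-check):

```python
# Illustrative sketch of Cohen's kappa for two raters assigning categorical
# scores; the ratings below are invented for demonstration.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater1 = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 2])
rater2 = np.array([0, 1, 1, 1, 2, 2, 1, 1, 0, 2])

# Observed agreement
p_o = np.mean(rater1 == rater2)

# Chance agreement from the raters' marginal category frequencies
cats = np.unique(np.concatenate([rater1, rater2]))
p_e = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in cats)

kappa = (p_o - p_e) / (1 - p_e)
print(f"observed = {p_o:.2f}, chance = {p_e:.2f}, kappa = {kappa:.2f}")
print("sklearn check:", cohen_kappa_score(rater1, rater2))
```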

  15. M2 Internal Tides and Their Observed Wavenumber Spectra from Satellite Altimetry*

    NASA Technical Reports Server (NTRS)

    Ray, R. D.; Zaron, E. D.

    2015-01-01

    A near-global chart of surface elevations associated with the stationary M2 internal tide is empirically constructed from multi-mission satellite altimeter data. An advantage of a strictly empirical mapping approach is that results are independent of assumptions about ocean wave dynamics and, in fact, can be used to test such assumptions. A disadvantage is that present-day altimeter coverage is only marginally adequate to support mapping such short-wavelength features. Moreover, predominantly north-south ground-track orientations and contamination from nontidal oceanographic variability can lead to deficiencies in mapped tides. Independent data from Cryosphere Satellite-2 (CryoSat-2) and other altimeters are used to test the solutions and show positive reduction in variance except in regions of large mesoscale variability. The tidal fields are subjected to two-dimensional wavenumber spectral analysis, which allows for the construction of an empirical map of modal wavelengths. Mode-1 wavelengths show good agreement with theoretical wavelengths calculated from the ocean's mean stratification, with a few localized exceptions (e.g., Tasman Sea). Mode-2 waves are detectable in much of the ocean, with wavelengths in reasonable agreement with theoretical expectations, but their spectral signatures grow too weak to map in some regions.

  16. Truth-Conditionally Inert Aspects of Content

    ERIC Educational Resources Information Center

    Gordon, David A.

    2009-01-01

    Against a background of disagreement about what sorts of things linguistic contents are, many philosophers of language share the assumption that they're cut only as finely as the conditions under which they are true. This includes many theorists who would reject the program known as "truth-conditional semantics". I argue that this point of…

  17. Optimized velocity distributions for direct dark matter detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibarra, Alejandro; Rappelt, Andreas, E-mail: ibarra@tum.de, E-mail: andreas.rappelt@tum.de

    We present a method to calculate, without making assumptions about the local dark matter velocity distribution, the maximal and minimal number of signal events in a direct detection experiment given a set of constraints from other direct detection experiments and/or neutrino telescopes. The method also allows one to determine the velocity distribution that optimizes the signal rates. We illustrate our method with three concrete applications: i) to derive a halo-independent upper limit on the cross section from a set of null results, ii) to confront in a halo-independent way a detection claim with a set of null results, and iii) to assess, in a halo-independent manner, the prospects for detection in a future experiment given a set of current null results.

  18. Clauser-Horne-Shimony-Holt versus three-party pseudo-telepathy: on the optimal number of samples in device-independent quantum private query

    NASA Astrophysics Data System (ADS)

    Basak, Jyotirmoy; Maitra, Subhamoy

    2018-04-01

    In the device-independent (DI) paradigm, the trust assumptions on the devices are removed and a CHSH test is performed to check the functionality of the devices toward certifying the security of the protocol. The existing DI protocols consider an infinite number of samples from a theoretical point of view, though this is not practically implementable. For finite sample analysis of the existing DI protocols, we may also consider strategies for checking device independence other than the CHSH test. In this direction, here we present a comparative analysis between the CHSH test and the three-party pseudo-telepathy game for the quantum private query protocol in the DI paradigm that appeared in Maitra et al. (Phys Rev A 95:042344, 2017) very recently.

  19. Standard rulers, candles, and clocks from the low-redshift universe.

    PubMed

    Heavens, Alan; Jimenez, Raul; Verde, Licia

    2014-12-12

    We measure the length of the baryon acoustic oscillation (BAO) feature, and the expansion rate of the recent Universe, from low-redshift data only, almost model independently. We make only the following minimal assumptions: homogeneity and isotropy, a metric theory of gravity, a smooth expansion history, and the existence of standard candles (supernovæ) and a standard BAO ruler. The rest is determined by the data, which are compilations of recent BAO and type IA supernova results. Making only these assumptions, we find for the first time that the standard ruler has a length of 103.9±2.3h⁻¹ Mpc. The value is a measurement, in contrast to the model-dependent theoretical prediction determined with model parameters set by Planck data (99.3±2.1h⁻¹ Mpc). The latter assumes the cold dark matter model with a cosmological constant, and that the ruler is the sound horizon at radiation drag. Adding passive galaxies as standard clocks or a local Hubble constant measurement allows the absolute BAO scale to be determined (142.8±3.7 Mpc), and in the former case the additional information makes the BAO length determination more precise (101.9±1.9h⁻¹ Mpc). The inverse curvature radius of the Universe is weakly constrained and consistent with zero, independently of the gravity model, provided it is metric. We find the effective number of relativistic species to be N(eff)=3.53±0.32, independent of late-time dark energy or gravity physics.

  20. The Effect of Mounting Vortex Generators on the DTU 10MW Reference Wind Turbine Blade

    NASA Astrophysics Data System (ADS)

    Skrzypiński, Witold; Gaunaa, Mac; Bak, Christian

    2014-06-01

    The aim of the current work is to analyze possible advantages of mounting Vortex Generators (VG's) on a wind turbine blade. Specifically, the project aims at investigating at which radial sections of the DTU 10 MW Reference Wind Turbine blade it is most beneficial to mount the VG's in order to increase the Annual Energy Production (AEP) under realistic conditions. The present analysis was carried out in several steps: (1) The clean two-dimensional airfoil characteristics were first modified to emulate the effect of all possible combinations of VG's (1% high at suction side x/c=0.2-0.25) and two Leading Edge Roughness (LER) values along the whole blade span. (2) The combinations from Step 1, including the clean case, were subsequently modified to take into account three-dimensional effects. (3) BEM computations were carried out to determine the aerodynamic rotor performance using each of the datasets from Step 2 along the whole blade span for all wind speeds in the turbine control scheme. (4) Employing the assumption of radial independence between sections of the blades, and using the results of the BEM computations described in Step 3, it is possible to determine for each radial position independently whether it is beneficial to install VG's in the smooth and LER cases, respectively. The results indicated that surface roughness that corresponds to degradation of the power curve may to some extent be mitigated by installation of VG's. The present results also indicated that the optimal VG configuration in terms of maximizing AEP depends on the severity of the LER. This is because, depending on the condition of the blade surface, installation of VG's on an incorrect blade span or installation of VG's too far out on the blade may cause a loss in AEP. The results also indicated that the worse the condition of the blade surface, the more gain may be obtained from the installation of VG's.

  1. Particle precipitation: How the spectrum fit impacts atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Wissing, J. M.; Nieder, H.; Yakovchouk, O. S.; Sinnhuber, M.

    2016-11-01

    Particle precipitation causes atmospheric ionization. Modeled ionization rates are widely used in atmospheric chemistry/climate simulations of the upper atmosphere. As ionization rates are based on particle measurements, some assumptions concerning the energy spectrum are required. While detectors measure particles binned into certain energy ranges only, the calculation of an ionization profile needs a fit for the whole energy spectrum. Therefore the following assumptions are needed: (a) fit function (e.g. power law or Maxwellian), (b) energy range, (c) number of segments in the spectral fit, (d) fixed or variable positions of intersections between these segments. The aim of this paper is to quantify the impact of different assumptions on ionization rates as well as their consequences for atmospheric chemistry modeling. As the assumptions about the particle spectrum are independent of the ionization model itself, the results of this paper are not restricted to a single ionization model, even though the Atmospheric Ionization Module OSnabrück (AIMOS, Wissing and Kallenrode, 2009) is used here. We include protons only, as this allows us to trace changes in the chemistry model directly back to the different assumptions without the need to interpret superposed ionization profiles. However, since every particle species requires a particle spectrum fit with the mentioned assumptions, the results are generally applicable to all precipitating particles. The reader may argue that the selection of assumptions for the particle fit is of minor interest, but we would like to emphasize this topic as it is a major, if not the main, source of discrepancies between different ionization models (and reality). Depending on the assumptions, single ionization profiles may vary by a factor of 5, long-term calculations may show systematic over- or underestimation at specific altitudes, and even for ideal setups the definition of the energy range involves an intrinsic 25% uncertainty in the ionization rates. The effects on atmospheric chemistry (HOx, NOx and ozone) have been calculated by 3dCTM, showing that the spectrum fit is responsible for an 8% variation in ozone between setups, and even up to 50% for extreme setups.
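
    The sensitivity to spectral-fit assumptions can be illustrated with a toy single-segment power-law fit. The sketch below (invented channel energies and fluxes, not AIMOS code) shows how the assumed energy range alone changes the integrated energy input:

```python
# Hypothetical sketch: fit a single power law dN/dE = A * E**(-gamma) to mock
# channel fluxes and compare the integrated energy input for two assumed
# energy ranges. Numbers are made up for demonstration.
import numpy as np

rng = np.random.default_rng(3)

# Mock detector channels: geometric-mean energies (keV) and differential fluxes.
E = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
flux = 5e4 * E**-2.1 * (1 + 0.05 * rng.normal(size=E.size))

# Power-law fit in log-log space: log(flux) = log(A) - gamma * log(E).
slope, log_A = np.polyfit(np.log(E), np.log(flux), 1)
gamma = -slope
A = np.exp(log_A)

def energy_input(e_lo, e_hi):
    """Analytic integral of E * A * E**(-gamma) over [e_lo, e_hi] (gamma != 2)."""
    p = 2.0 - gamma
    return A * (e_hi**p - e_lo**p) / p

print(f"fitted spectral index gamma = {gamma:.2f}")
print(f"energy input, 10-1000 keV : {energy_input(10, 1000):.3e}")
print(f"energy input, 1-10000 keV : {energy_input(1, 10_000):.3e}")
```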

  2. Cure models for estimating hospital-based breast cancer survival.

    PubMed

    Rama, Ranganathan; Swaminathan, Rajaraman; Venkatesan, Perumal

    2010-01-01

    Research on cancer survival is enriched by the development and application of innovative analytical approaches in relation to standard methods. The aim of the present paper is to document the utility of a mixture model to estimate the cure fraction and compare it with other approaches. The data were for 1,107 patients with locally advanced breast cancer, who completed the neo-adjuvant treatment protocol during 1990-99 at the Cancer Institute (WIA), Chennai, India. Tumour stage, post-operative pathological node (PN) and tumour residue (TR) status were studied. Event-free survival probability was estimated using the Kaplan-Meier method. Cure models under proportional and non-proportional hazard assumptions, following a lognormal distribution for survival time, were used to estimate both the cure fraction and the survival function for the uncured. Event-free survival at 5 and 10 years was 64.2% and 52.6%, respectively, and the cure fraction was 47.5% for all cases together. Follow-up ranged between 0 and 15 years, and survival probabilities showed minimal changes after 7 years of follow-up. TR and PN emerged as independent prognostic factors using Cox and proportional hazard (PH) cure models. The proportionality condition was violated when tumour stage was considered, and it was statistically significant only under PH and not under non-PH cure models. However, TR and PN continued to be independent prognostic factors after adjusting for tumour stage using the non-PH cure model. A consistent ordering of cure fractions with respect to factors of PN and TR was forthcoming across tumour stages using PH and non-PH cure models, but perceptible differences in survival were observed between the two. If PH conditions are violated, analysis using a non-PH model is advocated, and mixture cure models are useful in estimating the cure fraction and constructing survival curves for the uncured.
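
    A minimal, hypothetical sketch of a mixture cure model with a lognormal survival time for the uncured fraction, fitted by maximum likelihood to simulated right-censored data (not the paper's data or code):

```python
# Hypothetical sketch of a mixture cure model:
#   S(t) = pi + (1 - pi) * S_u(t),
# with S_u lognormal, fitted by maximum likelihood to simulated censored data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 1000
cured = rng.random(n) < 0.45                       # true cure fraction 0.45
t_event = np.exp(rng.normal(1.5, 0.8, n))          # lognormal times for uncured
t_event[cured] = np.inf                            # cured patients never fail
t_censor = rng.uniform(1, 15, n)
time = np.minimum(t_event, t_censor)
event = (t_event <= t_censor).astype(float)

def neg_log_lik(params):
    logit_pi, mu, log_sigma = params
    pi = 1 / (1 + np.exp(-logit_pi))               # cure fraction in (0, 1)
    sigma = np.exp(log_sigma)
    z = (np.log(time) - mu) / sigma
    f_u = norm.pdf(z) / (sigma * time)             # lognormal density for uncured
    s_u = norm.sf(z)                               # lognormal survival for uncured
    ll = event * np.log((1 - pi) * f_u) + (1 - event) * np.log(pi + (1 - pi) * s_u)
    return -np.sum(ll)

fit = minimize(neg_log_lik, x0=[0.0, 1.0, 0.0], method="Nelder-Mead")
logit_pi, mu, log_sigma = fit.x
print(f"estimated cure fraction = {1 / (1 + np.exp(-logit_pi)):.2f}")
```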

  3. Density profiles in the Scrape-Off Layer interpreted through filament dynamics

    NASA Astrophysics Data System (ADS)

    Militello, Fulvio

    2017-10-01

    We developed a new theoretical framework to clarify the relation between radial Scrape-Off Layer density profiles and the fluctuations that generate them. The framework provides an interpretation of the experimental features of the profiles and of the turbulence statistics on the basis of simple properties of the filaments, such as their radial motion and their draining towards the divertor. L-mode and inter-ELM filaments are described as a Poisson process in which each event is independent and modelled with a wave function of amplitude and width statistically distributed according to experimental observations and evolving according to fluid equations. We will rigorously show that radially accelerating filaments, less efficient parallel exhaust and also a statistical distribution of their radial velocity can contribute to induce flatter profiles in the far SOL and therefore enhance plasma-wall interactions. A quite general result of our analysis is the resiliency of this non-exponential nature of the profiles and the increase of the relative fluctuation amplitude towards the wall, as experimentally observed. According to the framework, profile broadening at high fueling rates can be caused by interactions with neutrals (e.g. charge exchange) in the divertor or by a significant radial acceleration of the filaments. The framework assumptions were tested with 3D numerical simulations of seeded SOL filaments based on a two fluid model. In particular, filaments interact through the electrostatic field they generate only when they are in close proximity (separation comparable to their width in the drift plane), thus justifying our independence hypothesis. In addition, we will discuss how isolated filament motion responds to variations in the plasma conditions, and specifically divertor conditions. Finally, using the theoretical framework we will reproduce and interpret experimental results obtained on JET, MAST and HL-2A.

  4. Evolution of Requirements and Assumptions for Future Exploration Missions

    NASA Technical Reports Server (NTRS)

    Anderson, Molly; Sargusingh, Miriam; Perry, Jay

    2017-01-01

    NASA programs are maturing technologies, systems, and architectures to enable future exploration missions. To increase fidelity as technologies mature, developers must make assumptions that represent the requirements of a future program. Multiple efforts have begun to define these requirements, including team-internal assumptions, planning system integration for early demonstrations, and discussions between international partners planning future collaborations. For many detailed life support system requirements, existing NASA documents set limits of acceptable values, but a future vehicle may be constrained in other ways and may select a limited range of conditions. Other requirements are effectively set by interfaces or operations, and may be different for the same technology depending on whether the hardware is a demonstration system on the International Space Station or a critical component of a future vehicle. This paper highlights key assumptions representing potential life support requirements and explanations of the driving scenarios, constraints, or other issues that drive them.

  5. Use of Climate Information for Decision-Making and Impacts Research: State of Our Understanding

    DTIC Science & Technology

    2016-03-01

    SUMMARY Much of human society and its infrastructure has been designed and built on a key assumption: that future climate conditions at any given...experienced in the past. This assumption affects infrastructure design and maintenance, emergency response management, and long-term investment and planning...our scientific understanding of the climate system in a manner that incorporates user needs into the design of scientific experiments, and that

  6. Development and Validation of Methodology to Model Flow in Ventilation Systems Commonly Found in Nuclear Facilities. Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strons, Philip; Bailey, James L.; Davis, John

    2016-03-01

    In this work, we apply CFD to model airflow and particulate transport. This modeling is then compared to field validation studies to both inform and validate the modeling assumptions. Based on the results of field tests, modeling assumptions and boundary conditions are refined and the process is repeated until the results are found to be reliable with a high level of confidence.

  7. A possible loophole in the theorem of Bell.

    PubMed

    Hess, K; Philipp, W

    2001-12-04

    The celebrated inequalities of Bell are based on the assumption that local hidden parameters exist. When combined with conflicting experimental results, these inequalities appear to prove that local hidden parameters cannot exist. This contradiction suggests to many that only instantaneous action at a distance can explain the Einstein, Podolsky, and Rosen type of experiments. We show that, in addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions that contribute to his being able to obtain the desired contradiction. For instance, Bell assumes that the hidden parameters do not depend on time and are governed by a single probability measure independent of the analyzer settings. We argue that the exclusion of time has neither a physical nor a mathematical basis but is based on Bell's translation of the concept of Einstein locality into the language of probability theory. Our additional set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does not permit Bell-type proofs to go forward.

  8. Can we predict ectotherm responses to climate change using thermal performance curves and body temperatures?

    PubMed

    Sinclair, Brent J; Marshall, Katie E; Sewell, Mary A; Levesque, Danielle L; Willett, Christopher S; Slotsbo, Stine; Dong, Yunwei; Harley, Christopher D G; Marshall, David J; Helmuth, Brian S; Huey, Raymond B

    2016-11-01

    Thermal performance curves (TPCs), which quantify how an ectotherm's body temperature (T_b) affects its performance or fitness, are often used in an attempt to predict organismal responses to climate change. Here, we examine the key - but often biologically unreasonable - assumptions underlying this approach; for example, that physiology and thermal regimes are invariant over ontogeny, space and time, and also that TPCs are independent of previously experienced T_b. We show how a critical consideration of these assumptions can lead to biologically useful hypotheses and experimental designs. For example, rather than assuming that TPCs are fixed during ontogeny, one can measure TPCs for each major life stage and incorporate these into stage-specific ecological models to reveal the life stage most likely to be vulnerable to climate change. Our overall goal is to explicitly examine the assumptions underlying the integration of TPCs with T_b, to develop a framework within which empiricists can place their work within these limitations, and to facilitate the application of thermal physiology to understanding the biological implications of climate change. © 2016 John Wiley & Sons Ltd/CNRS.

  9. Classical Causal Models for Bell and Kochen-Specker Inequality Violations Require Fine-Tuning

    NASA Astrophysics Data System (ADS)

    Cavalcanti, Eric G.

    2018-04-01

    Nonlocality and contextuality are at the root of conceptual puzzles in quantum mechanics, and they are key resources for quantum advantage in information-processing tasks. Bell nonlocality is best understood as the incompatibility between quantum correlations and the classical theory of causality, applied to relativistic causal structure. Contextuality, on the other hand, is on a more controversial foundation. In this work, I provide a common conceptual ground between nonlocality and contextuality as violations of classical causality. First, I show that Bell inequalities can be derived solely from the assumptions of no signaling and no fine-tuning of the causal model. This removes two extra assumptions from a recent result from Wood and Spekkens and, remarkably, does not require any assumption related to independence of measurement settings—unlike all other derivations of Bell inequalities. I then introduce a formalism to represent contextuality scenarios within causal models and show that all classical causal models for violations of a Kochen-Specker inequality require fine-tuning. Thus, the quantum violation of classical causality goes beyond the case of spacelike-separated systems and already manifests in scenarios involving single systems.

  10. Literature Reviews on Modeling Internal Geometry of Textile Composites and Rate-Independent Continuum Damage

    NASA Technical Reports Server (NTRS)

    Su-Yuen, Hsu

    2011-01-01

    Textile composite materials have good potential for constructing composite structures where the effects of three-dimensional stresses are critical or geometric complexity is a manufacturing concern. There is a recent interest in advancing competence within Langley Research Center for modeling the degradation of mechanical properties of textile composites. In an initial effort, two critical areas are identified to pursue: (1) Construction of internal geometry of textile composites, and (2) Rate-independent continuum damage mechanics. This report documents reviews on the two subjects. Various reviewed approaches are categorized, their assumptions, methods, and progress are briefed, and then critiques are presented. Each review ends with recommended research.

  11. Certified randomness in quantum physics.

    PubMed

    Acín, Antonio; Masanes, Lluis

    2016-12-07

    The concept of randomness plays an important part in many disciplines. On the one hand, the question of whether random processes exist is fundamental for our understanding of nature. On the other, randomness is a resource for cryptography, algorithms and simulations. Standard methods for generating randomness rely on assumptions about the devices that are often not valid in practice. However, quantum technologies enable new methods for generating certified randomness, based on the violation of Bell inequalities. These methods are referred to as device-independent because they do not rely on any modelling of the devices. Here we review efforts to design device-independent randomness generators and the associated challenges.

  12. Beyond the "History of Ideas": The Issue of the "Ideological Origins of the Revolutions of Independence" Revisited.

    PubMed

    Palti, Elías

    2018-01-01

    This paper analyzes how Latin American historiography has addressed the issue of "the ideological origins of the revolution of independence," and how the formulation of that topic implies assumptions proper to the tradition of the history of ideas and leads to anachronistic conceptual transpositions. Halperín Donghi's work models a different approach, illuminating how a series of meaningful torsions within traditional languages provided the ideological framework for a result incompatible with those languages. This paradox forces a break with the frameworks of the history of ideas and the set of antinomies intrinsic to them, such as that between "tradition" and "modernity."

  13. Unbiased split variable selection for random survival forests using maximally selected rank statistics.

    PubMed

    Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas

    2017-04-15

    The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.
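
    A minimal sketch of the underlying idea (illustration only, not the authors' implementation): maximizing a raw rank statistic over all candidate cutpoints favours predictors with many possible split points even under the null, whereas comparing candidate split variables on an (approximately corrected) p-value scale removes much of that bias.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n = 200
      y = rng.normal(size=n)                      # outcome unrelated to any predictor (null case)
      x_many = rng.uniform(size=n)                # continuous predictor: ~n possible cutpoints
      x_few = rng.integers(0, 2, size=n)          # binary predictor: a single cutpoint

      def max_selected_stat(x, y):
          """Maximum absolute standardized rank statistic over candidate cutpoints."""
          r = stats.rankdata(y)
          values = []
          for c in np.unique(x)[:-1]:
              left = x <= c
              n1 = left.sum()
              if n1 == 0 or n1 == len(x):
                  continue
              mu = n1 * (len(x) + 1) / 2                             # mean of the rank sum under the null
              sd = np.sqrt(n1 * (len(x) - n1) * (len(x) + 1) / 12)   # its standard deviation
              values.append(abs((r[left].sum() - mu) / sd))
          return max(values), len(values)

      for name, x in [("many cutpoints", x_many), ("few cutpoints", x_few)]:
          t, k = max_selected_stat(x, y)
          p_naive = 2 * stats.norm.sf(t)          # ignores that the statistic was maximized
          p_adj = min(1.0, k * p_naive)           # crude Bonferroni correction over k cutpoints
          print(f"{name}: max stat={t:.2f}, naive p={p_naive:.3f}, adjusted p={p_adj:.3f}")

    The continuous predictor typically produces a spuriously small naive p-value under the null, which is the selection bias the p-value-scale comparison is meant to remove.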

  14. Perceptions of Psychological Abuse: The Role of Perpetrator Gender, Victim's Response, and Sexism.

    PubMed

    Capezza, Nicole M; D'Intino, Lauren A; Flynn, Margaret A; Arriaga, Ximena B

    2017-11-01

    It is commonly assumed that abuse perpetrated by men is more damaging than abuse perpetrated by women, just as it previously has been assumed that physical abuse is more harmful than psychological abuse. We sought to examine gender assumptions given that they may cause people to overlook the harm that men experience with a psychologically abusive partner. The current experiment compared perceptions of male and female perpetrators of psychological abuse, and examined whether gendered perceptions were affected by sexist beliefs or participants' own sex. The experiment also explored the effect of the victim's response to a perpetrator's abuse. College participants (N = 195) read a scenario depicting a hypothetical marital conflict that manipulated the sex of the perpetrator, the level of abuse (abuse or no abuse), and whether the victim did or did not respond with some aggression. In scenarios that featured abuse (relative to no-abuse conditions), a male perpetrator was consistently perceived more harshly than a female perpetrator. Participant sex and sexism did not moderate this gender-based perception. Varying the victim's response in the scenario affected perceptions more in the no-abuse condition than in the abuse condition. The findings are discussed in terms of robust gender assumptions and the difficulties in challenging such assumptions.

  15. Experimental measurement of binding energy, selectivity, and allostery using fluctuation theorems.

    PubMed

    Camunas-Soler, Joan; Alemany, Anna; Ritort, Felix

    2017-01-27

    Thermodynamic bulk measurements of binding reactions rely on the validity of the law of mass action and the assumption of a dilute solution. Yet, important biological systems such as allosteric ligand-receptor binding, macromolecular crowding, or misfolded molecules may not follow these assumptions and may require a particular reaction model. Here we introduce a fluctuation theorem for ligand binding and an experimental approach using single-molecule force spectroscopy to determine binding energies, selectivity, and allostery of nucleic acids and peptides in a model-independent fashion. A similar approach could be used for proteins. This work extends the use of fluctuation theorems beyond unimolecular folding reactions, bridging the thermodynamics of small systems and the basic laws of chemical equilibrium. Copyright © 2017, American Association for the Advancement of Science.
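
    A minimal sketch in the same spirit (a generic Jarzynski-style estimator on synthetic work values, not the ligand-binding fluctuation theorem introduced in the paper; all numbers are assumed): free-energy differences can be recovered from the distribution of non-equilibrium work measured in repeated single-molecule pulling experiments.

      import numpy as np

      rng = np.random.default_rng(1)
      kT = 4.11e-21                                   # J, thermal energy at ~298 K
      dG_true = 10 * kT                               # assumed free-energy difference
      sigma = 2 * kT                                  # spread of the work distribution
      # Gaussian work with mean dG + sigma^2/(2 kT), consistent with the fluctuation relation
      work = dG_true + sigma**2 / (2 * kT) + rng.normal(scale=sigma, size=5000)
      dG_est = -kT * np.log(np.mean(np.exp(-work / kT)))   # Jarzynski-equality estimator
      print(f"estimated dG = {dG_est / kT:.2f} kT (true {dG_true / kT:.1f} kT)")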

  16. Testing the mean for dependent business data.

    PubMed

    Liang, Jiajuan; Martin, Linda

    2008-01-01

    In business data analysis, it is well known that the comparison of several means is usually carried out by the F-test in analysis of variance under the assumption of independently collected data from all populations. This assumption, however, is likely to be violated in survey data collected from various questionnaires or in time-series data. As a result, it is problematic, and generally not justifiable, to apply the traditional F-test directly to the comparison of dependent means. In this article, we develop a generalized F-test for comparing population means with dependent data. Simulation studies show that the proposed test has a simple approximate null distribution and feasible finite-sample properties. Applications of the proposed test in analysis of survey data and time-series data are illustrated by two real datasets.
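
    A minimal sketch of why the independence assumption matters (illustration only, not the authors' generalized F-test; the dependence structure and parameters are assumed): when the observations within each group are positively autocorrelated, the ordinary one-way ANOVA F-test rejects a true null hypothesis far more often than its nominal level.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)

      def ar1(n, rho):
          """Zero-mean AR(1) series with unit marginal variance."""
          x = np.empty(n)
          x[0] = rng.normal()
          for t in range(1, n):
              x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.normal()
          return x

      n_obs, n_groups, rho, alpha, n_sim = 50, 3, 0.7, 0.05, 2000
      rejections = sum(
          stats.f_oneway(*[ar1(n_obs, rho) for _ in range(n_groups)]).pvalue < alpha
          for _ in range(n_sim)            # all groups share the same mean, so every rejection is a false positive
      )
      print(f"empirical type-I error: {rejections / n_sim:.3f} (nominal {alpha})")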

  17. On the fluctuations of sums of independent random variables.

    PubMed

    Feller, W

    1969-07-01

    If X_1, X_2, ... are independent random variables with zero expectation and finite variances, the cumulative sums S_n are, on the average, of the order of magnitude s_n, where s_n^2 = E(S_n^2). The occasional maxima of the ratios S_n/s_n are surprisingly large, and the problem is to estimate the extent of their probable fluctuations. Specifically, let S_n^* = (S_n - b_n)/a_n, where {a_n} and {b_n} are two numerical sequences. For any interval I, denote by p(I) the probability that the event S_n^* ∈ I occurs for infinitely many n. Under mild conditions on {a_n} and {b_n}, it is shown that p(I) equals 0 or 1 according as a certain series converges or diverges. To obtain the upper limit of S_n/a_n, one has to set b_n = ± ε a_n, but finer results are obtained with smaller b_n. No assumptions concerning the underlying distributions are made; the criteria explain structurally which features of {X_n} affect the fluctuations, but for concrete results something about P{S_n > a_n} must be known. For example, a complete solution is possible when the X_n are normal, replacing the classical law of the iterated logarithm. Further concrete estimates may be obtained by combining the new criteria with some recently developed limit theorems.
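
    A minimal sketch of the special case mentioned at the end (illustration only, with i.i.d. standard normal increments; Feller's criteria are far more general): the law of the iterated logarithm states that limsup_n S_n / sqrt(2 n log log n) = 1 almost surely, so the normalized ratio fluctuates but its running maximum stays of order one. Convergence is notoriously slow, so a finite run only gives a rough picture.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 10**6
      S = np.cumsum(rng.normal(size=n))
      k = np.arange(3, n + 1)                        # start at n = 3 so log(log n) > 0
      ratio = np.abs(S[2:]) / np.sqrt(2 * k * np.log(np.log(k)))
      print(f"max_n |S_n| / sqrt(2 n log log n) over the run: {ratio.max():.3f}")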

  18. Epistasis between neurochemical gene polymorphisms and risk for ADHD

    PubMed Central

    Segurado, Ricardo; Bellgrove, Mark A; Manconi, Francesca; Gill, Michael; Hawi, Ziarah

    2011-01-01

    A number of genes with function related to synaptic neurochemistry have been genetically associated with attention deficit/hyperactivity disorder. However, single variants acting alone can so far explain only a small proportion of the heritability of common psychiatric disorders. It has been postulated that the unexplained 'dark heritability' may at least in part be due to epistatic effects, which may account for the small observed marginal associations, and the difficulties with replication of positive findings. We undertook a comprehensive exploration of pair-wise interactions between genetic variants in 24 candidate genic regions involved in monoaminergic catabolism, anabolism, release, re-uptake and signal transmission in a sample of 177 parent-affected child trios using a case-only design and a case–pseudocontrol design using conditional logistic regression. Marker-pairs thresholded on interaction odds ratio (OR) and P-value are presented. We detected a number of interaction ORs >4.0, including an interesting correlation between markers in the ADRA1B and DBH genes in affected individuals, and several further interesting but smaller effects. These effects are no larger than would be expected by chance under the assumption of independence of all pair-wise relations; however, independence is unlikely. Furthermore, the size of these effects is of interest and attempts to replicate these results in other samples are anticipated. PMID:21368917
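
    A minimal sketch of the case-only interaction estimate (toy counts, not the study's genotype data): the association between two markers among affected individuals estimates their multiplicative interaction, provided the markers are independent in the source population, which is exactly the assumption the authors caution about.

      import numpy as np

      # cases cross-classified by carrier status at two markers (assumed counts)
      #                 marker B carrier   non-carrier
      cases = np.array([[30,               45],    # marker A carrier
                        [40,               62]])   # marker A non-carrier
      case_only_or = (cases[0, 0] * cases[1, 1]) / (cases[0, 1] * cases[1, 0])
      print(f"case-only interaction OR: {case_only_or:.2f}")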

  19. Where Are the Logical Errors in the Theory of Big Bang?

    NASA Astrophysics Data System (ADS)

    Kalanov, Temur Z.

    2015-04-01

    The critical analysis of the foundations of the theory of Big Bang is proposed. The unity of formal logic and of rational dialectics is the methodological basis of the analysis. It is argued that the starting point of the theory of Big Bang contains three fundamental logical errors. The first error is the assumption that a macroscopic object (having qualitative determinacy) can have an arbitrarily small size and can be in the singular state (i.e., in the state that has no qualitative determinacy). This assumption implies that the transition, (macroscopic object having the qualitative determinacy) --> (singular state of matter that has no qualitative determinacy), leads to loss of information contained in the macroscopic object. The second error is the assumption that there are the void and the boundary between matter and void. But if such a boundary existed, then it would mean that the void has dimensions and can be measured. The third error is the assumption that the singular state of matter can make a transition into the normal state without the existence of the program of qualitative and quantitative development of the matter, and without the controlling influence of another (independent) object. However, these assumptions conflict with practice and, consequently, with formal logic, rational dialectics, and cybernetics. Indeed, from the point of view of cybernetics, the transition, (singular state of the Universe) --> (normal state of the Universe), would be possible only if there were a Managed Object outside the Universe that had full, complete, and detailed information about the Universe. Thus, the theory of Big Bang is a scientific fiction.

  20. 7 CFR 765.401 - Conditions for transfer of real estate and chattel security.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 7 2013-01-01 2013-01-01 false Conditions for transfer of real estate and chattel security. 765.401 Section 765.401 Agriculture Regulations of the Department of Agriculture (Continued) FARM... Security and Assumption of Debt § 765.401 Conditions for transfer of real estate and chattel security. (a...

  1. 7 CFR 765.401 - Conditions for transfer of real estate and chattel security.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 7 2014-01-01 2014-01-01 false Conditions for transfer of real estate and chattel security. 765.401 Section 765.401 Agriculture Regulations of the Department of Agriculture (Continued) FARM... Security and Assumption of Debt § 765.401 Conditions for transfer of real estate and chattel security. (a...

  2. 7 CFR 765.401 - Conditions for transfer of real estate and chattel security.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Conditions for transfer of real estate and chattel security. 765.401 Section 765.401 Agriculture Regulations of the Department of Agriculture (Continued) FARM... Security and Assumption of Debt § 765.401 Conditions for transfer of real estate and chattel security. (a...

  3. 7 CFR 765.401 - Conditions for transfer of real estate and chattel security.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false Conditions for transfer of real estate and chattel security. 765.401 Section 765.401 Agriculture Regulations of the Department of Agriculture (Continued) FARM... Security and Assumption of Debt § 765.401 Conditions for transfer of real estate and chattel security. (a...

  4. 7 CFR 765.401 - Conditions for transfer of real estate and chattel security.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Conditions for transfer of real estate and chattel security. 765.401 Section 765.401 Agriculture Regulations of the Department of Agriculture (Continued) FARM... Security and Assumption of Debt § 765.401 Conditions for transfer of real estate and chattel security. (a...

  5. Twisted Radio Waves and Twisted Thermodynamics

    PubMed Central

    Kish, Laszlo B.; Nevels, Robert D.

    2013-01-01

    We present and analyze a gedanken experiment and show that the assumption that an antenna operating at a single frequency can transmit more than two independent information channels to the far field violates the Second Law of Thermodynamics. Transmission of a large number of channels, each associated with an angular momentum ‘twisted wave' mode, to the far field in free space is therefore not possible. PMID:23424647

  6. Self-Awareness in Human and Chimpanzee Infants: What is Measured and What is Meant by the Mark and Mirror Test?

    ERIC Educational Resources Information Center

    Bard, Kim A.; Todd, Brenda K.; Bernier, Chris; Love, Jennifer; Leavens, David A.

    2006-01-01

    The objective study of self-recognition, with a mirror and a mark applied to the face, was conducted independently by Gallup (1970) for use with chimpanzees and monkeys, and by Amsterdam (1972) for use with infant humans. Comparative psychologists have followed the model (and assumptions) set by Gallup, whereas developmental psychologists have…

  7. Multicolinearity and Indicator Redundancy Problem in World University Rankings: An Example Using Times Higher Education World University Ranking 2013-2014 Data

    ERIC Educational Resources Information Center

    Kaycheng, Soh

    2015-01-01

    World university ranking systems use the weight-and-sum approach to combine indicator scores into overall scores on which the universities are then ranked. This approach assumes that the indicators all independently contribute to the overall score in the specified proportions. In reality, this assumption is doubtful as the indicators tend to…
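
    A minimal sketch of the weight-and-sum computation (made-up weights and indicator scores, not the Times Higher Education data): the overall score is a fixed weighted sum, so when two indicators are nearly redundant they effectively enter the total with their combined weight rather than as independent contributions.

      import numpy as np

      weights = np.array([0.30, 0.30, 0.25, 0.15])      # assumed indicator weights
      scores = np.array([[80, 78, 60, 70],              # university 1 (indicators 1 and 2 nearly redundant)
                         [65, 64, 85, 75]])             # university 2
      overall = scores @ weights                        # weight-and-sum overall scores
      print("overall scores:", overall.round(1))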

  8. The double burden of undernutrition and excess body weight in Mexico.

    PubMed

    Kroker-Lobos, Maria F; Pedroza-Tobías, Andrea; Pedraza, Lilia S; Rivera, Juan A

    2014-12-01

    In Mexico, stunting and anemia have declined but are still high in some regions and subpopulations, whereas overweight and obesity have increased at alarming rates in all age and socioeconomic groups. The objective was to describe the coexistence of stunting, anemia, and overweight and obesity at the national, household, and individual levels. We estimated national prevalences of and trends for stunting, anemia, and overweight and obesity in children aged <5 y and in school-aged children (5-11 y old) and anemia and overweight and obesity in women aged 20-49 y by using the National Health and Nutrition Surveys conducted in 1988, 1999, 2006, and 2012. With the use of the most recent data (2012), the double burden of malnutrition at the household level was estimated and defined as the coexistence of stunting in children aged <5 y and overweight or obesity in the mother. At the individual level, double burden was defined as concurrent stunting and overweight and obesity in children aged 5-11 y and concurrent anemia and overweight or obesity in children aged 5-11 y and in women. We also tested if the coexistence of the conditions corresponded to expected values, under the assumption of independent distributions of each condition. At the household level, the prevalence of concurrent stunting in children aged <5 y and overweight and obesity in mothers was 8.4%; at the individual level, prevalences were 1% for stunting and overweight or obesity and 2.9% for anemia and overweight or obesity in children aged 5-11 y and 7.6% for anemia and overweight or obesity in women. At the household and individual levels in children aged 5-11 y, prevalences of double burden were significantly lower than expected, whereas the prevalence of concurrent anemia and overweight or obesity in women was not different from that expected. Although some prevalences of double burden were lower than expected, assuming independent distributions of the 2 conditions, the coexistence of stunting, overweight or obesity, and anemia at the national, household, and intraindividual levels in Mexico calls for policies and programs to prevent the 3 conditions. © 2014 American Society for Nutrition.
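
    A minimal sketch of the independence comparison the authors describe (the marginal prevalences below are assumed for illustration; only the 8.4% joint figure comes from the abstract): under independence, the expected prevalence of two co-occurring conditions is the product of their marginal prevalences.

      p_stunting = 0.14            # assumed marginal prevalence of stunting in children <5 y
      p_mother_ow = 0.70           # assumed marginal prevalence of maternal overweight/obesity
      expected_joint = p_stunting * p_mother_ow
      observed_joint = 0.084       # household-level double burden reported in the abstract
      print(f"expected under independence: {expected_joint:.3f}, observed: {observed_joint:.3f}")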

  9. Inference of cosmological parameters from gravitational waves: Applications to second generation interferometers

    NASA Astrophysics Data System (ADS)

    Del Pozzo, Walter

    2012-08-01

    The advanced worldwide network of gravitational waves (GW) observatories is scheduled to begin operations within the current decade. Thanks to their improved sensitivity, they promise to yield a number of detections and thus to open new observational windows for astronomy and astrophysics. Among the scientific goals that should be achieved, there is the independent measurement of the value of the cosmological parameters, hence an independent test of the current cosmological paradigm. Because of the importance of such a task, a number of studies have evaluated the capabilities of GW telescopes in this respect. However, since GW do not yield information about the source redshift, different groups have made different assumptions regarding the means through which the GW redshift can be obtained. These different assumptions imply also different methodologies to solve this inference problem. This work presents a formalism based on Bayesian inference developed to facilitate the inclusion of all assumptions and prior information about a GW source within a single data analysis framework. This approach guarantees the minimization of information loss and the possibility of including naturally event-specific knowledge (such as the sky position for a gamma ray burst-GW coincident observation) in the analysis. The workings of the method are applied to a specific example, loosely designed along the lines of the method proposed by Schutz in 1986, in which one uses information from wide-field galaxy surveys as prior information for the location of a GW source. I show that combining the results from few tens of observations from a network of advanced interferometers will constrain the Hubble constant H0 to an accuracy of ˜4%-5% at 95% confidence.

  10. Assessing the Performance of a Computer-Based Policy Model of HIV and AIDS

    PubMed Central

    Rydzak, Chara E.; Cotich, Kara L.; Sax, Paul E.; Hsu, Heather E.; Wang, Bingxia; Losina, Elena; Freedberg, Kenneth A.; Weinstein, Milton C.; Goldie, Sue J.

    2010-01-01

    Background: Model-based analyses, conducted within a decision analytic framework, provide a systematic way to combine information about the natural history of disease and effectiveness of clinical management strategies with demographic and epidemiological characteristics of the population. Challenges with disease-specific modeling include the need to identify influential assumptions and to assess the face validity and internal consistency of the model. Methods and Findings: We describe a series of exercises involved in adapting a computer-based simulation model of HIV disease to the Women's Interagency HIV Study (WIHS) cohort and assess model performance as we re-parameterized the model to address policy questions in the U.S. relevant to HIV-infected women using data from the WIHS. Empiric calibration targets included 24-month survival curves stratified by treatment status and CD4 cell count. The most influential assumptions in untreated women included chronic HIV-associated mortality following an opportunistic infection, and in treated women, the ‘clinical effectiveness' of HAART and the ability of HAART to prevent HIV complications independent of virologic suppression. Good-fitting parameter sets required reductions in the clinical effectiveness of 1st and 2nd line HAART and improvements in 3rd and 4th line regimens. Projected rates of treatment regimen switching using the calibrated cohort-specific model closely approximated independent analyses published using data from the WIHS. Conclusions: The model demonstrated good internal consistency and face validity, and supported cohort heterogeneities that have been reported in the literature. Iterative assessment of model performance can provide information about the relative influence of uncertain assumptions and provide insight into heterogeneities within and between cohorts. Description of calibration exercises can enhance the transparency of disease-specific models. PMID:20844741

  11. Assessing the performance of a computer-based policy model of HIV and AIDS.

    PubMed

    Rydzak, Chara E; Cotich, Kara L; Sax, Paul E; Hsu, Heather E; Wang, Bingxia; Losina, Elena; Freedberg, Kenneth A; Weinstein, Milton C; Goldie, Sue J

    2010-09-09

    Model-based analyses, conducted within a decision analytic framework, provide a systematic way to combine information about the natural history of disease and effectiveness of clinical management strategies with demographic and epidemiological characteristics of the population. Challenges with disease-specific modeling include the need to identify influential assumptions and to assess the face validity and internal consistency of the model. We describe a series of exercises involved in adapting a computer-based simulation model of HIV disease to the Women's Interagency HIV Study (WIHS) cohort and assess model performance as we re-parameterized the model to address policy questions in the U.S. relevant to HIV-infected women using data from the WIHS. Empiric calibration targets included 24-month survival curves stratified by treatment status and CD4 cell count. The most influential assumptions in untreated women included chronic HIV-associated mortality following an opportunistic infection, and in treated women, the 'clinical effectiveness' of HAART and the ability of HAART to prevent HIV complications independent of virologic suppression. Good-fitting parameter sets required reductions in the clinical effectiveness of 1st and 2nd line HAART and improvements in 3rd and 4th line regimens. Projected rates of treatment regimen switching using the calibrated cohort-specific model closely approximated independent analyses published using data from the WIHS. The model demonstrated good internal consistency and face validity, and supported cohort heterogeneities that have been reported in the literature. Iterative assessment of model performance can provide information about the relative influence of uncertain assumptions and provide insight into heterogeneities within and between cohorts. Description of calibration exercises can enhance the transparency of disease-specific models.

  12. Discrepancies between CFHTLenS cosmic shear and Planck: new physics or systematic effects?

    NASA Astrophysics Data System (ADS)

    Kitching, Thomas D.; Verde, Licia; Heavens, Alan F.; Jimenez, Raul

    2016-06-01

    There is currently a discrepancy in the measured value of the amplitude of matter clustering, parametrized using σ8, inferred from galaxy weak lensing, and cosmic microwave background (CMB) data, which could be an indication of new physics, such as massive neutrinos or a modification to the gravity law, or baryon feedback. In this paper we make the assumption that the cosmological parameters are well determined by Planck, and use weak lensing data to investigate the implications for baryon feedback and massive neutrinos, as well as possible contributions from intrinsic alignments and biases in photometric redshifts. We apply a non-parametric approach to model the baryonic feedback on the dark matter clustering, which is flexible enough to reproduce the OWLS (OverWhelmingly Large Simulations) and Illustris simulation results. The statistic we use, 3D cosmic shear, is a method that extracts cosmological information from weak lensing data using a spherical-Bessel function power spectrum approach. We analyse the CFHTLenS weak lensing data and, assuming best-fitting cosmological parameters from the Planck CMB experiment, find that there is no evidence for baryonic feedback on the dark matter power spectrum, but there is evidence for a bias in the photometric redshifts in the CFHTLenS data, consistent with a completely independent analysis by Choi et al., based on spectroscopic redshifts, and that these conclusions are robust to assumptions about the intrinsic alignment systematic. We also find an upper limit, of <0.28 eV (1σ), to the sum of neutrino masses conditional on other Λ-cold-dark-matter parameters being fixed.

  13. Maximization, learning, and economic behavior

    PubMed Central

    Erev, Ido; Roth, Alvin E.

    2014-01-01

    The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182

  14. Maximization, learning, and economic behavior.

    PubMed

    Erev, Ido; Roth, Alvin E

    2014-07-22

    The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.

  15. Modeling intelligent adversaries for terrorism risk assessment: some necessary conditions for adversary models.

    PubMed

    Guikema, Seth

    2012-07-01

    Intelligent adversary modeling has become increasingly important for risk analysis, and a number of different approaches have been proposed for incorporating intelligent adversaries in risk analysis models. However, these approaches are based on a range of often-implicit assumptions about the desirable properties of intelligent adversary models. This "Perspective" paper aims to further risk analysis for situations involving intelligent adversaries by fostering a discussion of the desirable properties for these models. A set of four basic necessary conditions for intelligent adversary models is proposed and discussed. These are: (1) behavioral accuracy to the degree possible, (2) computational tractability to support decision making, (3) explicit consideration of uncertainty, and (4) ability to gain confidence in the model. It is hoped that these suggested necessary conditions foster discussion about the goals and assumptions underlying intelligent adversary modeling in risk analysis. © 2011 Society for Risk Analysis.

  16. A genuinely discontinuous approach for multiphase EHD problems

    NASA Astrophysics Data System (ADS)

    Natarajan, Mahesh; Desjardins, Olivier

    2017-11-01

    Electrohydrodynamics (EHD) involves solving the Poisson equation for the electric field potential. For multiphase flows, although the electric field potential is a continuous quantity, the discontinuity in the electric permittivity between the phases means that additional jump conditions at the interface, for the normal and tangential components of the electric field, need to be satisfied. All approaches to date either ignore the jump conditions or involve simplifying assumptions, and hence yield unconvincing results even for simple test problems. In the present work, we develop a genuinely discontinuous approach for the Poisson equation for multiphase flows using a Finite Volume Unsplit Volume of Fluid method. The governing equation and the jump conditions are used without simplifying assumptions to develop the method, and its efficiency is demonstrated by comparison of the numerical results with canonical test problems having exact solutions.
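
    A minimal sketch of the discretization issue in one dimension (a toy finite-volume solve, far simpler than the authors' 3D unsplit Volume-of-Fluid scheme; the geometry and permittivities are assumed): using the harmonic mean of the permittivity at cell faces keeps the normal component of eps*dphi/dx continuous across the jump, which is the jump condition that simplified approaches tend to violate.

      import numpy as np

      n = 100
      dx = 1.0 / n
      x = (np.arange(n) + 0.5) * dx
      eps = np.where(x < 0.5, 1.0, 5.0)                            # permittivity jump at x = 0.5
      eps_face = 2 * eps[:-1] * eps[1:] / (eps[:-1] + eps[1:])     # harmonic mean at interior faces

      A = np.zeros((n, n))
      b = np.zeros(n)
      for i in range(n):
          west = eps[i] * 2 if i == 0 else eps_face[i - 1]         # boundary faces use the half-cell distance
          east = eps[i] * 2 if i == n - 1 else eps_face[i]
          A[i, i] = -(west + east)
          if i > 0:
              A[i, i - 1] = west
          if i < n - 1:
              A[i, i + 1] = east
      b[0] -= eps[0] * 2 * 1.0          # Dirichlet condition phi = 1 at x = 0
      b[-1] -= eps[-1] * 2 * 0.0        # Dirichlet condition phi = 0 at x = 1
      phi = np.linalg.solve(A, b)
      flux = eps_face * np.diff(phi) / dx
      print("normal flux across the domain (should be nearly uniform):",
            flux.min().round(4), flux.max().round(4))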

  17. A Summary of Publications on the Development of Mode-of ...

    EPA Pesticide Factsheets

    Chemical contaminants are formed as a consequence of chemical disinfection of public drinking waters. Chemical disinfectants, which are used to kill harmful microorganisms, react with natural organic matter (NOM), bromide, iodide, and other compounds, forming complex mixtures of potentially toxic disinfection byproducts (DBPs). The types and concentrations of DBPs formed during disinfection and the relative proportions of the components vary depending on factors such as source water conditions (e.g., types of NOM present), disinfectant type (e.g., chlorine, ozone, chloramine), and treatment conditions (e.g., pH and temperature). To date, over 500 DBPs have been detected in treated waters. However, typically more than 50% of the organic halide mass produced by chlorination disinfection consists of unidentified chemicals, which are not measured by routine analyses of DBPs. The protocols and methods typically used to evaluate chemical mixtures are best applied to simple defined mixtures consisting of relatively few chemicals. These approaches rely on assumptions (e.g., common mode of action, independent toxic action) regarding the type of joint toxic action (e.g., dose-additivity, synergism) that might be observed. Such methods, used for site assessments or toxicological studies, are often not sufficient to estimate health risk for complex drinking water DBP mixtures. Actual drinking water exposures involve multiple chemicals, many of w

  18. A theoretical basis for the analysis of redundant software subject to coincident errors

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques (fault-tolerant software) is an understanding of the impact of multiple, joint occurrences of coincident errors. A theoretical basis for the study of redundant software is developed which provides a probabilistic framework for empirically evaluating the effectiveness of the general (N-Version) strategy when component versions are subject to coincident errors, and permits an analytical study of the effects of these errors. The basic assumptions of the model are: (1) independently designed software components are chosen in a random sample; and (2) in the user environment, the system is required to execute on a stationary input series. The intensity of coincident errors has a central role in the model. This function describes the propensity to introduce design faults in such a way that software components fail together when executing in the user environment. The model is used to give conditions under which an N-Version system is a better strategy for reducing system failure probability than relying on a single version of software. A condition which limits the effectiveness of a fault-tolerant strategy is studied, and the question is posed whether system failure probability varies monotonically with increasing N or whether an optimal choice of N exists.
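
    A minimal sketch of the role of the intensity function (toy numbers, not the paper's parameters): if theta(x) is the probability that a randomly chosen, independently developed version fails on input x, the failure probability of an N-version majority vote is the expectation over the usage distribution of P(Binomial(N, theta(x)) > N/2), which can be much larger than the value obtained by assuming versions fail independently at the average rate.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      N = 3
      # toy usage distribution: most inputs are easy (theta small), a few are hard (theta large)
      theta = np.where(rng.random(100_000) < 0.99, 0.001, 0.5)
      majority_fail_given_x = stats.binom.sf(N // 2, N, theta)   # P(more than N/2 versions fail | x)
      p_system = majority_fail_given_x.mean()
      p_naive = stats.binom.sf(N // 2, N, theta.mean())          # pretend versions fail independently at the mean rate
      print(f"with input-dependent coincident errors: {p_system:.5f}")
      print(f"under the naive independence assumption: {p_naive:.5f}")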

  19. Evolution of cooperation in a finite homogeneous graph.

    PubMed

    Taylor, Peter D; Day, Troy; Wild, Geoff

    2007-05-24

    Recent theoretical studies of selection in finite structured populations have worked with one of two measures of selective advantage of an allele: fixation probability and inclusive fitness. Each approach has its own analytical strengths, but given certain assumptions they provide equivalent results. In most instances the structure of the population can be specified by a network of nodes connected by edges (that is, a graph), and much of the work here has focused on a continuous-time model of evolution, first described by ref. 11. Working in this context, we provide an inclusive fitness analysis to derive a surprisingly simple analytical condition for the selective advantage of a cooperative allele in any graph for which the structure satisfies a general symmetry condition ('bi-transitivity'). Our results hold for a broad class of population structures, including most of those analysed previously, as well as some for which a direct calculation of fixation probability has appeared intractable. Notably, under some forms of population regulation, the ability of a cooperative allele to invade is seen to be independent of the nature of population structure (and in particular of how game partnerships are specified) and is identical to that for an unstructured population. For other types of population regulation our results reveal that cooperation can invade if players choose partners along relatively 'high-weight' edges.

  20. Damping mathematical modelling and dynamic responses for FRP laminated composite plates with polymer matrix

    NASA Astrophysics Data System (ADS)

    Liu, Qimao

    2018-02-01

    This paper proposes the assumption that the fibre is an elastic material and the polymer matrix is a viscoelastic material, so that the energy dissipation in the dynamic response process depends only on the polymer matrix. The damping force vectors of FRP (Fibre-Reinforced Polymer matrix) laminated composite plates in the frequency and time domains are derived based on this assumption. The governing equations of FRP laminated composite plates are formulated in both frequency and time domains. The direct inversion method and direct time integration method for nonviscously damped systems are employed to solve the governing equations and obtain the dynamic responses in the frequency and time domains, respectively. The computational procedure is given in detail. Finally, dynamic responses (frequency responses with nonzero and zero initial conditions, free vibration, forced vibrations with nonzero and zero initial conditions) of a FRP laminated composite plate are computed using the proposed methodology. The proposed methodology can easily be incorporated into commercial finite element analysis software. The proposed assumption, based on the theory of material mechanics, needs to be further validated experimentally in the future.

  1. Latent class instrumental variables: a clinical and biostatistical perspective.

    PubMed

    Baker, Stuart G; Kramer, Barnett S; Lindeman, Karen S

    2016-01-15

    In some two-arm randomized trials, some participants receive the treatment assigned to the other arm as a result of technical problems, refusal of a treatment invitation, or a choice of treatment in an encouragement design. In some before-and-after studies, the availability of a new treatment changes from one time period to the next. Under assumptions that are often reasonable, the latent class instrumental variable (IV) method estimates the effect of treatment received in the aforementioned scenarios involving all-or-none compliance and all-or-none availability. Key aspects are four initial latent classes (sometimes called principal strata) based on treatment received if in each randomization group or time period, the exclusion restriction assumption (in which randomization group or time period is an instrumental variable), the monotonicity assumption (which drops an implausible latent class from the analysis), and the estimated effect of receiving treatment in one latent class (sometimes called efficacy, the local average treatment effect, or the complier average causal effect). Since its independent formulations in the biostatistics and econometrics literatures, the latent class IV method (which has no well-established name) has gained increasing popularity. We review the latent class IV method from a clinical and biostatistical perspective, focusing on underlying assumptions, methodological extensions, and applications in our fields of obstetrics and cancer research. Copyright © 2015 John Wiley & Sons, Ltd.
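
    A minimal sketch of the headline quantity (simulated data; the standard Wald/IV estimator for the complier average causal effect, which the review treats within a broader latent-class formulation): under randomization, the exclusion restriction, and monotonicity, the effect of treatment received among compliers is the intention-to-treat effect divided by the difference in treatment receipt between arms.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 100_000
      z = rng.integers(0, 2, size=n)                       # randomized assignment
      complier = rng.random(n) < 0.6                       # 60% compliers, 40% never-takers (monotonicity)
      d = z * complier                                     # treatment actually received
      y = 1.0 * d + rng.normal(size=n)                     # true effect of received treatment = 1.0
      itt = y[z == 1].mean() - y[z == 0].mean()            # intention-to-treat effect
      compliance = d[z == 1].mean() - d[z == 0].mean()     # first-stage difference in treatment receipt
      print(f"CACE estimate: {itt / compliance:.3f} (true 1.0)")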

  2. Evaluating methodological assumptions of a catch-curve survival estimation of unmarked precocial shorebird chicks

    USGS Publications Warehouse

    McGowan, Conor P.; Gardner, Beth

    2013-01-01

    Estimating productivity for precocial species can be difficult because young birds leave their nest within hours or days of hatching and detectability thereafter can be very low. Recently, a method for using a modified catch-curve to estimate precocial chick daily survival for age based count data was presented using Piping Plover (Charadrius melodus) data from the Missouri River. However, many of the assumptions of the catch-curve approach were not fully evaluated for precocial chicks. We developed a simulation model to mimic Piping Plovers, a fairly representative shorebird, and age-based count-data collection. Using the simulated data, we calculated daily survival estimates and compared them with the known daily survival rates from the simulation model. We conducted these comparisons under different sampling scenarios where the ecological and statistical assumptions had been violated. Overall, the daily survival estimates calculated from the simulated data corresponded well with true survival rates of the simulation. Violating the accurate aging and the independence assumptions did not result in biased daily survival estimates, whereas unequal detection for younger or older birds and violating the birth death equilibrium did result in estimator bias. Assuring that all ages are equally detectable and timing data collection to approximately meet the birth death equilibrium are key to the successful use of this method for precocial shorebirds.
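
    A minimal sketch of the catch-curve idea for age-based counts (simulated data, not the Missouri River analysis; parameter values are assumed): under a birth-death equilibrium with constant daily survival, counts decline geometrically with age, so the slope of log(count) against age estimates the log of daily survival.

      import numpy as np

      rng = np.random.default_rng(6)
      phi_true = 0.95                       # true daily survival
      ages = np.arange(0, 20)
      expected = 500 * phi_true ** ages     # stable age distribution under birth-death equilibrium
      counts = rng.poisson(expected)        # age-based counts with sampling noise
      slope, _ = np.polyfit(ages, np.log(counts), 1)
      print(f"estimated daily survival: {np.exp(slope):.3f} (true {phi_true})")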

  3. Detection and Characterization of Ground Displacement Sources from Variational Bayesian Independent Component Analysis of GPS Time Series

    NASA Astrophysics Data System (ADS)

    Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.

    2014-12-01

    A critical point in the analysis of ground displacement time series is the development of data-driven methods that allow one to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is the Principal Component Analysis (PCA), which allows one to reduce the dimensionality of the data space while retaining most of the explained variance of the dataset. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies. Indeed, PCA does not perform well in finding the solution to the so-called Blind Source Separation (BSS) problem, i.e. in recovering and separating the original sources that generated the observed data. This is mainly due to the assumptions on which PCA relies: it looks for a new Euclidean space where the projected data are uncorrelated. Usually, the uncorrelation condition is not strong enough, and it has been proven that the BSS problem can be tackled by requiring the components to be independent. The Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient signals. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mixture of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here we present the application of the vbICA technique to GPS position time series. First, we use vbICA on synthetic data that simulate a seismic cycle (interseismic + coseismic + postseismic + seasonal + noise), and study the ability of the algorithm to recover the original (known) sources of deformation. Second, we apply vbICA to different tectonically active scenarios, such as earthquakes in central and northern Italy, as well as the study of slow slip events in Cascadia.
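
    A minimal sketch of the blind source separation task on synthetic GPS-like data (using scikit-learn's FastICA as a stand-in; the study's vbICA models each source with a mixture of Gaussians and is more flexible): recover a seasonal signal and a coseismic step that are mixed with different amplitudes into several station time series.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(7)
      t = np.arange(1000)                                 # days
      seasonal = np.sin(2 * np.pi * t / 365.25)           # annual signal
      coseismic = (t > 600).astype(float)                 # step at an "earthquake"
      sources = np.column_stack([seasonal, coseismic])
      mixing = rng.normal(size=(2, 6))                    # 6 stations, different responses to each source
      data = sources @ mixing + 0.05 * rng.normal(size=(len(t), 6))

      ica = FastICA(n_components=2, random_state=0)
      recovered = ica.fit_transform(data)                 # columns approximate the sources (up to sign/scale/order)
      print("correlation of recovered components with true sources:")
      print(np.corrcoef(np.column_stack([sources, recovered]).T)[:2, 2:].round(2))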

  4. Tip of the Tongue States Increase Under Evaluative Observation.

    PubMed

    James, Lori E; Schmank, Christopher J; Castro, Nichol; Buchanan, Tony W

    2018-02-01

    We tested the frequent assumption that the difficulty of word retrieval increases when a speaker is being observed and evaluated. We modified the Trier Social Stress Test (TSST) so that participants believed that its evaluative observation components continued throughout the duration of a subsequent word retrieval task, and measured participants' reported tip of the tongue (TOT) states. Participants in this TSST condition experienced more TOTs than participants in a comparable, placebo TSST condition in which there was no suggestion of evaluative observation. This experiment provides initial evidence confirming the assumption that evaluative observation by a third party can be disruptive to word retrieval. We interpret our findings by proposing an extension to a well-supported theoretical model of TOTs.

  5. Experimental investigation of two-phase heat transfer in a porous matrix.

    NASA Technical Reports Server (NTRS)

    Von Reth, R.; Frost, W.

    1972-01-01

    One-dimensional two-phase flow transpiration cooling through porous metal is studied experimentally. The experimental data are compared with a previous one-dimensional analysis. Good agreement with the calculated temperature distribution is obtained as long as the basic assumptions of the analytical model are satisfied. Deviations from the basic assumptions are caused by nonhomogeneous and oscillating flow conditions. Preliminary derivation of nondimensional parameters which characterize the stable and unstable flow conditions is given. Superheated liquid droplets observed sputtering from the heated surface indicated incomplete evaporation at heat fluxes well in excess of the latent energy transport. A parameter is developed to account for the nonequilibrium thermodynamic effects. Measured and calculated pressure drops show contradicting trends which are attributed to capillary forces.

  6. Secondary analysis of a scoping review of health promotion interventions for persons with disabilities: Do health promotion interventions for people with mobility impairments address secondary condition reduction and increased community participation?

    PubMed

    White, Glen W; Gonda, Chiaki; Peterson, Jana J; Drum, Charles E

    2011-04-01

    Secondary conditions can have very serious outcomes for people with physical disabilities. Such consequences can range from immobility due to pressure sores to withdrawal and isolation due to depression, decreasing participation in the community. To further investigate these assumptions, we conducted a review of the literature on health promotion interventions that include physical activity for adults with disabilities to determine whether they have a positive effect on the reduction of secondary conditions and increased community participation. We conducted a secondary analysis of the results of a scoping review of health promotion programs containing physical activity for people with mobility impairments (N = 5). This secondary analysis examined the relationship between health promotion containing physical activity and prevention of secondary conditions among people with various physical disabilities. We further examined evidence and effects of independent variables on the outcome of increased community participation for study participants. The outcomes from this investigation are varied, with 2 studies providing evidence of reducing secondary conditions while another shared anecdotal statements referencing a decrease in secondary conditions. Of the remaining 2 studies in this paper, 1 showed no intervention effect on reducing secondary conditions while the remaining study reported an increase in secondary conditions. Regarding increased participation in the community, 2 of 5 studies directly reported on these outcomes, while increased community participation was referenced in another 2 articles, but without any data presented. The final study did not report on any post-intervention outcomes in the community. This review demonstrates that research on health promotion interventions containing physical activity lacks description of whether such interventions help reduce or prevent secondary conditions. Additionally, the review shows that further work is needed in terms of sustaining health program effects beyond the initial proximal activity gains, with attention given toward more distal outcomes of increased participation by participants in the community. Copyright © 2011. Published by Elsevier Inc.

  7. General aptitude and the assumption of truth in deductively rational reasoning about probable but false antecedent to consequent relations

    PubMed Central

    Schroyens, Walter; Fleerackers, Lieve; Maes, Sunile

    2010-01-01

    Two experiments (N1 = 117 and N2 = 245) on reasoning with knowledge-rich conditionals showed a main effect of logical validity, which was due to the negative effect of counter-examples being smaller for valid than for invalid arguments. These findings support the thesis that some people tend to inhibit background knowledge inconsistent with the hypothetical truth of the premises, while others tend to abandon the implicit truth-assumption when they have factual evidence to the contrary. Findings show that adhering to the truth-assumption in the face of conflicting factual evidence requires an investment of time and effort that people with a higher general aptitude are more likely to make. PMID:21228921

  8. Characterization and parameterization of aerosol cloud condensation nuclei activation under different pollution conditions

    PubMed Central

    Che, H. C.; Zhang, X. Y.; Wang, Y. Q.; Zhang, L.; Shen, X. J.; Zhang, Y. M.; Ma, Q. L.; Sun, J. Y.; Zhang, Y. W.; Wang, T. T.

    2016-01-01

    To better understand the cloud condensation nuclei (CCN) activation capacity of aerosol particles in different pollution conditions, a long-term field experiment was carried out at a regional GAW (Global Atmosphere Watch) station in the Yangtze River Delta area of China. The homogeneity of aerosol particles was the highest in clean weather, with the highest active fraction of all the weather types. For pollution with the same visibility, the residual aerosol particles in higher relative humidity weather conditions were more externally mixed and heterogeneous, with a lower hygroscopic capacity. The hygroscopic capacity (κ) of organic aerosols can be classified into 0.1 and 0.2 in different weather types. The particles at ~150 nm were easily activated in haze weather conditions. For CCN predictions, the bulk chemical composition method was closer to observations at low supersaturations (≤0.1%), whereas when the supersaturation was ≥0.2%, the size-resolved chemical composition method was more accurate. As for the mixing state of the aerosol particles, in haze, heavy haze, and severe haze weather conditions CCN predictions based on the internal mixing assumption were robust, whereas for other weather conditions, predictions based on the external mixing assumption were more accurate. PMID:27075947

  9. Overcoming multicollinearity in multiple regression using correlation coefficient

    NASA Astrophysics Data System (ADS)

    Zainodin, H. J.; Yap, S. J.

    2013-09-01

    Multicollinearity happens when there are high correlations among independent variables. In this case, it is difficult to distinguish between the contributions of these independent variables to the dependent variable, as they may compete to explain much of the same variance. Moreover, multicollinearity violates an assumption of multiple regression: that there is no collinearity among the candidate independent variables. An alternative approach is therefore introduced to overcome the multicollinearity problem and ultimately achieve a well-represented model. This approach removes the multicollinearity source variables on the basis of correlation coefficient values taken from the full correlation matrix. Using the full correlation matrix also facilitates the use of Excel functions for removing the multicollinearity source variables. The procedure is found to be easier and more time-saving, especially when dealing with a greater number of independent variables in a model and a large number of possible models. This paper presents, compares and implements the procedure in detail.
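    As an illustration of the idea in this record, the sketch below flags and drops predictors whose pairwise correlation in the full correlation matrix exceeds a threshold. The threshold value, the keep/drop rule, and the function name drop_collinear are illustrative assumptions, not the authors' exact Excel-based procedure.

```python
import numpy as np
import pandas as pd

def drop_collinear(X: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    """Drop independent variables whose pairwise |correlation| exceeds a threshold.

    Simplified sketch of removing multicollinearity "source variables" from the
    full correlation matrix; the threshold and drop rule are assumptions only.
    """
    corr = X.corr().abs()                                   # full correlation matrix
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return X.drop(columns=to_drop)

# Example with a deliberately collinear predictor
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
X = pd.DataFrame({"x1": x1,
                  "x2": x1 + rng.normal(scale=0.01, size=200),   # near-duplicate of x1
                  "x3": rng.normal(size=200)})
print(drop_collinear(X).columns.tolist())   # x2 removed as a collinearity source
```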

  10. Conservatism in America--What Does it Mean for Teacher Education?

    ERIC Educational Resources Information Center

    Dolce, Carl J.

    The current conflict among opposing sets of cultural ideals is illustrated by several interrelated conditions. The conservative phenomenon is more complex than the traditional liberal-conservative dichotomy would suggest. Changes in societal conditions invite a reexamination of basic assumptions across the broad spectrum of political ideology.…

  11. Exploring Pattern of Socialisation Conditions and Human Development by Nonlinear Multivariate Analysis.

    ERIC Educational Resources Information Center

    Grundmann, Matthias

    Following the assumptions of ecological socialization research, adequate analysis of socialization conditions must take into account the multilevel and multivariate structure of social factors that impact on human development. This statement implies that complex models of family configurations or of socialization factors are needed to explain the…

  12. Psychologic-Pedagogical Conditions for Prevention of Suicidal Tendencies among Teenagers

    ERIC Educational Resources Information Center

    Abil, Yerkin A.; Kim, Natalia P.; Baymuhambetova, Botagoz Sh.; Mamiyev, Nurlan B.; Li, Yelena D.; Shumeyko, Tatyana S.

    2016-01-01

    The aim of this research is to develop a complex of psychological and pedagogical conditions directed at the prevention of suicidal tendencies among teenagers. On the basis of an analysis of the scientific literature, the authors describe the main causes of suicidal behavior in adolescence. To confirm the scientific validity of the advanced theoretical assumptions, an experiment is described, conducted on the basis…

  13. Conditional Processes of Effective Instructor Communication and Increases in Students' Cognitive Learning

    ERIC Educational Resources Information Center

    Bolkan, San; Goodboy, Alan K.; Myers, Scott A.

    2017-01-01

    This study examined two effective teaching behaviors traditionally considered by instructional communication scholars to associate positively with students' academic experiences: instructor clarity and immediacy. Our study situated these teaching behaviors in a conditional process model that integrated two key assumptions about student learning:…

  14. 77 FR 66936 - Special Conditions: Turbomeca Ardiden 3K Turboshaft Engine

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-08

    ... Limitations Section limits in accordance with those assumptions, such that engine deterioration is not... rating can be used during any one flight, and there is no cumulative time limitation. Special conditions... in one flight is not limited, and there is no cumulative time limitation. Applicability As discussed...

  15. Using Correlation to Compute Better Probability Estimates in Plan Graphs

    NASA Technical Reports Server (NTRS)

    Bryce, Daniel; Smith, David E.

    2006-01-01

    Plan graphs are commonly used in planning to help compute heuristic "distance" estimates between states and goals. A few authors have also attempted to use plan graphs in probabilistic planning to compute estimates of the probability that propositions can be achieved and actions can be performed. This is done by propagating probability information forward through the plan graph from the initial conditions through each possible action to the action effects, and hence to the propositions at the next layer of the plan graph. The problem with these calculations is that they make very strong independence assumptions - in particular, they usually assume that the preconditions for each action are independent of each other. This can lead to gross overestimates in probability when the plans for those preconditions interfere with each other. It can also lead to gross underestimates of probability when there is synergy between the plans for two or more preconditions. In this paper we introduce a notion of the binary correlation between two propositions and actions within a plan graph, show how to propagate this information within a plan graph, and show how this improves probability estimates for planning. This notion of correlation can be thought of as a continuous generalization of the notion of mutual exclusion (mutex) often used in plan graphs. At one extreme (correlation = 0) two propositions or actions are completely mutex. With correlation = 1, two propositions or actions are independent, and with correlation > 1, two propositions or actions are synergistic. Intermediate values can and do occur, indicating different degrees to which propositions and actions interfere or are synergistic. We compare this approach with another recent approach by Bryce that computes probability estimates using Monte Carlo simulation of possible worlds in plan graphs.
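    A minimal sketch of the contrast the abstract draws between the independence assumption and correlation-weighted propagation: a single correlation factor scales the joint probability of two preconditions, with 0 meaning mutex, 1 meaning independent, and values above 1 meaning synergistic. The clipping rule and the exact propagation scheme are assumptions for illustration, not the authors' algorithm.

```python
def joint_prob(p_a: float, p_b: float, corr: float = 1.0) -> float:
    """Probability that two preconditions hold jointly.

    corr is a correlation factor in the spirit of the abstract (exact
    definition assumed here): corr = 1 reproduces the usual independence
    assumption, corr = 0 makes the preconditions mutually exclusive, and
    corr > 1 indicates synergy between the plans achieving them. The
    product is clipped to [0, 1] because corr is only an estimate.
    """
    return min(1.0, max(0.0, p_a * p_b * corr))

# Independence assumption vs. interfering or synergistic preconditions
print(joint_prob(0.6, 0.5))             # 0.30 under independence
print(joint_prob(0.6, 0.5, corr=0.4))   # 0.12 when the plans interfere
print(joint_prob(0.6, 0.5, corr=1.5))   # 0.45 when the plans are synergistic
```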

  16. Beyond Markov: Accounting for independence violations in causal reasoning.

    PubMed

    Rehder, Bob

    2018-06-01

    Although many theories of causal cognition are based on causal graphical models, a key property of such models-the independence relations stipulated by the Markov condition-is routinely violated by human reasoners. This article presents three new accounts of those independence violations, accounts that share the assumption that people's understanding of the correlational structure of data generated from a causal graph differs from that stipulated by the causal graphical model framework. To distinguish these models, experiments assessed how people reason with causal graphs that are larger than those tested in previous studies. A traditional common cause network (Y1 ← X → Y2) was extended so that the effects themselves had effects (Z1 ← Y1 ← X → Y2 → Z2). A traditional common effect network (Y1 → X ← Y2) was extended so that the causes themselves had causes (Z1 → Y1 → X ← Y2 ← Z2). Subjects' inferences were most consistent with the beta-Q model, in which consistent states of the world (those in which variables are either mostly all present or mostly all absent) are viewed as more probable than stipulated by the causal graphical model framework. Substantial variability in subjects' inferences was also observed, with the result that substantial minorities of subjects were best fit by one of the other models (the dual prototype or leaky gate models). The discrepancy between normative and human causal cognition stipulated by these models is foundational in the sense that they locate the error not in people's causal reasoning but rather in their causal representations. As a result, they are applicable to any cognitive theory grounded in causal graphical models, including theories of analogy, learning, explanation, categorization, decision-making, and counterfactual reasoning. Preliminary evidence that independence violations indeed generalize to other judgment types is presented. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Development of quality indicators for low-risk labor care provided by midwives using a RAND-modified Delphi method.

    PubMed

    Ueda, Kayo; Ohtera, Shosuke; Kaso, Misato; Nakayama, Takeo

    2017-09-22

    In childbirth, most deliveries are low-risk, defined as spontaneous labor at full term without special high-risk factors or complications, especially in high-resource countries where maternal and perinatal mortality rates are very low. Indeed, the majority of mothers and infants have no serious conditions during labor. However, the quality of care provided is not assured, and performance may vary by birthing facility and provider. The overuse of technology in childbirth in some parts of the world is almost certainly based on assumptions like "something can go wrong at any minute." There is a need to assess the quality of care provided for mothers and infants in low-risk labor. We aimed to develop specific quality indicators for low-risk labor care provided primarily by midwives in Japan. We used a RAND-modified Delphi method, which integrates evidence review with expert consensus development. The procedure comprises five steps: (1) literature review, including clinical practice guidelines, to extract and develop quality indicator candidates; (2) formation of a multidisciplinary panel; (3) independent panel ratings (Round 1); (4) panel meeting and independent panel ratings (Round 2); and (5) independent panel ratings (Round 3). The three independent panel ratings (Rounds 1-3) were held between July and December 2012. The assembled multidisciplinary panel comprised eight clinicians (two pediatricians, three obstetricians, and three midwives) and three mothers who were nonclinicians. The evidentiary review extracted 166 key recommendations from 32 clinical practice guidelines, and 31 existing quality indicators were added. After excluding duplicate recommendations and quality indicators, the panel discussed 25 candidate indicators. Of these, 18 were adopted, one was modified, six were not adopted, and four were added during the meeting. We established 23 quality indicators for low-risk labor care provided by midwives in labor units in Japan.

  18. BAYESIAN SEMI-BLIND COMPONENT SEPARATION FOR FOREGROUND REMOVAL IN INTERFEROMETRIC 21 cm OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Le; Timbie, Peter T.; Bunn, Emory F.

    In this paper, we present a new Bayesian semi-blind approach for foreground removal in observations of the 21 cm signal measured by interferometers. The technique, which we call H i Expectation–Maximization Independent Component Analysis (HIEMICA), is an extension of the Independent Component Analysis technique developed for two-dimensional (2D) cosmic microwave background maps to three-dimensional (3D) 21 cm cosmological signals measured by interferometers. This technique provides a fully Bayesian inference of power spectra and maps and separates the foregrounds from the signal based on the diversity of their power spectra. Relying only on the statistical independence of the components, this approach can jointly estimate the 3D power spectrum of the 21 cm signal, as well as the 2D angular power spectrum and the frequency dependence of each foreground component, without any prior assumptions about the foregrounds. This approach has been tested extensively by applying it to mock data from interferometric 21 cm intensity mapping observations under idealized assumptions of instrumental effects. We also discuss the impact when the noise properties are not known completely. As a first step toward solving the 21 cm power spectrum analysis problem, we compare the semi-blind HIEMICA technique to the commonly used Principal Component Analysis. Under the same idealized circumstances, the proposed technique provides significantly improved recovery of the power spectrum. This technique can be applied in a straightforward manner to all 21 cm interferometric observations, including epoch of reionization measurements, and can be extended to single-dish observations as well.

  19. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
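    The abstract's recommendation to prefer cross-validation over the Akaike Information Criterion can be illustrated generically, for example by choosing the number of retained components in a probabilistic PCA model by held-out log-likelihood. The sketch below is only such a stand-in under simplifying assumptions (pure PCA, simulated data); it is not the authors' mixed ICA/PCA algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score

# Simulate 10 observed variables driven by 3 latent sources plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 3))
mixing = rng.normal(size=(3, 10))
X = latent @ mixing + 0.1 * rng.normal(size=(300, 10))

# Cross-validated model selection: score() of PCA is the average held-out
# log-likelihood under the probabilistic PCA model.
scores = {k: cross_val_score(PCA(n_components=k), X, cv=5).mean()
          for k in range(1, 8)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))   # typically recovers the 3 generating sources
```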

  20. Testing common assumptions in studies of songbird nest success

    USGS Publications Warehouse

    Streby, Henry M.; Andersen, David

    2013-01-01

    We studied Ovenbird Seiurus aurocapilla and Golden-winged Warbler Vermivora chrysoptera populations in northern Minnesota, USA, to test two common assumptions in studies of songbird nest success: (1) that the condition of an empty nest on or near its expected fledge date is an indicator of nest fate; and (2) that the presence of a fledgling or family group within a territory confirms a successful nest in that territory. We monitored the condition of nests and used radiotelemetry to monitor juveniles through the expected fledging date and early post-fledging period. Of nests that contained nestlings 1–2 days before the expected fledge date, fates were misidentified using nest condition alone for 9.5% of Ovenbird nests, but those misidentifications were made in both directions (succeeded or failed), yielding only a small bias in estimated nest success. However, 20% of Golden-winged Warbler nests were misidentified as successful using nest condition during the final visit interval, biasing the nest success estimate upward by 21–28% depending on the treatment of uncertain nest fates. Fledgling Ovenbirds from 58% of nests travelled beyond their natal territory within 24 h, rising to 98% after 5 days, and those fledglings travelled up to 390 m from nests within 10 days of fledging. Fledgling Golden-winged Warblers from 13% of nests travelled beyond their natal territory within 24 h, rising to 85% after 5 days, and those fledglings travelled up to 510 m from nests within 10 days of fledging. We conclude that nest condition and fledgling presence can be misleading indicators of nest fate, probably commonly biasing nest success estimates upward, and we recommend that these assumptions should be tested in additional species.

  1. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.

  2. Sensitivity of wildlife habitat models to uncertainties in GIS data

    NASA Technical Reports Server (NTRS)

    Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.

    1992-01-01

    Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.

  3. Modelling lecturer performance index of private university in Tulungagung by using survival analysis with multivariate adaptive regression spline

    NASA Astrophysics Data System (ADS)

    Hasyim, M.; Prastyo, D. D.

    2018-03-01

    Survival analysis models the relationship between independent variables and survival time as the dependent variable. In practice, not all survival data can be recorded completely, for various reasons; in such situations, the data are called censored data. Moreover, several models for survival analysis require assumptions. One nonparametric approach in survival analysis offers more relaxed assumptions. In this research, the nonparametric approach employed is the Multivariate Adaptive Regression Spline (MARS). This study aims to measure the performance of a private university's lecturers. The survival time in this study is the duration needed by a lecturer to obtain their professional certificate. The results show that research activity is a significant factor, along with developing course materials, good publication in international or national journals, and activities in research collaboration.

  4. The power of charisma—perceived charisma inhibits the frontal executive network of believers in intercessory prayer

    PubMed Central

    Stødkilde-Jørgensen, Hans; Geertz, Armin W.; Lund, Torben E.; Roepstorff, Andreas

    2011-01-01

    This study used functional magnetic resonance imaging to investigate how assumptions about speakers’ abilities changed the evoked BOLD response in secular and Christian participants who received intercessory prayer. We find that recipients’ assumptions about senders’ charismatic abilities have important effects on their executive network. Most notably, the Christian participants deactivated the frontal network consisting of the medial and the dorsolateral prefrontal cortex bilaterally in response to speakers who they believed had healing abilities. An independent analysis across subjects revealed that this deactivation predicted the Christian participants’ subsequent ratings of the speakers’ charisma and experience of God’s presence during prayer. These observations point to an important mechanism of authority that may facilitate charismatic influence, a mechanism which is likely to be present in other interpersonal interactions as well. PMID:20228138

  5. Effect of Item Response Theory (IRT) Model Selection on Testlet-Based Test Equating. Research Report. ETS RR-14-19

    ERIC Educational Resources Information Center

    Cao, Yi; Lu, Ru; Tao, Wei

    2014-01-01

    The local item independence assumption underlying traditional item response theory (IRT) models is often not met for tests composed of testlets. There are 3 major approaches to addressing this issue: (a) ignore the violation and use a dichotomous IRT model (e.g., the 2-parameter logistic [2PL] model), (b) combine the interdependent items to form a…

  6. Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data.

    PubMed

    Li, Johnson Ching-Hong

    2016-12-01

    In psychological science, the "new statistics" refer to the new statistical practices that focus on effect size (ES) evaluation instead of conventional null-hypothesis significance testing (Cumming, Psychological Science, 25, 7-29, 2014). In a two-independent-samples scenario, Cohen's (1988) standardized mean difference (d) is the most popular ES, but its accuracy relies on two assumptions: normality and homogeneity of variances. Five other ESs - the unscaled robust d (d_r*; Hogarty & Kromrey, 2001), scaled robust d (d_r; Algina, Keselman, & Penfield, Psychological Methods, 10, 317-328, 2005), point-biserial correlation (r_pb; McGrath & Meyer, Psychological Methods, 11, 386-401, 2006), common-language ES (CL; Cliff, Psychological Bulletin, 114, 494-509, 1993), and nonparametric estimator for CL (A_w; Ruscio, Psychological Methods, 13, 19-30, 2008) - may be robust to violations of these assumptions, but no study has systematically evaluated their performance. Thus, in this simulation study the performance of these six ESs was examined across five factors: data distribution, sample, base rate, variance ratio, and sample size. The results showed that A_w and d_r were generally robust to these violations, and A_w slightly outperformed d_r. Implications for the use of A_w and d_r in real-world research are discussed.
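    For readers who want to try the estimators discussed here, the sketch below computes Cohen's d with a pooled standard deviation and a nonparametric common-language effect size (the probability that a random score from one group exceeds one from the other, ties counted half). It is a simplified illustration; Ruscio's A_w and Algina et al.'s robust d involve refinements not reproduced here.

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d with a pooled SD (assumes normality and equal variances)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                     / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled

def common_language_es(x, y):
    """P(random x > random y), ties counted one half; a simplified sketch of
    the nonparametric common-language effect size discussed in the abstract."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    greater = (x[:, None] > y[None, :]).mean()
    ties = (x[:, None] == y[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(1)
g1 = rng.normal(0.5, 1.0, size=40)
g2 = rng.normal(0.0, 2.0, size=40)   # unequal variances violate d's assumptions
print(round(cohens_d(g1, g2), 2), round(common_language_es(g1, g2), 2))
```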

  7. Effect of particle size on droplet infiltration into hydrophobic porous media as a model of water repellent soil.

    PubMed

    Hamlett, Christopher A E; Shirtcliffe, Neil J; McHale, Glen; Ahn, Sujung; Bryant, Robert; Doerr, Stefan H; Newton, Michael I

    2011-11-15

    The wettability of soil is of great importance for plants and soil biota, and in determining the risk of preferential flow, surface runoff, flooding, and soil erosion. The molarity of ethanol droplet (MED) test is widely used for quantifying the severity of water repellency in soils that show reduced wettability and is assumed to be independent of soil particle size. The minimum ethanol concentration at which droplet penetration occurs within a short time (≤ 10 s) provides an estimate of the initial advancing contact angle at which spontaneous wetting is expected. In this study, we test the assumption of particle size independence using a simple model of soil, represented by layers of small (~0.2-2 mm) diameter beads, to predict the effect of changing bead radius in the top layer on capillary-driven imbibition. Experimental results using a three-layer bead system show broad agreement with the model and demonstrate a dependence of the MED test on particle size. The results show that the critical initial advancing contact angle for penetration can be considerably less than 90° and varies with particle size, demonstrating that a key assumption currently used in the MED testing of soil is not necessarily valid.

  8. Quantifying biological samples using Linear Poisson Independent Component Analysis for MALDI-ToF mass spectra

    PubMed Central

    Deepaisarn, S; Tar, P D; Thacker, N A; Seepujak, A; McMahon, A W

    2018-01-01

    Motivation: Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI) facilitates the analysis of large organic molecules. However, the complexity of biological samples and MALDI data acquisition leads to high levels of variation, making reliable quantification of samples difficult. We present a new analysis approach that we believe is well-suited to the properties of MALDI mass spectra, based upon an Independent Component Analysis derived for Poisson sampled data. Simple analyses have been limited to studying small numbers of mass peaks, via peak ratios, which is known to be inefficient. Conventional PCA and ICA methods have also been applied, which extract correlations between any number of peaks, but which, we argue, make inappropriate assumptions regarding data noise, i.e. uniform and Gaussian. Results: We provide evidence that the Gaussian assumption is incorrect, motivating the need for our Poisson approach. The method is demonstrated by making proportion measurements from lipid-rich binary mixtures of lamb brain and liver, and also goat and cow milk. These allow our measurements and error predictions to be compared to ground truth. Availability and implementation: Software is available via the open source image analysis system TINA Vision, www.tina-vision.net. Contact: paul.tar@manchester.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29091994

  9. Radio afterglow rebrightening: evidence for multiple active phases in gamma-ray burst central engines

    NASA Astrophysics Data System (ADS)

    Li, Long-Biao; Zhang, Zhi-Bin; Rice, Jared

    2015-09-01

    The rebrightening phenomenon is an interesting feature in some X-ray, optical, and radio afterglows of gamma-ray bursts (GRBs). Here, we propose a possible energy-supply assumption to explain the rebrightenings of radio afterglows, in which the central engine with multiple active phases can supply at least two GRB pulses in a typical GRB duration time. Considering the case of double pulses supplied by the central engine, the double pulses have separate physical parameters, except for the number density of the surrounding interstellar medium (ISM). Their independent radio afterglows are integrated by the ground detectors to form the rebrightening phenomenon. In this Letter, we firstly simulate diverse rebrightening light curves under consideration of different and independent physical parameters. Using this assumption, we also give our best fit to the radio afterglow of GRB 970508 at three frequencies of 1.43, 4.86, and 8.46 GHz. We suggest that the central engine may be active continuously at a timescale longer than that of a typical GRB duration time as many authors have suggested (e.g., Zhang et al., Astrophys. J. 787:66, 2014; Gao and Mészáros, Astrophys. J. 802:90, 2015), and that it may supply enough energy to cause the long-lasting rebrightenings observed in some GRB afterglows.

  10. Measurement-device-independent quantum cryptography

    DOE PAGES

    Xu, Feihu; Curty, Marcos; Qi, Bing; ...

    2014-12-18

    In theory, quantum key distribution (QKD) provides information-theoretic security based on the laws of physics. Owing to the imperfections of real-life implementations, however, there is a big gap between the theory and practice of QKD, which has been recently exploited by several quantum hacking activities. To fill this gap, a novel approach, called measurement-device-independent QKD (mdiQKD), has been proposed. In addition, it can remove all side-channels from the measurement unit, arguably the most vulnerable part in QKD systems, thus offering a clear avenue toward secure QKD realisations. In this study, we review the latest developments in the framework of mdiQKD, together with its assumptions, strengths, and weaknesses.

  11. Deterministic MDI QKD with two secret bits per shared entangled pair

    NASA Astrophysics Data System (ADS)

    Zebboudj, Sofia; Omar, Mawloud

    2018-03-01

    Although quantum key distribution schemes have been proven theoretically secure, they are based on assumptions about the devices that are not yet satisfied with today's technology. The measurement-device-independent scheme has been proposed to shorten the gap between theory and practice by removing all detector side-channel attacks. On the other hand, two-way quantum key distribution schemes have been proposed to raise the secret key generation rate. In this paper, we propose a new quantum key distribution scheme able to achieve a relatively high secret key generation rate based on two-way quantum key distribution that also inherits the robustness of the measurement-device-independent scheme against detector side-channel attacks.

  12. Insecurity of Detector-Device-Independent Quantum Key Distribution.

    PubMed

    Sajeed, Shihan; Huang, Anqi; Sun, Shihai; Xu, Feihu; Makarov, Vadim; Curty, Marcos

    2016-12-16

    Detector-device-independent quantum key distribution (DDI-QKD) held the promise of being robust to detector side channels, a major security loophole in quantum key distribution (QKD) implementations. In contrast to what has been claimed, however, we demonstrate that the security of DDI-QKD is not based on postselected entanglement, and we introduce various eavesdropping strategies that show that DDI-QKD is in fact insecure against detector side-channel attacks as well as against other attacks that exploit imperfections of the receiver's devices. Our attacks are valid even when the QKD apparatuses are built by the legitimate users of the system themselves, and thus, free of malicious modifications, which is a key assumption in DDI-QKD.

  13. Effects of various assumptions on the calculated liquid fraction in isentropic saturated equilibrium expansions

    NASA Technical Reports Server (NTRS)

    Bursik, J. W.; Hall, R. M.

    1980-01-01

    The saturated equilibrium expansion approximation for two phase flow often involves ideal-gas and latent-heat assumptions to simplify the solution procedure. This approach is well documented by Wegener and Mack and works best at low pressures where deviations from ideal-gas behavior are small. A thermodynamic expression for liquid mass fraction that is decoupled from the equations of fluid mechanics is used to compare the effects of the various assumptions on nitrogen-gas saturated equilibrium expansion flow starting at 8.81 atm, 2.99 atm, and 0.45 atm, which are conditions representative of transonic cryogenic wind tunnels. For the highest pressure case, the entire set of ideal-gas and latent-heat assumptions are shown to be in error by 62 percent for the values of heat capacity and latent heat. An approximation of the exact, real-gas expression is also developed using a constant, two phase isentropic expansion coefficient which results in an error of only 2 percent for the high pressure case.

  14. A dualist analysis of abortion: personhood and the concept of self qua experiential subject.

    PubMed

    Himma, K E

    2005-01-01

    There is no issue more central to the abortion debate than the controversial issue of whether the fetus is a moral person. Abortion-rights opponents almost universally claim that abortion is murder and should be legally prohibited because the fetus is a moral person at the moment of conception. Abortion-rights proponents almost universally deny the crucial assumption that the fetus is a person; on their view, whatever moral disvalue abortion involves does not rise to the level of murder and hence does not rise to the level of something that should be legally prohibited. In this essay, I argue that, under dualist assumptions about the nature of mind, the fetus is not a person until brain activity has begun. First, I argue it is a necessary condition for a thing to be a moral person that it is (or has) a self. Second, I argue it is a necessary condition for a fetus to be (or have) a self, under dualist assumptions, that there has been some electrical activity in the brain. I conclude that a dualist can take the position that abortion ought to be legally permitted at least until the beginning of brain activity in the fetus. I make no attempt to determine what conditions are sufficient for moral personhood; for this reason, the relevant claim about personhood is purely negative.

  15. Spontaneous bimanual independence during parallel tapping and sawing.

    PubMed

    Starke, Sandra Dorothee; Baber, Chris

    2017-01-01

    The performance of complex polyrhythms-rhythms where the left and right hand move at different rates-is usually the province of highly trained individuals. However, studies in which hand movement is guided haptically show that even novices can perform polyrhythms with no or only brief training. In this study, we investigated whether novices are able to tap with one hand by matching different rates of a metronome while sawing with the other hand. This experiment was based on the assumption that saw movement is controlled consistently at a predictable rate without the need for paying primary attention to it. It would follow that consciously matching different stipulated metronome rates with the other hand would result in the spontaneous performance of polyrhythms. Six experimental conditions were randomised: single handed tapping and sawing as well as four bimanual conditions with expected ratios of 1:1 (performed with and without matching a metronome) as well as 3:4 and 4:3 (performed matching a metronome). Results showed that participants executed the saw movement at a consistent cycle duration of 0.44 [0.20] s to 0.51 [0.19] s across single and bimanual conditions, with no significant effect of the condition on the cycle duration (p = 0.315). Similarly, free tapping was executed at a cycle duration of 0.48 [0.22] s. In the bimanual conditions, we found that for a ratio of 4:3 (4 taps against 3 sawing cycles per measure), the observed and predicted ratio of 0.75 were not significantly different (p = 0.369), supporting our hypothesis of the spontaneous adoption of polyrhythms. However, for a ratio of 3:4 (3 taps against 4 sawing cycles per measure), the observed and predicted ratio differed (p = 0.016), with a trend towards synchronisation. Our findings show that bimanual independence when performing complex polyrhythms can in principle be achieved if the movement of one hand can be performed without paying much-if any-attention to it. In this paradigm, small rhythmic arm movements are possibly driven by an intrinsic timing which leads to spontaneous convergence on a cycle duration of around 0.5 s, while the movement of the other hand can be controlled consciously to occur at desired rates.

  16. Elucidating the functional relationship between working memory capacity and psychometric intelligence: a fixed-links modeling approach for experimental repeated-measures designs.

    PubMed

    Thomas, Philipp; Rammsayer, Thomas; Schweizer, Karl; Troche, Stefan

    2015-01-01

    Numerous studies reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ in respect to how close these two constructs are related to each other. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) and are of particular interest for experimental, repeated-measures designs. With this technique, processes systematically varying across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that experimental manipulation in a WMC task leads to increasing demands on WMC, the processes systematically varying across task conditions can be assumed to be WMC-specific. Processes not varying across task conditions, on the other hand, are probably independent of WMC. Fixed-links models allow for representing these two kinds of processes by two independent latent variables. In contrast to traditional CFA where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise or purified representation of the WMC-related processes of interest. By using fixed-links modeling to analyze data of 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable which reflected processes that varied as a function of experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) as well as the constant processes involved in the task (β = .45) were related to Gf. Taken together, these two latent variables explained the same portion of variance of Gf as a single latent variable obtained by traditional CFA (β = .65) indicating that traditional CFA causes an overestimation of the effective relationship between WMC and Gf. Thus, fixed-links modeling provides a feasible method for a more valid investigation of the functional relationship between specific constructs.

  17. Spontaneous bimanual independence during parallel tapping and sawing

    PubMed Central

    Baber, Chris

    2017-01-01

    The performance of complex polyrhythms—rhythms where the left and right hand move at different rates—is usually the province of highly trained individuals. However, studies in which hand movement is guided haptically show that even novices can perform polyrhythms with no or only brief training. In this study, we investigated whether novices are able to tap with one hand by matching different rates of a metronome while sawing with the other hand. This experiment was based on the assumption that saw movement is controlled consistently at a predictable rate without the need for paying primary attention to it. It would follow that consciously matching different stipulated metronome rates with the other hand would result in the spontaneous performance of polyrhythms. Six experimental conditions were randomised: single handed tapping and sawing as well as four bimanual conditions with expected ratios of 1:1 (performed with and without matching a metronome) as well as 3:4 and 4:3 (performed matching a metronome). Results showed that participants executed the saw movement at a consistent cycle duration of 0.44 [0.20] s to 0.51 [0.19] s across single and bimanual conditions, with no significant effect of the condition on the cycle duration (p = 0.315). Similarly, free tapping was executed at a cycle duration of 0.48 [0.22] s. In the bimanual conditions, we found that for a ratio of 4:3 (4 taps against 3 sawing cycles per measure), the observed and predicted ratio of 0.75 were not significantly different (p = 0.369), supporting our hypothesis of the spontaneous adoption of polyrhythms. However, for a ratio of 3:4 (3 taps against 4 sawing cycles per measure), the observed and predicted ratio differed (p = 0.016), with a trend towards synchronisation. Our findings show that bimanual independence when performing complex polyrhythms can in principle be achieved if the movement of one hand can be performed without paying much—if any—attention to it. In this paradigm, small rhythmic arm movements are possibly driven by an intrinsic timing which leads to spontaneous convergence on a cycle duration of around 0.5 s, while the movement of the other hand can be controlled consciously to occur at desired rates. PMID:28542581

  18. Nonlinear VLF Wave Physics in the Radiation Belts

    NASA Astrophysics Data System (ADS)

    Crabtree, C. E.; Tejero, E. M.; Ganguli, G.; Mithaiwala, M.; Rudakov, L.; Hospodarsky, G. B.; Kletzing, C.

    2014-12-01

    Electromagnetic VLF waves, such as whistler mode waves, both control the lifetime of trapped electrons in the radiation belts by pitch-angle scattering and are responsible for the energization of electrons during storms. Traditional approaches to understanding the influence of waves on trapped electrons have assumed that the wave characteristics (frequency spectrum, wave-normal angle distribution, etc.) were both stationary in time and amplitude independent from event to event. In situ data from modern satellite missions, such as the Van Allen probes, are showing that this assumption may not be justified. In addition, recent theoretical results [Crabtree et al. 2012] show that the threshold for nonlinear wave scattering can often be met by naturally occurring VLF waves in the magnetosphere, with wave magnetic fields of the order of 50-100 pT inside the plasmapause. Nonlinear wave scattering (Nonlinear Landau Damping) is an amplitude dependent mechanism that can strongly alter VLF wave propagation [Ganguli et al. 2010], primarily by altering the direction of propagation. Laboratory results have confirmed the dramatic change in propagation direction when the pump wave has sufficient amplitude to exceed the nonlinear threshold [Tejero et al. 2014]. Nonlinear scattering can alter the macroscopic dynamics of waves in the radiation belts leading to the formation of a long-lasting wave-cavity [Crabtree et al. 2012] and, when amplification is present, a multi-pass amplifier [Ganguli et al., 2012]. Such nonlinear wave effects can dramatically reduce electron lifetimes. Nonlinear wave dynamics such as these occur when more than one wave is present; such a condition necessarily violates the assumption of traditional wave-normal analysis [Santolik et al., 2003], which relies on the plane wave assumption. To investigate nonlinear wave dynamics using modern in situ data we apply the maximum entropy method [Skilling and Bryan, 1984] to solve for the wave distribution function [Storey and Lefeuvre, 1979] to yield the power distribution as a function of wave-normal angle and local azimuthal angle. We have validated this technique in the NRL space chamber and applied this methodology to Van Allen probe data to demonstrate that traditional wave-normal analysis can give misleading results when multiple waves are present.

  19. Assessing the Role of the 'Unity Assumption' on Multisensory Integration: A Review.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2017-01-01

    There has been longstanding interest from both experimental psychologists and cognitive neuroscientists in the potential modulatory role of various top-down factors on multisensory integration/perception in humans. One such top-down influence, often referred to in the literature as the 'unity assumption,' is thought to occur in those situations in which an observer considers that various of the unisensory stimuli that they have been presented with belong to one and the same object or event (Welch and Warren, 1980). Here, we review the possible factors that may lead to the emergence of the unity assumption. We then critically evaluate the evidence concerning the consequences of the unity assumption from studies of the spatial and temporal ventriloquism effects, from the McGurk effect, and from the Colavita visual dominance paradigm. The research that has been published to date using these tasks provides support for the claim that the unity assumption influences multisensory perception under at least a subset of experimental conditions. We then consider whether the notion has been superseded in recent years by the introduction of priors in Bayesian causal inference models of human multisensory perception. We suggest that the prior of common cause (that is, the prior concerning whether multisensory signals originate from the same source or not) offers the most useful way to quantify the unity assumption as a continuous cognitive variable.

  20. The competing risks Cox model with auxiliary case covariates under weaker missing-at-random cause of failure.

    PubMed

    Nevo, Daniel; Nishihara, Reiko; Ogino, Shuji; Wang, Molin

    2017-08-04

    In the analysis of time-to-event data with multiple causes using a competing risks Cox model, often the cause of failure is unknown for some of the cases. The probability of a missing cause is typically assumed to be independent of the cause given the time of the event and covariates measured before the event occurred. In practice, however, the underlying missing-at-random assumption does not necessarily hold. Motivated by colorectal cancer molecular pathological epidemiology analysis, we develop a method to conduct valid analysis when additional auxiliary variables are available for cases only. We consider a weaker missing-at-random assumption, with missing pattern depending on the observed quantities, which include the auxiliary covariates. We use an informative likelihood approach that will yield consistent estimates even when the underlying model for missing cause of failure is misspecified. The superiority of our method over naive methods in finite samples is demonstrated by simulation study results. We illustrate the use of our method in an analysis of colorectal cancer data from the Nurses' Health Study cohort, where, apparently, the traditional missing-at-random assumption fails to hold.

  1. NMR studies of excluded volume interactions in peptide dendrimers.

    PubMed

    Sheveleva, Nadezhda N; Markelov, Denis A; Vovk, Mikhail A; Mikhailova, Maria E; Tarasenko, Irina I; Neelov, Igor M; Lähderanta, Erkki

    2018-06-11

    Peptide dendrimers are good candidates for diverse biomedical applications due to their biocompatibility and low toxicity. The local orientational mobility of groups with different radial localization inside dendrimers is an important characteristic for drug and gene delivery, synthesis of nanoparticles, and other specific purposes. In this paper we focus on the validation of two theoretical assumptions for dendrimers: (i) independence of NMR relaxations from excluded volume effects and (ii) similarity of the mobilities of side and terminal segments of dendrimers. For this purpose we study the 1H NMR spin-lattice relaxation time, T1H, of two similar peptide dendrimers of the second generation, with and without side fragments in their inner segments. Temperature dependences of 1/T1H in the temperature range from 283 to 343 K were measured for inner and terminal groups of the dendrimers dissolved in deuterated water. We have shown that the 1/T1H temperature dependences of the inner groups for both dendrimers (with and without side fragments) practically coincide despite the different densities of atoms inside these dendrimers. This result confirms the first theoretical assumption. The second assumption is confirmed by the 1/T1H temperature dependences of the terminal groups, which are similar for both dendrimers.

  2. A Projection and Density Estimation Method for Knowledge Discovery

    PubMed Central

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state of the art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675
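    A crude sketch of the 1d-decomposition idea: model a d-dimensional density as a product of one-dimensional kernel density estimates, so that all estimation happens in 1-D space. Full feature independence is the simplest possible decomposition and is assumed here purely for illustration; the paper's framework is considerably more flexible.

```python
import numpy as np
from scipy.stats import gaussian_kde

class NaiveKDE:
    """Estimate a d-dimensional density as a product of 1-D kernel density
    estimates. This is the crudest possible 1d-decomposition (full feature
    independence), shown only to illustrate why per-dimension estimation
    sidesteps the curse of dimensionality."""

    def fit(self, X: np.ndarray) -> "NaiveKDE":
        self.kdes_ = [gaussian_kde(X[:, j]) for j in range(X.shape[1])]
        return self

    def pdf(self, X: np.ndarray) -> np.ndarray:
        dens = np.ones(X.shape[0])
        for j, kde in enumerate(self.kdes_):
            dens *= kde(X[:, j])       # every estimate is purely one-dimensional
        return dens

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))         # 10-D data, still only ten 1-D estimates
model = NaiveKDE().fit(X)
print(model.pdf(X[:3]))
```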

  3. Can Newton's Third Law Be "Derived" from the Second?

    NASA Astrophysics Data System (ADS)

    Gangopadhyaya, Asim; Harrington, James

    2017-04-01

    Newton's laws have engendered much discussion over several centuries. Today, the internet is awash with a plethora of information on this topic. We find many references to Newton's laws, often discussions of various types of misunderstandings and ways to explain them. Here we present an intriguing example that shows an assumption hidden in Newton's third law that is often overlooked. As is well known, the first law defines an inertial frame of reference and the second law determines the acceleration of a particle in such a frame due to an external force. The third law describes forces exerted on each other in a two-particle system, and allows us to extend the second law to a system of particles. Students are often taught that the three laws are independent. Here we present an example that challenges this assumption. At first glance, it seems to show that, at least for a special case, the third law follows from the second law. However, a careful examination of the assumptions demonstrates that is not quite the case. Ultimately, the example does illustrate the significance of the concept of mass in linking Newton's dynamical principles.

  4. Megahertz-Rate Semi-Device-Independent Quantum Random Number Generators Based on Unambiguous State Discrimination

    NASA Astrophysics Data System (ADS)

    Brask, Jonatan Bohr; Martin, Anthony; Esposito, William; Houlmann, Raphael; Bowles, Joseph; Zbinden, Hugo; Brunner, Nicolas

    2017-05-01

    An approach to quantum random number generation based on unambiguous quantum state discrimination is developed. We consider a prepare-and-measure protocol, where two nonorthogonal quantum states can be prepared, and a measurement device aims at unambiguously discriminating between them. Because the states are nonorthogonal, this necessarily leads to a minimal rate of inconclusive events whose occurrence must be genuinely random and which provide the randomness source that we exploit. Our protocol is semi-device-independent in the sense that the output entropy can be lower bounded based on experimental data and a few general assumptions about the setup alone. It is also practically relevant, which we demonstrate by realizing a simple optical implementation, achieving rates of 16.5 Mbit/s. Combining ease of implementation, a high rate, and a real-time entropy estimation, our protocol represents a promising approach intermediate between fully device-independent protocols and commercial quantum random number generators.

  5. Finite-key analysis for measurement-device-independent quantum key distribution.

    PubMed

    Curty, Marcos; Xu, Feihu; Cui, Wei; Lim, Charles Ci Wen; Tamaki, Kiyoshi; Lo, Hoi-Kwong

    2014-04-29

    Quantum key distribution promises unconditionally secure communications. However, as practical devices tend to deviate from their specifications, the security of some practical systems is no longer valid. In particular, an adversary can exploit imperfect detectors to learn a large part of the secret key, even though the security proof claims otherwise. Recently, a practical approach--measurement-device-independent quantum key distribution--has been proposed to solve this problem. However, so far its security has only been fully proven under the assumption that the legitimate users of the system have unlimited resources. Here we fill this gap and provide a rigorous security proof against general attacks in the finite-key regime. This is obtained by applying large deviation theory, specifically the Chernoff bound, to perform parameter estimation. For the first time we demonstrate the feasibility of long-distance implementations of measurement-device-independent quantum key distribution within a reasonable time frame of signal transmission.

  6. CO component estimation based on the independent component analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
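    A small sketch of this workflow on mock data: mix a Gaussian CMB-like signal, a dust-like component, and a sparse, highly non-Gaussian CO-like component into three frequency maps, unmix them with FastICA, and rank the recovered components by kurtosis. The mixing weights and toy signal models are assumptions for illustration only and are not the mock maps used in this record.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

# Toy "sky maps" at three frequencies, built from three simulated components.
rng = np.random.default_rng(0)
n_pix = 20000
cmb = rng.normal(size=n_pix)                                    # Gaussian signal
dust = rng.gamma(shape=2.0, scale=1.0, size=n_pix)              # skewed foreground
co = np.where(rng.random(n_pix) < 0.02,                         # sparse "line" emission
              rng.exponential(5.0, n_pix), 0.0)
A = np.array([[1.0, 0.8, 0.9],    # hypothetical weights at 100 GHz
              [1.0, 1.0, 0.1],    # 143 GHz
              [1.0, 2.5, 1.2]])   # 217 GHz
maps = np.stack([cmb, dust, co], axis=1) @ A.T                  # (n_pix, 3) mixtures

ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(maps)          # estimated independent components
k = kurtosis(sources, axis=0)
print("component kurtosis:", np.round(k, 1))
print("most non-Gaussian (CO-like) component:", int(np.argmax(k)))
```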

  7. A SAS macro for testing differences among three or more independent groups using Kruskal-Wallis and Nemenyi tests.

    PubMed

    Liu, Yuewei; Chen, Weihong

    2012-02-01

    As a nonparametric method, the Kruskal-Wallis test is widely used to compare three or more independent groups when an ordinal or interval level of data is available, especially when the assumptions of analysis of variance (ANOVA) are not met. If the Kruskal-Wallis statistic is statistically significant, the Nemenyi test is an alternative method for further pairwise multiple comparisons to locate the source of significance. Unfortunately, most popular statistical packages do not integrate the Nemenyi test, which is not easy to calculate by hand. We described the theory and applications of the Kruskal-Wallis and Nemenyi tests, and presented a flexible SAS macro to implement the two tests. The SAS macro was demonstrated by two examples from our cohort study in occupational epidemiology. It provides a useful tool for SAS users to test the differences among three or more independent groups using a nonparametric method.
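    The record describes a SAS macro; a rough Python equivalent of the same workflow is sketched below. The omnibus test uses scipy's kruskal, and the pairwise follow-up is a simplified Dunn-type normal approximation on mean-rank differences with a Bonferroni-style adjustment, standing in for the exact Nemenyi critical values, so the p-values are only approximate.

```python
import itertools
import numpy as np
from scipy.stats import kruskal, rankdata, norm

def pairwise_rank_tests(*groups):
    """Approximate pairwise comparisons after a significant Kruskal-Wallis test.

    Uses the large-sample normal approximation for mean-rank differences and a
    Bonferroni-style correction; a sketch, not the exact Nemenyi procedure.
    """
    data = np.concatenate(groups)
    ranks = rankdata(data)                          # joint ranking of all observations
    sizes = [len(g) for g in groups]
    bounds = np.cumsum([0] + sizes)
    mean_ranks = [ranks[bounds[i]:bounds[i + 1]].mean() for i in range(len(groups))]
    n, k = len(data), len(groups)
    results = {}
    for i, j in itertools.combinations(range(k), 2):
        se = np.sqrt(n * (n + 1) / 12.0 * (1.0 / sizes[i] + 1.0 / sizes[j]))
        z = abs(mean_ranks[i] - mean_ranks[j]) / se
        p = 2 * (1 - norm.cdf(z))
        results[(i, j)] = min(1.0, p * (k * (k - 1) / 2))   # adjusted p-value
    return results

a = [68, 72, 75, 80, 62]
b = [81, 85, 90, 88, 79]
c = [60, 58, 65, 70, 63]
print(kruskal(a, b, c))             # omnibus test across the three groups
print(pairwise_rank_tests(a, b, c)) # which pairs drive the significance
```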

  8. A parameterization of the passive layer of a quasigeostrophic flow in a continuously-stratified ocean

    NASA Astrophysics Data System (ADS)

    Benilov, E. S.

    2018-05-01

    This paper examines quasigeostrophic flows in an ocean that can be subdivided into an upper active layer (AL) and a lower passive layer (PL), with the flow and density stratification mainly confined to the former. Under this assumption, an asymptotic model is derived parameterizing the effect of the PL on the AL. The model depends only on the PL's depth, whereas its Väisälä-Brunt frequency turns out to be unimportant (as long as it is small). Under an additional assumption-that the potential vorticity field in the PL is well-diffused and, thus, uniform-the derived model reduces to a simple boundary condition. This condition is to be applied at the AL/PL interface, after which the PL can be excluded from consideration.

  9. Bartnik’s splitting conjecture and Lorentzian Busemann function

    NASA Astrophysics Data System (ADS)

    Amini, Roya; Sharifzadeh, Mehdi; Bahrampour, Yousof

    2018-05-01

    In 1988 Bartnik posed the splitting conjecture about the cosmological space-time. This conjecture has been proved by several people, with different approaches and under additional assumptions such as the 'S-ray condition' and the 'level set condition'. It is known that the 'S-ray condition' yields the 'level set condition'. We have proved that the two are indeed equivalent, by giving a different proof under the assumption of the 'level set condition'. In addition, we have shown several properties of the cosmological space-time in the presence of the 'level set condition'. Finally, we have provided a proof of the conjecture under a different assumption on the cosmological space-time. To this end, we first prove some results that do not require the timelike convergence condition and that help us to state our proofs.

  10. Children's Attitudes toward Race and Gender.

    ERIC Educational Resources Information Center

    Warner, Juliet L.

    An implicit assumption in the majority of literature looking at development of prejudice in children is that race prejudice and sex prejudice are equivalent across groups; that is, sex bias is not conditional on race, and likewise race bias is not conditional on sex bias of the child. However, Warner, Fishbein, Ritchey and Case (2001) found strong…

  11. VESL for Heating and Air Conditioning: A Competency-based Curriculum Guide. Project OSCAER.

    ERIC Educational Resources Information Center

    Lopez-Valadez, Jeanne, Ed.; Pankratz, David, Ed.

    This guide is intended for vocational educators developing the vocational English as a second language (VESL) component of a course in heating and air conditioning. The introductory section examines assumptions about second language learning and instruction and VESL classes, local adaptations of the curriculum, and sample VESL lessons. The chapter…

  12. Mapped Plot Patch Size Estimates

    Treesearch

    Paul C. Van Deusen

    2005-01-01

    This paper demonstrates that the mapped plot design is relatively easy to analyze and describes existing formulas for mean and variance estimators. New methods are developed for using mapped plots to estimate average patch size of condition classes. The patch size estimators require assumptions about the shape of the condition class, limiting their utility. They may...

  13. On the assumption of vanishing temperature fluctuations at the wall for heat transfer modeling

    NASA Technical Reports Server (NTRS)

    Sommer, T. P.; So, R. M. C.; Zhang, H. S.

    1993-01-01

    Boundary conditions for fluctuating wall temperature are required for near-wall heat transfer modeling. However, their correct specifications for arbitrary thermal boundary conditions are not clear. The conventional approach is to assume zero fluctuating wall temperature or zero gradient for the temperature variance at the wall. These are idealized specifications, and the latter condition could lead to an ill-posed problem for fully-developed pipe and channel flows. In this paper, the validity and extent of the zero fluctuating wall temperature condition for heat transfer calculations are examined. The approach taken is to assume a Taylor expansion in the wall-normal coordinate for the fluctuating temperature that is general enough to account for both zero and non-zero values at the wall. Turbulent conductivity is calculated from the temperature variance and its dissipation rate. Heat transfer calculations assuming both zero and non-zero fluctuating wall temperature reveal that the zero fluctuating wall temperature assumption is in general valid. The effects of non-zero fluctuating wall temperature are limited to a very small region near the wall.
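
    In illustrative notation (not necessarily the paper's), such an expansion of the fluctuating temperature θ' in the wall-normal coordinate y, with wall value θ'_w, might be written as

        \theta'(y) = \theta'_w + a_1\, y + a_2\, y^2 + \cdots

    so that setting \theta'_w = 0 recovers the conventional zero-fluctuation boundary condition, while retaining a non-zero \theta'_w lets the same model accommodate other thermal boundary conditions.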

  14. On the existence and stability of liquid water on the surface of mars today.

    PubMed

    Kuznetz, L H; Gan, D C

    2002-01-01

    The recent discovery of high concentrations of hydrogen just below the surface of Mars' polar regions by Mars Odyssey has enlivened the debate about past or present life on Mars. The prevailing assumption prior to the discovery was that the liquid water essential for its existence is absent. That assumption was based largely on the calculation of heat and mass transfer coefficients or theoretical climate models. This research uses an experimental approach to determine the feasibility of liquid water under martian conditions, setting the stage for a more empirical approach to the question of life on Mars. Experiments were conducted in three parts: Liquid water's existence was confirmed by droplets observed under martian conditions in part 1; the evolution of frost melting on the surface of various rocks under martian conditions was observed in part 2; and the evaporation rate of water in Petri dishes under Mars-like conditions was determined and compared with the theoretical predictions of various investigators in part 3. The results led to the conclusion that liquid water can be stable for extended periods of time on the martian surface under present-day conditions.

  15. On the existence and stability of liquid water on the surface of mars today

    NASA Technical Reports Server (NTRS)

    Kuznetz, L. H.; Gan, D. C.

    2002-01-01

    The recent discovery of high concentrations of hydrogen just below the surface of Mars' polar regions by Mars Odyssey has enlivened the debate about past or present life on Mars. The prevailing assumption prior to the discovery was that the liquid water essential for its existence is absent. That assumption was based largely on the calculation of heat and mass transfer coefficients or theoretical climate models. This research uses an experimental approach to determine the feasibility of liquid water under martian conditions, setting the stage for a more empirical approach to the question of life on Mars. Experiments were conducted in three parts: Liquid water's existence was confirmed by droplets observed under martian conditions in part 1; the evolution of frost melting on the surface of various rocks under martian conditions was observed in part 2; and the evaporation rate of water in Petri dishes under Mars-like conditions was determined and compared with the theoretical predictions of various investigators in part 3. The results led to the conclusion that liquid water can be stable for extended periods of time on the martian surface under present-day conditions.

  16. To predict the niche, model colonization and extinction

    USGS Publications Warehouse

    Yackulic, Charles B.; Nichols, James D.; Reid, Janice; Der, Ricky

    2015-01-01

    Ecologists frequently try to predict the future geographic distributions of species. Most studies assume that the current distribution of a species reflects its environmental requirements (i.e., the species' niche). However, the current distributions of many species are unlikely to be at equilibrium with the current distribution of environmental conditions, both because of ongoing invasions and because the distribution of suitable environmental conditions is always changing. This mismatch between the equilibrium assumptions inherent in many analyses and the disequilibrium conditions in the real world leads to inaccurate predictions of species' geographic distributions and suggests the need for theory and analytical tools that avoid equilibrium assumptions. Here, we develop a general theory of environmental associations during periods of transient dynamics. We show that time-invariant relationships between environmental conditions and rates of local colonization and extinction can produce substantial temporal variation in occupancy–environment relationships. We then estimate occupancy–environment relationships during three avian invasions. Changes in occupancy–environment relationships over time differ among species but are predicted by dynamic occupancy models. Since estimates of the occupancy–environment relationships themselves are frequently poor predictors of future occupancy patterns, research should increasingly focus on characterizing how rates of local colonization and extinction vary with environmental conditions.
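
    The core mechanism can be sketched with the standard dynamic occupancy recursion ψ_{t+1} = ψ_t(1 - ε) + (1 - ψ_t)γ, in which colonization γ and extinction ε are time-invariant functions of an environmental covariate. In the minimal Python sketch below (all coefficients and the covariate are invented), the apparent occupancy-environment relationship changes over the course of a simulated invasion even though γ(x) and ε(x) never change.

        import numpy as np

        def logistic(z):
            return 1.0 / (1.0 + np.exp(-z))

        # Environmental covariate for a set of sites (standardized).
        x = np.linspace(-2, 2, 200)

        # Time-invariant colonization and extinction probabilities as functions of x
        # (coefficients are made up for illustration).
        gamma = logistic(-2.0 + 1.5 * x)   # colonization increases with x
        eps   = logistic(-1.0 - 1.0 * x)   # extinction decreases with x

        # Start an "invasion": almost no sites occupied, regardless of environment.
        psi = np.full_like(x, 0.01)

        for t in range(1, 16):
            psi = psi * (1.0 - eps) + (1.0 - psi) * gamma
            if t in (1, 5, 15):
                # Crude summary of the apparent occupancy-environment relationship:
                # the slope of a linear fit of psi on x keeps changing as the system
                # relaxes toward equilibrium, although gamma(x) and eps(x) are fixed.
                slope = np.polyfit(x, psi, 1)[0]
                print(f"year {t:2d}: mean occupancy {psi.mean():.2f}, slope vs x {slope:.3f}")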

  17. Initial condition of stochastic self-assembly

    NASA Astrophysics Data System (ADS)

    Davis, Jason K.; Sindi, Suzanne S.

    2016-02-01

    The formation of a stable protein aggregate is regarded as the rate limiting step in the establishment of prion diseases. In these systems, once aggregates reach a critical size the growth process accelerates and thus the waiting time until the appearance of the first critically sized aggregate is a key determinant of disease onset. In addition to prion diseases, aggregation and nucleation are central steps in many physical, chemical, and biological processes. Previous studies have examined the first-arrival time at a critical nucleus size during homogeneous self-assembly under the assumption that at time t = 0 the system was in the all-monomer state. However, in order to compare to in vivo biological experiments where protein constituents inherited by a newly born cell likely contain intermediate aggregates, other possibilities must be considered. We consider one such possibility by conditioning the unique ergodic size distribution on subcritical aggregate sizes; this least-informed distribution is then used as an initial condition. We make the claim that this initial condition carries fewer assumptions than an all-monomer one and verify that it can yield significantly different averaged waiting times relative to the all-monomer condition under various models of assembly.
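
    As a toy stand-in for this comparison (not the assembly models analyzed in the study; all rates and sizes are invented), the Python sketch below sets up a small birth-death chain on aggregate sizes with an absorbing critical size, computes mean waiting times to reach that size by solving the linear hitting-time system, and compares an all-monomer start with a start drawn from the chain's stationary distribution restricted to subcritical sizes.

        import numpy as np

        N = 30                      # critical aggregate size (absorbing)
        sizes = np.arange(1, N)     # subcritical sizes 1 .. N-1 (transient states)

        # Toy per-step birth-death probabilities: growth slightly favored for larger aggregates.
        p_up   = 0.30 + 0.010 * sizes                              # gain a subunit
        p_down = np.where(sizes > 1, 0.35 - 0.005 * sizes, 0.0)    # lose one (size 1 cannot shrink)

        # Substochastic transition matrix Q among transient (subcritical) states.
        n = len(sizes)
        Q = np.zeros((n, n))
        for k in range(n):
            if k + 1 < n:
                Q[k, k + 1] = p_up[k]
            if k - 1 >= 0:
                Q[k, k - 1] = p_down[k]
            Q[k, k] = 1.0 - p_up[k] - p_down[k]
        # (the probability p_up[-1] missing from the last row leaks into the absorbing size N)

        # Mean hitting times of the critical size: (I - Q) h = 1.
        h = np.linalg.solve(np.eye(n) - Q, np.ones(n))

        # Ergodic distribution of the same dynamics restricted to subcritical sizes
        # (reflecting at N-1), used here as the "least-informed" initial condition.
        R = Q.copy()
        R[-1, -1] += p_up[-1]                  # reflect instead of absorb at the top
        evals, evecs = np.linalg.eig(R.T)
        pi = np.real(evecs[:, np.argmax(np.real(evals))])
        pi = np.abs(pi) / np.abs(pi).sum()

        print("mean waiting time, all-monomer start :", h[0])
        print("mean waiting time, conditioned start :", pi @ h)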

  18. Efficiency at maximum power output of linear irreversible Carnot-like heat engines.

    PubMed

    Wang, Yang; Tu, Z C

    2012-01-01

    The efficiency at maximum power output of linear irreversible Carnot-like heat engines is investigated based on the assumption that the rate of irreversible entropy production of the working substance in each "isothermal" process is a quadratic form of the heat exchange rate between the working substance and the reservoir. It is found that the maximum power output corresponds to minimizing the irreversible entropy production in the two isothermal processes of the Carnot-like cycle, and that the efficiency at maximum power output has the form η_mP = η_C/(2-γη_C), where η_C is the Carnot efficiency, while γ depends on the heat transfer coefficients between the working substance and the two reservoirs. The value of η_mP is bounded between η_- ≡ η_C/2 and η_+ ≡ η_C/(2-η_C). These results are consistent with those obtained by Chen and Yan [J. Chem. Phys. 90, 3740 (1989)] based on the endoreversible assumption, those obtained by Esposito et al. [Phys. Rev. Lett. 105, 150603 (2010)] based on the low-dissipation assumption, and those obtained by Schmiedl and Seifert [Europhys. Lett. 81, 20003 (2008)] for stochastic heat engines which in fact also satisfy the low-dissipation assumption. Additionally, we find that the endoreversible assumption happens to hold for Carnot-like heat engines operating at the maximum power output based on our fundamental assumption, and that the Carnot-like heat engines that we focused on do not strictly satisfy the low-dissipation assumption, which implies that the low-dissipation assumption or our fundamental assumption is a sufficient but non-necessary condition for the validity of η_mP = η_C/(2-γη_C) as well as the existence of the two bounds, η_- ≡ η_C/2 and η_+ ≡ η_C/(2-η_C).
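
    A quick numerical check of the quoted bounds, with an assumed Carnot efficiency and γ swept over [0, 1] (the range implied by the two limiting cases), takes only a few lines of Python:

        import numpy as np

        def eta_mp(eta_c, gamma):
            # Efficiency at maximum power, eta_mP = eta_C / (2 - gamma * eta_C).
            return eta_c / (2.0 - gamma * eta_c)

        eta_c = 0.6                                   # an assumed Carnot efficiency
        gammas = np.linspace(0.0, 1.0, 5)             # gamma = 0 and 1 give the two bounds
        lower, upper = eta_c / 2.0, eta_c / (2.0 - eta_c)

        for g in gammas:
            e = eta_mp(eta_c, g)
            print(f"gamma = {g:.2f}: eta_mP = {e:.4f} "
                  f"(within [{lower:.4f}, {upper:.4f}]: {lower <= e <= upper})")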

  19. Efficiency at maximum power output of linear irreversible Carnot-like heat engines

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Tu, Z. C.

    2012-01-01

    The efficiency at maximum power output of linear irreversible Carnot-like heat engines is investigated based on the assumption that the rate of irreversible entropy production of the working substance in each “isothermal” process is a quadratic form of the heat exchange rate between the working substance and the reservoir. It is found that the maximum power output corresponds to minimizing the irreversible entropy production in the two isothermal processes of the Carnot-like cycle, and that the efficiency at maximum power output has the form η_mP = η_C/(2-γη_C), where η_C is the Carnot efficiency, while γ depends on the heat transfer coefficients between the working substance and the two reservoirs. The value of η_mP is bounded between η_- ≡ η_C/2 and η_+ ≡ η_C/(2-η_C). These results are consistent with those obtained by Chen and Yan [J. Chem. Phys. 90, 3740 (1989)] based on the endoreversible assumption, those obtained by Esposito et al. [Phys. Rev. Lett. 105, 150603 (2010)] based on the low-dissipation assumption, and those obtained by Schmiedl and Seifert [Europhys. Lett. 81, 20003 (2008)] for stochastic heat engines which in fact also satisfy the low-dissipation assumption. Additionally, we find that the endoreversible assumption happens to hold for Carnot-like heat engines operating at the maximum power output based on our fundamental assumption, and that the Carnot-like heat engines that we focused on do not strictly satisfy the low-dissipation assumption, which implies that the low-dissipation assumption or our fundamental assumption is a sufficient but non-necessary condition for the validity of η_mP = η_C/(2-γη_C) as well as the existence of the two bounds, η_- ≡ η_C/2 and η_+ ≡ η_C/(2-η_C).

  20. Discrete Neural Signatures of Basic Emotions.

    PubMed

    Saarimäki, Heini; Gotsopoulos, Athanasios; Jääskeläinen, Iiro P; Lampinen, Jouko; Vuilleumier, Patrik; Hari, Riitta; Sams, Mikko; Nummenmaa, Lauri

    2016-06-01

    Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience.
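
    The general shape of such a cross-validated pattern classification can be sketched in a few lines of Python with scikit-learn. This is a generic stand-in, not the authors' pipeline: the data are random arrays with a weak injected label signal, and all sizes and parameters are invented.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        n_trials, n_voxels, n_emotions = 180, 500, 6

        # Random stand-ins for trial-wise activity patterns and emotion labels.
        labels = np.repeat(np.arange(n_emotions), n_trials // n_emotions)
        patterns = rng.normal(size=(n_trials, n_voxels))
        # Inject a weak label-dependent signal so classification is above chance.
        patterns += 0.3 * np.eye(n_emotions)[labels] @ rng.normal(size=(n_emotions, n_voxels))

        clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
        scores = cross_val_score(clf, patterns, labels, cv=5)
        print(f"cross-validated accuracy: {scores.mean():.2f} (chance = {1/n_emotions:.2f})")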

  1. Modeling enzymatic hydrolysis of lignocellulosic substrates using confocal fluorescence microscopy I: filter paper cellulose.

    PubMed

    Luterbacher, Jeremy S; Moran-Mirabal, Jose M; Burkholder, Eric W; Walker, Larry P

    2015-01-01

    Enzymatic hydrolysis is one of the critical steps in depolymerizing lignocellulosic biomass into fermentable sugars for further upgrading into fuels and/or chemicals. However, many studies still rely on empirical trends to optimize enzymatic reactions. An improved understanding of enzymatic hydrolysis could allow research efforts to follow a rational design guided by an appropriate theoretical framework. In this study, we present a method to image cellulosic substrates with complex three-dimensional structure, such as filter paper, undergoing hydrolysis under conditions relevant to industrial saccharification processes (i.e., temperature of 50°C, using commercial cellulolytic cocktails). Fluorescence intensities resulting from confocal images were used to estimate parameters for a diffusion and reaction model. Furthermore, the observation of a relatively constant bound enzyme fluorescence signal throughout hydrolysis supported our modeling assumption regarding the structure of biomass during hydrolysis. The observed behavior suggests that pore evolution can be modeled as widening of infinitely long slits. The resulting model accurately predicts the concentrations of soluble carbohydrates obtained from independent saccharification experiments conducted in bulk, demonstrating its relevance to biomass conversion work.

  2. Warped product space-times

    NASA Astrophysics Data System (ADS)

    An, Xinliang; Wong, Willie Wai Yeung

    2018-01-01

    Many classical results in relativity theory concerning spherically symmetric space-times have easy generalizations to warped product space-times, with a two-dimensional Lorentzian base and arbitrary dimensional Riemannian fibers. We first give a systematic presentation of the main geometric constructions, with emphasis on the Kodama vector field and the Hawking energy; the construction is signature independent. This leads to proofs of general Birkhoff-type theorems for warped product manifolds; our theorems in particular apply to situations where the warped product manifold is not necessarily Einstein, and thus can be applied to solutions with matter content in general relativity. Next we specialize to the Lorentzian case and study the propagation of null expansions under the assumption of the dominant energy condition. We prove several non-existence results relating to the Yamabe class of the fibers, in the spirit of the black-hole topology theorem of Hawking–Galloway–Schoen. Finally we discuss the effect of the warped product ansatz on matter models. In particular we construct several cosmological solutions to the Einstein–Euler equations whose spatial geometry is generally not isotropic.
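
    For orientation, the warped product ansatz referred to here is the standard one: with a two-dimensional Lorentzian base metric, a Riemannian fiber metric, and a warping function r defined on the base, the space-time metric takes the form (illustrative notation, not necessarily the authors')

        ds^2 = g_{ab}(x)\,dx^a\,dx^b + r(x)^2\,\gamma_{AB}(y)\,dy^A\,dy^B ,

    which reduces to the familiar spherically symmetric case when the fiber is a round two-sphere.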

  3. On the validity of within-nuclear-family genetic association analysis in samples of extended families.

    PubMed

    Bureau, Alexandre; Duchesne, Thierry

    2015-12-01

    Splitting extended families into their component nuclear families to apply a genetic association method designed for nuclear families is a widespread practice in familial genetic studies. Dependence among genotypes and phenotypes of nuclear families from the same extended family arises because of genetic linkage of the tested marker with a risk variant or because of familial specificity of genetic effects due to gene-environment interaction. This raises concerns about the validity of inference conducted under the assumption of independence of the nuclear families. We indeed prove theoretically that, in a conditional logistic regression analysis applicable to disease cases and their genotyped parents, the naive model-based estimator of the variance of the coefficient estimates underestimates the true variance. However, simulations with realistic effect sizes of risk variants and variation of this effect from family to family reveal that the underestimation is negligible. The simulations also show the greater efficiency of the model-based variance estimator compared to a robust empirical estimator. Our recommendation is therefore to use the model-based estimator of variance for inference on effects of genetic variants.

  4. von Kármán–Howarth and Corrsin equations closure based on Lagrangian description of the fluid motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Divitiis, Nicola de, E-mail: n.dedivitiis@gmail.com

    A new approach to obtain the closure formulas for the von Kármán–Howarth and Corrsin equations is presented, which is based on the Lagrangian representation of the fluid motion, and on the Liouville theorem associated with the kinematics of a pair of fluid particles. This kinematics is characterized by the finite scale separation vector, which is assumed to be statistically independent from the velocity field. Such an assumption is justified by the hypothesis of fully developed turbulence and by the property that this vector varies much more rapidly than the velocity field. This formulation leads to the closure formulas of the von Kármán–Howarth and Corrsin equations in terms of longitudinal velocity and temperature correlations, following a demonstration completely different from those of previous works. Some of the properties and the limitations of the closed equations are discussed. In particular, we show that the times of evolution of the developed kinetic energy and temperature spectra are finite quantities which depend on the initial conditions.

  5. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    PubMed

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to the high stakes in the clinical setting, it is critical to calculate the effect of these assumptions in the CFD simulation results. However, existing CFD validation approaches do not quantify error in the simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software star-ccm+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the star-ccm+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset, and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
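
    A generic way to parse a model error out of a validation comparison, in the spirit of standard verification-and-validation practice, is to pool the independent numerical, input, and experimental uncertainties and bracket the model error around the simulation-minus-experiment comparison error. This is a hedged sketch of that general approach, not necessarily the authors' exact formulation; the function name and all numbers are invented.

        import math

        def model_error_interval(sim, exp, u_num, u_input, u_exp, k=2.0):
            # Comparison error E = sim - exp, and a validation uncertainty that
            # pools the independent error sources (numerical, input, experimental).
            # The unknown model error is then bracketed by E +/- k * u_val.
            E = sim - exp
            u_val = math.sqrt(u_num**2 + u_input**2 + u_exp**2)
            return E - k * u_val, E + k * u_val

        # Illustrative numbers (velocity magnitudes in m/s along a validation line).
        lo, hi = model_error_interval(sim=0.265, exp=0.251,
                                      u_num=0.004, u_input=0.006, u_exp=0.008, k=2.0)
        print(f"estimated model error between {lo:+.3f} and {hi:+.3f} m/s")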

  6. Modeling frictional melt injection to constrain coseismic physical conditions

    NASA Astrophysics Data System (ADS)

    Sawyer, William J.; Resor, Phillip G.

    2017-07-01

    Pseudotachylyte, a fault rock formed through coseismic frictional melting, provides an important record of coseismic mechanics. In particular, injection veins formed at a high angle to the fault surface have been used to estimate rupture directivity, velocity, pulse length, stress drop, as well as slip weakening distance and wall rock stiffness. These studies have generally treated injection vein formation as a purely elastic process and have assumed that processes of melt generation, transport, and solidification have little influence on the final vein geometry. Using a pressurized crack model, an analytical approximation of injection vein formation based on dike intrusion, we find that the timescales of quenching and flow propagation may be similar for a subset of injection veins compiled from the Asbestos Mountain Fault, USA, Gole Larghe Fault Zone, Italy, and the Fort Foster Brittle Zone, USA under minimum melt temperature conditions. 34% of the veins are found to be flow limited, with a final geometry that may reflect cooling of the vein before it reaches an elastic equilibrium with the wall rock. Formation of these veins is a dynamic process whose behavior is not fully captured by the analytical approach. To assess the applicability of simplifying assumptions of the pressurized crack we employ a time-dependent finite-element model of injection vein formation that couples elastic deformation of the wall rock with the fluid dynamics and heat transfer of the frictional melt. This finite element model reveals that two basic assumptions of the pressurized crack model, self-similar growth and a uniform pressure gradient, are false. The pressurized crack model thus underestimates flow propagation time by 2-3 orders of magnitude. Flow limiting may therefore occur under a wider range of conditions than previously thought. Flow-limited veins may be recognizable in the field where veins have tapered profiles or smaller aspect ratios than expected. The occurrence and shape of injection veins can be coupled with modeling to provide an independent estimate of minimum melt temperature. Finally, the large aspect ratio observed for all three populations of injection veins may be best explained by a large reduction in stiffness associated with coseismic damage, as injection vein growth is likely to far exceed the lifetime of dynamic stresses at any location along a fault.

  7. General solutions for the oxidation kinetics of polymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillen, K.T.; Clough, R.L.; Wise, J.

    1996-08-01

    The simplest general kinetic schemes applicable to the oxidation of polymers are presented, discussed and analyzed in terms of the underlying kinetic assumptions. For the classic basic autoxidation scheme (BAS), which involves three bimolecular termination steps and is applicable mainly to unstabilized polymers, typical assumptions used singly or in groups include (1) long kinetic chain length, (2) a specific ratio of the termination rate constants and (3) insensitivity to the oxygen concentration (e.g., domination by a single termination step). Steady-state solutions for the rate of oxidation are given in terms of one, two, three, or four parameters, corresponding respectively to three, two, one, or zero kinetic assumptions. The recently derived four-parameter solution predicts conditions yielding unusual dependencies of the oxidation rate on oxygen concentration and on initiation rate, as well as conditions leading to some unusual diffusion-limited oxidation profile shapes. For stabilized polymers, unimolecular termination schemes are typically more appropriate than bimolecular. Kinetics incorporating unimolecular termination reactions are shown to result in very simple oxidation expressions which have been experimentally verified for both radiation-initiated oxidation of an EPDM and thermoxidative degradation of nitrile and chloroprene elastomers.

  8. Identifying Autocorrelation Generated by Various Error Processes in Interrupted Time-Series Regression Designs: A Comparison of AR1 and Portmanteau Tests

    ERIC Educational Resources Information Center

    Huitema, Bradley E.; McKean, Joseph W.

    2007-01-01

    Regression models used in the analysis of interrupted time-series designs assume statistically independent errors. Four methods of evaluating this assumption are the Durbin-Watson (D-W), Huitema-McKean (H-M), Box-Pierce (B-P), and Ljung-Box (L-B) tests. These tests were compared with respect to Type I error and power under a wide variety of error…
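
    Three of the four tests compared in this study are directly available in Python's statsmodels, which makes a small worked example easy to set up. The simulated interrupted time series below (AR(1) errors with ρ = 0.5; all other numbers invented) is fit by OLS, and its residuals are screened with the Durbin-Watson statistic and the Ljung-Box/Box-Pierce portmanteau tests.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.stattools import durbin_watson
        from statsmodels.stats.diagnostic import acorr_ljungbox

        rng = np.random.default_rng(0)
        n = 120
        t = np.arange(n)
        phase = (t >= 60).astype(float)            # interruption at time 60

        # Simulated interrupted time series with AR(1) errors (rho = 0.5).
        errors = np.zeros(n)
        for i in range(1, n):
            errors[i] = 0.5 * errors[i - 1] + rng.normal()
        y = 10 + 0.05 * t + 3 * phase + errors

        X = sm.add_constant(np.column_stack([t, phase]))
        resid = sm.OLS(y, X).fit().resid

        print("Durbin-Watson:", durbin_watson(resid))          # ~2 means no lag-1 autocorrelation
        print(acorr_ljungbox(resid, lags=[5], boxpierce=True)) # Ljung-Box and Box-Pierce tests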

  9. Approximations of Two-Attribute Utility Functions

    DTIC Science & Technology

    1976-09-01

    preferred to") be a bina-zy relation on the set • of simple probability measures or ’gambles’ defined on a set T of consequences. Throughout this study it...simplifying independence assumptions. Although there are several approaches to this problem, the21 present study will focus on approximations of u... study will elicit additional interest in the topic. 2. REMARKS ON APPROXIMATION THEORY This section outlines a few basic ideas of approximation theory

  10. Comments on Professor Lortie's Paper Entitled "The Cracked Cake of Educational Custom and Emerging Issues in Evaluation." Center for the Study of Evaluation of Instructional Programs Occasional Report No. 21.

    ERIC Educational Resources Information Center

    Gage, N. L.

    This commentary takes no serious exception to Professor Lortie's conclusions about educational changes and issues in evaluation, but it examines certain assumptions. It questions the implications that each school district needs independent evaluation and that evaluation should be applied at the end of a given educational program, and asks what…

  11. A Comparison of Open Space and Traditional Classroom Structures According to Independence Measures in Children, Teachers' Awareness of Children's Personality Variables, and Children's Academic Progress. Final Report.

    ERIC Educational Resources Information Center

    Grapko, Michael F.

    The project attempted to discern the possible resulting effects on children and teachers from major structural changes in the design of classroom space. Three assumptions were tested: (1) children will assume greater initiative in resource use, (2) teachers get to know the children better, and (3) team teaching in open space classrooms contributes…

  12. SOA formation by biogenic and carbonyl compounds: data evaluation and application.

    PubMed

    Ervens, Barbara; Kreidenweis, Sonia M

    2007-06-01

    The organic fraction of atmospheric aerosols affects the physical and chemical properties of the particles and their role in the climate system. Current models greatly underpredict secondary organic aerosol (SOA) mass. Based on a compilation of literature studies that address SOA formation, we discuss different parameters that affect the SOA formation efficiency of biogenic compounds (alpha-pinene, isoprene) and aliphatic aldehydes (glyoxal, hexanal, octanal, hexadienal). Applying a simple model, we find that the estimated SOA mass after one week of aerosol processing under typical atmospheric conditions is increased by a few microg m(-3) (low NO(x) conditions). Acid-catalyzed reactions can create > 50% more SOA mass than processes under neutral conditions; however, other parameters such as the concentration ratio of organics/NO(x), relative humidity, and absorbing mass are more significant. The assumption of irreversible SOA formation not limited by equilibrium in the particle phase or by depletion of the precursor leads to unrealistically high SOA masses for some of the assumptions we made (surface vs volume controlled processes).

  13. Facial Identity and Self-Perception: An Examination of Psychosocial Outcomes in Cosmetic Surgery Patients.

    PubMed

    Slavin, Benjamin; Beer, Jacob

    2017-06-01

    The psychosocial health of patients undergoing cosmetic procedures has often been linked to a host of pre-existing conditions, including the type of procedure being performed. Age, gender, and the psychological state of the patients also contribute to the perceived outcome. Specifically, the presence or absence of Body Dysmorphic Disorder (BDD) has been identified as an independent marker for unhappiness following cosmetic procedures.1 However, no study has, to our knowledge, identified a more precise indicator that is associated with higher rates of patient dissatisfaction from cosmetic procedures. This review identifies facial identity and self-perception as potential identifiers of future patient dissatisfaction with cosmetic procedures. Specifically, we believe that patients with a realistic facial identity and self-perception are more likely to be satisfied than those whose self-perceptions are distorted. Patients undergoing restorative procedures, including blepharoplasty, rhytidectomy, and liposuction, are more likely to have an increased outcome favorability rating than those undergoing type change procedures, such as rhinoplasty and breast augmentation. Age, which typically is an independent variable for satisfaction, tends to be associated with increased favorability ratings following cosmetic procedures. Female gender is a second variable associated with higher satisfaction. The authors believe that negative facial identity and self-perception are risk factors for patient dissatisfaction with cosmetic procedural outcomes. Based on this assumption, clinicians may want to focus on the face as a particular area of psychosocial concern.

    J Drugs Dermatol. 2017;16(6):617-620.

  14. Causal analysis of ordinal treatments and binary outcomes under truncation by death.

    PubMed

    Wang, Linbo; Richardson, Thomas S; Zhou, Xiao-Hua

    2017-06-01

    It is common that in multi-arm randomized trials, the outcome of interest is "truncated by death," meaning that it is only observed or well-defined conditioning on an intermediate outcome. In this case, in addition to pairwise contrasts, the joint inference for all treatment arms is also of interest. Under a monotonicity assumption we present methods for both pairwise and joint causal analyses of ordinal treatments and binary outcomes in presence of truncation by death. We illustrate via examples the appropriateness of our assumptions in different scientific contexts.

  15. Lake Metabolism: Comparison of Lake Metabolic Rates Estimated from a Diel CO2- and the Common Diel O2-Technique.

    PubMed

    Peeters, Frank; Atamanchuk, Dariia; Tengberg, Anders; Encinas-Fernández, Jorge; Hofmann, Hilmar

    2016-01-01

    Lake metabolism is a key factor for the understanding of turnover of energy and of organic and inorganic matter in lake ecosystems. Long-term time series on metabolic rates are commonly estimated from diel changes in dissolved oxygen. Here we present long-term data on metabolic rates based on diel changes in total dissolved inorganic carbon (DIC) utilizing an open-water diel CO2-technique. Metabolic rates estimated with this technique and the traditional diel O2-technique agree well in alkaline Lake Illmensee (pH of ~8.5), although the diel changes in molar CO2 concentrations are much smaller than those of the molar O2 concentrations. The open-water diel CO2- and diel O2-techniques provide independent measures of lake metabolic rates that differ in their sensitivity to transport processes. Hence, the combination of both techniques can help to constrain uncertainties arising from assumptions on vertical fluxes due to gas exchange and turbulent diffusion. This is particularly important for estimates of lake respiration rates because these are much more sensitive to assumptions on gradients in vertical fluxes of O2 or DIC than estimates of lake gross primary production. Our data suggest that it can be advantageous to estimate respiration rates assuming negligible gradients in vertical fluxes rather than including gas exchange with the atmosphere but neglecting vertical mixing in the water column. During two months in summer the average lake net production was close to zero suggesting at most slightly autotrophic conditions. However, the lake emitted O2 and CO2 during the entire time period suggesting that O2 and CO2 emissions from lakes can be decoupled from the metabolism in the near surface layer.
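
    As a minimal illustration of the open-water diel technique discussed here (under the negligible-flux simplification the abstract itself mentions, and with all numbers invented), the Python sketch below generates a toy hourly O2 record from known rates and then recovers daily respiration, net production, and gross production from the diel changes; the diel CO2/DIC variant proceeds analogously with the signs reversed.

        import numpy as np

        # Toy hourly O2 record (mmol m^-3), generated from known rates so that the
        # diel technique can be checked: respiration 1.0 all day, gross production
        # 2.5 during the 14 daylight hours; gas exchange and mixing neglected.
        hours = np.arange(25)
        daylight = (hours[:-1] >= 6) & (hours[:-1] < 20)
        true_R, true_GPP_rate = 1.0, 2.5
        dO2 = np.where(daylight, true_GPP_rate - true_R, -true_R)
        o2 = 300.0 + np.concatenate([[0.0], np.cumsum(dO2)])

        # Diel O2-technique: the nighttime slope gives R, the 24-h change gives NEP,
        # and GPP follows as NEP + R integrated over the day.
        night_slope = np.mean(np.diff(o2)[~daylight])         # mmol m^-3 h^-1
        R_est = -night_slope * 24                              # daily respiration
        NEP_est = o2[-1] - o2[0]                               # daily net production
        GPP_est = NEP_est + R_est

        print(f"R = {R_est:.1f}, NEP = {NEP_est:.1f}, GPP = {GPP_est:.1f}  (mmol m^-3 d^-1)")
        # true daily values: R = 24.0, NEP = 11.0, GPP = 35.0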

  16. The impact of clustering of extreme European windstorm events on (re)insurance market portfolios

    NASA Astrophysics Data System (ADS)

    Mitchell-Wallace, Kirsten; Alvarez-Diaz, Teresa

    2010-05-01

    Traditionally the occurrence of windstorm loss events in Europe has been considered as independent. However, a number of significant losses close in space and time indicate that this assumption may need to be revised. Under particular atmospheric conditions multiple loss-causing cyclones can occur in succession, affecting similar geographic regions and, therefore, insurance markets. A notable example is that of Lothar and Martin in France in December 1999. Although the existence of cyclone families is well known to meteorologists, there has been limited research into the occurrence of serial windstorms. However, climate modelling research is now providing the ability to explore the physical drivers of clustering, and to improve understanding of the hazard aspect of catastrophe modelling. While analytics tools, including catastrophe models, may incorporate assumptions regarding the influence of dependency through statistical means, the most recent research outputs provide a new strand of information with the potential to re-assess the probabilistic loss potential in light of clustering and to provide an additional view on probable maximum losses to windstorm-exposed portfolios across regions such as Northwest Europe. There is, however, a need to test these new techniques within operational (re)insurance applications. This paper provides an overview of the most current clustering research, including the 2009 paper by Vitolo et al., in relation to reinsurance risk modelling, and assesses the potential impact of such additional information on the overall risk assessment process. We examine the consequences of the serial clustering of extra-tropical cyclones demonstrated by Vitolo et al. (2009) from the perspective of a large European reinsurer, examining potential implications for pricing, accumulation, and capital adequacy.

  17. Characterization of Artifacts Introduced by the Empirical Volcano-Scan Atmospheric Correction Commonly Applied to CRISM and OMEGA Near-Infrared Spectra

    NASA Technical Reports Server (NTRS)

    Wiseman, S.M.; Arvidson, R.E.; Wolff, M. J.; Smith, M. D.; Seelos, F. P.; Morgan, F.; Murchie, S. L.; Mustard, J. F.; Morris, R. V.; Humm, D.

    2014-01-01

    The empirical volcano-scan atmospheric correction is widely applied to Martian near infrared CRISM and OMEGA spectra between 1000 and 2600 nanometers to remove prominent atmospheric gas absorptions with minimal computational investment. This correction method employs division by a scaled empirically-derived atmospheric transmission spectrum that is generated from observations of the Martian surface in which different path lengths through the atmosphere were measured and transmission calculated using the Beer-Lambert Law. Identifying and characterizing both artifacts and residual atmospheric features left by the volcano-scan correction is important for robust interpretation of CRISM and OMEGA volcano scan corrected spectra. In order to identify and determine the cause of spectral artifacts introduced by the volcano-scan correction, we simulated this correction using a multiple scattering radiative transfer algorithm (DISORT). Simulated transmission spectra that are similar to actual CRISM- and OMEGA-derived transmission spectra were generated from modeled Olympus Mons base and summit spectra. Results from the simulations were used to investigate the validity of assumptions inherent in the volcano-scan correction and to identify artifacts introduced by this method of atmospheric correction. We found that the most prominent artifact, a bowl-shaped feature centered near 2000 nanometers, is caused by the inaccurate assumption that absorption coefficients of CO2 in the Martian atmosphere are independent of column density. In addition, spectral albedo and slope are modified by atmospheric aerosols. Residual atmospheric contributions that are caused by variable amounts of dust aerosols, ice aerosols, and water vapor are characterized by the analysis of CRISM volcano-scan corrected spectra from the same location acquired at different times under variable atmospheric conditions.
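
    A schematic form of the division-and-scaling step described above can be sketched in Python. This is a hedged illustration under stated assumptions, not the actual CRISM/OMEGA processing code: the function name is hypothetical, the anchor wavelengths near 1980 and 2007 nm are taken from the abstract's description of the ~2000 nm CO2 band, and the sketch assumes the scaling exponent is chosen by matching band depth at a single wavelength pair with the surface reflectance varying smoothly between them.

        import numpy as np

        def volcano_scan_correct(wavelengths, observed, transmission,
                                 band_nm=2007.0, shoulder_nm=1980.0):
            # Divide the observed spectrum by the empirical transmission spectrum
            # raised to a power beta, with beta chosen (via Beer-Lambert scaling) so
            # that the corrected spectrum shows no residual CO2 band at band_nm
            # relative to the nearby shoulder wavelength.  The same beta is applied
            # to the whole spectrum.
            i_band = int(np.argmin(np.abs(wavelengths - band_nm)))
            i_shld = int(np.argmin(np.abs(wavelengths - shoulder_nm)))
            beta = (np.log(observed[i_band] / observed[i_shld])
                    / np.log(transmission[i_band] / transmission[i_shld]))
            return observed / transmission**beta, beta

        # Tiny synthetic example: a flat surface spectrum seen through two airmasses.
        wl = np.linspace(1900.0, 2100.0, 201)
        band = np.exp(-0.5 * ((wl - 2007.0) / 15.0) ** 2)      # schematic CO2 band shape
        transmission = np.exp(-0.8 * band)                      # reference transmission
        observed = 0.25 * np.exp(-1.3 * band)                   # flat albedo, deeper band
        corrected, beta = volcano_scan_correct(wl, observed, transmission)
        i_b = int(np.argmin(np.abs(wl - 2007.0)))
        i_s = int(np.argmin(np.abs(wl - 1980.0)))
        print(f"beta = {beta:.3f}, residual band depth = {1 - corrected[i_b] / corrected[i_s]:.2e}")

    Because a single scaling exponent cannot capture the dependence of the CO2 absorption coefficients on column density, a correction of this form leaves residual curvature away from the anchor wavelengths, consistent with the bowl-shaped artifact described in the abstract.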

  18. Characterization of artifacts introduced by the empirical volcano-scan atmospheric correction commonly applied to CRISM and OMEGA near-infrared spectra

    NASA Astrophysics Data System (ADS)

    Wiseman, S. M.; Arvidson, R. E.; Wolff, M. J.; Smith, M. D.; Seelos, F. P.; Morgan, F.; Murchie, S. L.; Mustard, J. F.; Morris, R. V.; Humm, D.; McGuire, P. C.

    2016-05-01

    The empirical 'volcano-scan' atmospheric correction is widely applied to martian near infrared CRISM and OMEGA spectra between ∼1000 and ∼2600 nm to remove prominent atmospheric gas absorptions with minimal computational investment. This correction method employs division by a scaled empirically-derived atmospheric transmission spectrum that is generated from observations of the martian surface in which different path lengths through the atmosphere were measured and transmission calculated using the Beer-Lambert Law. Identifying and characterizing both artifacts and residual atmospheric features left by the volcano-scan correction is important for robust interpretation of CRISM and OMEGA volcano-scan corrected spectra. In order to identify and determine the cause of spectral artifacts introduced by the volcano-scan correction, we simulated this correction using a multiple scattering radiative transfer algorithm (DISORT). Simulated transmission spectra that are similar to actual CRISM- and OMEGA-derived transmission spectra were generated from modeled Olympus Mons base and summit spectra. Results from the simulations were used to investigate the validity of assumptions inherent in the volcano-scan correction and to identify artifacts introduced by this method of atmospheric correction. We found that the most prominent artifact, a bowl-shaped feature centered near 2000 nm, is caused by the inaccurate assumption that absorption coefficients of CO2 in the martian atmosphere are independent of column density. In addition, spectral albedo and slope are modified by atmospheric aerosols. Residual atmospheric contributions that are caused by variable amounts of dust aerosols, ice aerosols, and water vapor are characterized by the analysis of CRISM volcano-scan corrected spectra from the same location acquired at different times under variable atmospheric conditions.

  19. Lake Metabolism: Comparison of Lake Metabolic Rates Estimated from a Diel CO2- and the Common Diel O2-Technique

    PubMed Central

    Peeters, Frank; Atamanchuk, Dariia; Tengberg, Anders; Encinas-Fernández, Jorge; Hofmann, Hilmar

    2016-01-01

    Lake metabolism is a key factor for the understanding of turnover of energy and of organic and inorganic matter in lake ecosystems. Long-term time series on metabolic rates are commonly estimated from diel changes in dissolved oxygen. Here we present long-term data on metabolic rates based on diel changes in total dissolved inorganic carbon (DIC) utilizing an open-water diel CO2-technique. Metabolic rates estimated with this technique and the traditional diel O2-technique agree well in alkaline Lake Illmensee (pH of ~8.5), although the diel changes in molar CO2 concentrations are much smaller than those of the molar O2 concentrations. The open-water diel CO2- and diel O2-techniques provide independent measures of lake metabolic rates that differ in their sensitivity to transport processes. Hence, the combination of both techniques can help to constrain uncertainties arising from assumptions on vertical fluxes due to gas exchange and turbulent diffusion. This is particularly important for estimates of lake respiration rates because these are much more sensitive to assumptions on gradients in vertical fluxes of O2 or DIC than estimates of lake gross primary production. Our data suggest that it can be advantageous to estimate respiration rates assuming negligible gradients in vertical fluxes rather than including gas exchange with the atmosphere but neglecting vertical mixing in the water column. During two months in summer the average lake net production was close to zero suggesting at most slightly autotrophic conditions. However, the lake emitted O2 and CO2 during the entire time period suggesting that O2 and CO2 emissions from lakes can be decoupled from the metabolism in the near surface layer. PMID:28002477

  20. Wash load and bed-material load transport in the Yellow River

    USGS Publications Warehouse

    Yang, C.T.; Simoes, F.J.M.

    2005-01-01

    It has been the conventional assumption that wash load is supply limited and is only indirectly related to the hydraulics of a river. Hydraulic engineers also assumed that bed-material load concentration is independent of wash load concentration. This paper provides a detailed analysis of the Yellow River sediment transport data to determine whether the above assumptions are true and whether wash load concentration can be computed from the original unit stream power formula and the modified unit stream power formula for sediment-laden flows. A systematic and thorough analysis of 1,160 sets of data collected from 9 gauging stations along the Middle and Lower Yellow River confirmed that the method suggested by the conjunctive use of the two formulas can be used to compute wash load, bed-material load, and total load in the Yellow River accurately.
