Sample records for continuous dependent variables

  1. A Primer on Logistic Regression.

    ERIC Educational Resources Information Center

    Woldbeck, Tanya

    This paper introduces logistic regression as a viable alternative when the researcher is faced with variables that are not continuous. If one is to use simple regression, the dependent variable must be measured on a continuous scale. In the behavioral sciences, it may not always be appropriate or possible to have a measured dependent variable on a…
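The contrast the primer draws — a binary outcome modeled through a logit link rather than ordinary least squares — can be sketched in a few lines. This is an illustrative simulation, not from the paper; the coefficients (-1 and 2) and the plain gradient-ascent fitter are assumptions chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a binary outcome from a known logistic model:
# P(y = 1 | x) = 1 / (1 + exp(-(b0 + b1*x))), with b0 = -1, b1 = 2.
n = 5000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(-1.0 + 2.0 * x)))
y = rng.binomial(1, p)

# Fit by maximizing the Bernoulli log-likelihood with plain gradient ascent.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(5000):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (y - mu) / n   # gradient of the mean log-likelihood

print(beta)  # should be close to (-1, 2)
```

A continuous dependent variable would go to ordinary regression; here the 0/1 outcome forces the logit link.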

  2. How Robust Is Linear Regression with Dummy Variables?

    ERIC Educational Resources Information Center

    Blankmeyer, Eric

    2006-01-01

    Researchers in education and the social sciences make extensive use of linear regression models in which the dependent variable is continuous-valued while the explanatory variables are a combination of continuous-valued regressors and dummy variables. The dummies partition the sample into groups, some of which may contain only a few observations.…

  3. Determining Directional Dependency in Causal Associations

    PubMed Central

    Pornprasertmanit, Sunthud; Little, Todd D.

    2014-01-01

    Directional dependency is a method to determine the likely causal direction of effect between two variables. This article aims to critique and improve upon the use of directional dependency as a technique to infer causal associations. We comment on several issues raised by von Eye and DeShon (2012), including: encouraging the use of the signs of skewness and excess kurtosis of both variables, discouraging the use of D’Agostino’s K2, and encouraging the use of directional dependency to compare variables only within time points. We offer improved steps for determining directional dependency that fix the problems we note. Next, we discuss how to integrate directional dependency into longitudinal data analysis with two variables. We also examine the accuracy of directional dependency evaluations when several regression assumptions are violated. Directional dependency can suggest the direction of a relation if (a) the regression error in the population is normal, (b) any unobserved explanatory variable correlates with the observed variables at .2 or less, (c) a curvilinear relation between the variables is not strong (standardized regression coefficient ≤ .2), (d) there are no bivariate outliers, and (e) both variables are continuous. PMID:24683282
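The skewness-based reasoning summarized above can be illustrated with a simulation: when y is generated from a skewed x plus normal error, the outcome's skewness is attenuated (for standardized variables, skew(y) = ρ³ · skew(x)), so the less-skewed variable is the more plausible outcome. A minimal sketch; the exponential distribution and effect size ρ = 0.6 are illustrative assumptions, not the article's data.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
n = 200_000

# x is the (skewed) cause; y = rho * x + normal error. In this model
# skew(y) = rho**3 * skew(x): the outcome is always less skewed than the cause.
x = rng.exponential(size=n)
x = (x - x.mean()) / x.std()          # standardize; skewness of Exp(1) is 2
rho = 0.6
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)

print(skew(x), skew(y))  # roughly 2.0 and 0.6**3 * 2.0 = 0.432
```

Comparing the absolute skewness of the two variables then suggests x → y rather than y → x.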

  4. Predictive factors in patients with hepatocellular carcinoma receiving sorafenib therapy using time-dependent receiver operating characteristic analysis.

    PubMed

    Nishikawa, Hiroki; Nishijima, Norihiro; Enomoto, Hirayuki; Sakamoto, Azusa; Nasu, Akihiro; Komekado, Hideyuki; Nishimura, Takashi; Kita, Ryuichi; Kimura, Toru; Iijima, Hiroko; Nishiguchi, Shuhei; Osaki, Yukio

    2017-01-01

    To investigate the influence of pre-treatment variables on clinical outcomes in hepatocellular carcinoma (HCC) patients receiving sorafenib, and to assess and compare the predictive performance of continuous parameters using time-dependent receiver operating characteristic (ROC) analysis. A total of 225 HCC patients were analyzed. We retrospectively examined factors related to overall survival (OS) and progression-free survival (PFS) using univariate and multivariate analyses. Subsequently, we performed time-dependent ROC analysis of the continuous parameters that were significant in the multivariate analyses of OS and PFS. The total sum of the areas under the ROC curves across all time points (the TAAT score) was calculated for each variable. Our cohort included 175 male and 50 female patients (median age, 72 years), comprising 158 Child-Pugh A and 67 Child-Pugh B patients. The median OS was 0.68 years, while the median PFS was 0.24 years. On multivariate analysis, gender, body mass index (BMI), Child-Pugh classification, extrahepatic metastases, tumor burden, aspartate aminotransferase (AST) and alpha-fetoprotein (AFP) were identified as significant predictors of OS, and ECOG performance status, Child-Pugh classification and extrahepatic metastases were identified as significant predictors of PFS. Among the three continuous variables (BMI, AST and AFP), AFP had the highest TAAT score for the entire cohort, and in subgroup analyses it had the highest TAAT score in all subgroups except Child-Pugh B and female patients. Among the continuous variables, AFP thus appears to have the highest predictive accuracy for survival in HCC patients undergoing sorafenib therapy.

  5. Dependence of vestibular reactions on frequency of action of sign-variable accelerations

    NASA Technical Reports Server (NTRS)

    Lapayev, E. V.; Vorobyev, O. A.; Ivanov, V. V.

    1980-01-01

    Tests with continuous sign-variable Coriolis acceleration showed that the development of kinetosis was proportional to the duration of head inclinations in the range of 1 to 4 seconds, while illusions of rocking in the sagittal plane were more pronounced during fast inclinations. The data provide evidence of a substantial dependence of vestibulovegetative and vestibulosensory reactions on the repetition period of sign-variable Coriolis acceleration.

  6. Current Directions in Mediation Analysis

    PubMed Central

    MacKinnon, David P.; Fairchild, Amanda J.

    2010-01-01

    Mediating variables continue to play an important role in psychological theory and research. A mediating variable transmits the effect of an antecedent variable on to a dependent variable, thereby providing more detailed understanding of relations among variables. Methods to assess mediation have been an active area of research for the last two decades. This paper describes the current state of methods to investigate mediating variables. PMID:20157637

  7. Learning Styles in Continuing Medical Education.

    ERIC Educational Resources Information Center

    Bennet, Nancy L.; Fox, Robert D.

    1984-01-01

    Synthesizes literature on cognitive style and considers issues about its role in continuing medical education research, including whether it should be used as a dependent or independent variable and how it may be used in causal models. (SK)

  8. Continuous variable quantum key distribution with modulated entangled states.

    PubMed

    Madsen, Lars S; Usenko, Vladyslav C; Lassen, Mikael; Filip, Radim; Andersen, Ulrik L

    2012-01-01

    Quantum key distribution enables two remote parties to grow a shared key, which they can use for unconditionally secure communication over a certain distance. The maximal distance depends on the loss and the excess noise of the connecting quantum channel. Several quantum key distribution schemes based on coherent states and continuous variable measurements are resilient to high loss in the channel, but are strongly affected by small amounts of channel excess noise. Here we propose and experimentally address a continuous variable quantum key distribution protocol that uses modulated fragile entangled states of light to greatly enhance the robustness to channel noise. We experimentally demonstrate that the resulting quantum key distribution protocol can tolerate more noise than the benchmark set by the ideal continuous variable coherent state protocol. Our scheme represents a very promising avenue for extending the distance for which secure communication is possible.

  9. SIZE DEPENDENT MODEL OF HAZARDOUS SUBSTANCES IN AN AQUATIC FOOD CHAIN

    EPA Science Inventory

    A model of toxic substance accumulation is constructed that introduces organism size as an additional independent variable. The model represents an ecological continuum through size dependency; classical compartment analyses are therefore a special case of the continuous model. S...

  10. Multiplicative Forests for Continuous-Time Processes

    PubMed Central

    Weiss, Jeremy C.; Natarajan, Sriraam; Page, David

    2013-01-01

    Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability. PMID:25284967

  11. Multiplicative Forests for Continuous-Time Processes.

    PubMed

    Weiss, Jeremy C; Natarajan, Sriraam; Page, David

    2012-01-01

    Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability.

  12. A Two-Step Approach to Analyze Satisfaction Data

    ERIC Educational Resources Information Center

    Ferrari, Pier Alda; Pagani, Laura; Fiorio, Carlo V.

    2011-01-01

    In this paper a two-step procedure based on Nonlinear Principal Component Analysis (NLPCA) and Multilevel models (MLM) for the analysis of satisfaction data is proposed. The basic hypothesis is that observed ordinal variables describe different aspects of a latent continuous variable, which depends on covariates connected with individual and…

  13. On Correlations, Distances and Error Rates.

    ERIC Educational Resources Information Center

    Dorans, Neil J.

    The nature of the criterion (dependent) variable may play a useful role in structuring a list of classification/prediction problems. Such criteria are continuous in nature, dichotomous (binary), or multichotomous. In this paper, discussion is limited to continuous, normally distributed criterion scenarios. For both cases, it is assumed that the…

  14. Learning dependence from samples.

    PubMed

    Seth, Sohan; Príncipe, José C

    2014-01-01

    Mutual information, conditional mutual information and interaction information have been widely used in the scientific literature as measures of dependence, conditional dependence and mutual dependence. However, these concepts suffer from several computational issues: they are difficult to estimate in the continuous domain, the existing regularised estimators are almost always defined only for real or vector-valued random variables, and these measures address what dependence, conditional dependence and mutual dependence imply in terms of the random variables, but not in terms of finite realisations. In this paper, we address the question of what characteristic, given a set of realisations in an arbitrary metric space, makes them dependent, conditionally dependent or mutually dependent. With this novel understanding, we develop new estimators of association, conditional association and interaction association. Some attractive properties of these estimators are that they do not require choosing free parameter(s), they are computationally simpler, and they can be applied to arbitrary metric spaces.
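The paper's own estimators are not reproduced here, but a related distance-based dependence measure, sample distance correlation (Székely et al.), shows the flavor of detecting association from pairwise distances between realisations. The data below are arbitrary illustrative draws.

```python
import numpy as np

def dcor(x, y):
    """Sample distance correlation between two 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])   # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # Double-center each distance matrix.
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(8)
x = rng.normal(size=800)
d_dep = dcor(x, x**2)                     # nonlinear but strong dependence
d_ind = dcor(x, rng.normal(size=800))     # independent
print(d_dep, d_ind)
```

Unlike Pearson correlation (which is near zero for y = x²), distance correlation registers the nonlinear dependence.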

  15. An IRT Model with a Parameter-Driven Process for Change

    ERIC Educational Resources Information Center

    Rijmen, Frank; De Boeck, Paul; van der Maas, Han L. J.

    2005-01-01

    An IRT model with a parameter-driven process for change is proposed. Quantitative differences between persons are taken into account by a continuous latent variable, as in common IRT models. In addition, qualitative inter-individual differences and auto-dependencies are accounted for by assuming within-subject variability with respect to the…

  16. Mixed and Mixture Regression Models for Continuous Bounded Responses Using the Beta Distribution

    ERIC Educational Resources Information Center

    Verkuilen, Jay; Smithson, Michael

    2012-01-01

    Doubly bounded continuous data are common in the social and behavioral sciences. Examples include judged probabilities, confidence ratings, derived proportions such as percent time on task, and bounded scale scores. Dependent variables of this kind are often difficult to analyze using normal theory models because their distributions may be quite…
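As a minimal illustration of handling a doubly bounded response, one common stand-in (not the paper's beta-regression likelihood) is to model the response on the logit scale, where the mean structure is linear again. The data-generating slope of 0.8 and noise level are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000

# A response squeezed into (0, 1): linear structure lives on the logit scale.
x = rng.normal(size=n)
eta = 0.8 * x + rng.normal(scale=0.5, size=n)
y = 1.0 / (1.0 + np.exp(-eta))            # bounded response in (0, 1)

# OLS slope of logit(y) on x recovers the latent linear coefficient.
slope_logit = np.cov(np.log(y / (1 - y)), x)[0, 1] / np.var(x)
print(slope_logit)  # close to the true 0.8 on the logit scale
```

A normal-theory model fit to raw y would be strained near the bounds, which is the difficulty the abstract points to; the beta distribution addresses it with a likelihood defined on (0, 1).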

  17. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable.

    PubMed

    Austin, Peter C; Steyerberg, Ewout W

    2012-06-20

    When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examine the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
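The binormal equal-variance result summarized above — the c-statistic depends on the product of the standard deviation and the log-odds ratio, c = Φ(βσ/√2) — can be checked with a quick Monte Carlo. Parameter values here are illustrative, not the paper's simulation settings.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Binormal setup with equal variances: X ~ N(0, s^2) in those without the
# condition and N(mu, s^2) in those with it. The logistic log-odds ratio per
# unit of X is beta = mu / s^2, so c should equal Phi(beta*s/sqrt(2)).
mu, s, n = 1.0, 1.0, 50_000
x0 = rng.normal(0.0, s, n)      # without the condition
x1 = rng.normal(mu, s, n)       # with the condition

# Empirical c-statistic = P(X_case > X_control), via the Mann-Whitney ranks.
ranks = np.argsort(np.argsort(np.concatenate([x0, x1])))  # 0-based ranks
c_emp = (ranks[n:].sum() - n * (n - 1) / 2) / (n * n)

c_theory = norm.cdf(mu / (s * np.sqrt(2)))
print(c_emp, c_theory)          # both close to 0.760
```

The same odds ratio paired with a smaller predictor standard deviation would give a smaller c, which is the paper's point about heterogeneity.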

  18. Gate sequence for continuous variable one-way quantum computation

    PubMed Central

    Su, Xiaolong; Hao, Shuhong; Deng, Xiaowei; Ma, Lingyu; Wang, Meihong; Jia, Xiaojun; Xie, Changde; Peng, Kunchi

    2013-01-01

    Measurement-based one-way quantum computation using cluster states as resources provides an efficient model to perform computation and information processing of quantum codes. Arbitrary Gaussian quantum computation can be implemented sufficiently by long single-mode and two-mode gate sequences. However, continuous variable gate sequences have not been realized so far due to an absence of cluster states larger than four submodes. Here we present the first continuous variable gate sequence consisting of a single-mode squeezing gate and a two-mode controlled-phase gate based on a six-mode cluster state. The quantum property of this gate sequence is confirmed by the fidelities and the quantum entanglement of two output modes, which depend on both the squeezing and controlled-phase gates. The experiment demonstrates the feasibility of implementing Gaussian quantum computation by means of accessible gate sequences.

  19. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
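A minimal sketch of the correction described here, assuming a classical measurement-error model and a reliability study with one repeated measurement; the true slope of 0.5 and error variance of 1 are illustrative choices, not the article's clamp data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# True continuous risk factor x and continuous outcome y with true slope 0.5.
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=1.0, size=n)

# x is observed with random measurement error; two replicates play the role
# of the reliability study. The naive slope is attenuated by the reliability
# ratio lambda = var(x) / var(x + error).
err_sd = 1.0
w1 = x + rng.normal(scale=err_sd, size=n)   # measurement used in the regression
w2 = x + rng.normal(scale=err_sd, size=n)   # repeat measurement

naive = np.cov(w1, y)[0, 1] / np.var(w1)
reliability = np.cov(w1, w2)[0, 1] / (0.5 * (np.var(w1) + np.var(w2)))
corrected = naive / reliability

print(naive, corrected)  # about 0.25 (diluted) and 0.5 (true slope)
```

With var(x) = var(error) = 1 the reliability ratio is 0.5, so the uncorrected slope is biased downward by half — the regression dilution the article describes.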

  20. Multi-scale evaluation of the environmental controls on burn probability in a southern Sierra Nevada landscape

    Treesearch

    Sean A. Parks; Marc-Andre Parisien; Carol Miller

    2011-01-01

    We examined the scale-dependent relationship between spatial fire likelihood or burn probability (BP) and some key environmental controls in the southern Sierra Nevada, California, USA. Continuous BP estimates were generated using a fire simulation model. The correspondence between BP (dependent variable) and elevation, ignition density, fuels and aspect was evaluated...

  21. What Information Is Necessary for Speech Categorization? Harnessing Variability in the Speech Signal by Integrating Cues Computed Relative to Expectations

    ERIC Educational Resources Information Center

    McMurray, Bob; Jongman, Allard

    2011-01-01

    Most theories of categorization emphasize how continuous perceptual information is mapped to categories. However, equally important are the informational assumptions of a model, the type of information subserving this mapping. This is crucial in speech perception where the signal is variable and context dependent. This study assessed the…

  22. Continuous-variable quantum teleportation with non-Gaussian resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dell'Anno, F. (Dipartimento di Fisica, Università degli Studi di Salerno, Via S. Allende, I-84081 Baronissi; CNR-INFM Coherentia, Napoli; CNISM Unità di Salerno; INFN Sezione di Napoli, Gruppo Collegato di Salerno)

    2007-08-15

    We investigate continuous variable quantum teleportation using non-Gaussian states of the radiation field as entangled resources. We compare the performance of different classes of degaussified resources, including two-mode photon-added and two-mode photon-subtracted squeezed states. We then introduce a class of two-mode squeezed Bell-like states with one-parameter dependence for optimization. These states interpolate between and include as subcases different classes of degaussified resources. We show that optimized squeezed Bell-like resources yield a remarkable improvement in the fidelity of teleportation both for coherent and nonclassical input states. The investigation reveals that the optimal non-Gaussian resources for continuous variable teleportation are those that most closely realize the simultaneous maximization of the entanglement content, the degree of affinity with the two-mode squeezed vacuum, and the suitably measured amount of non-Gaussianity.

  23. Spatial generalised linear mixed models based on distances.

    PubMed

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with maximum normalised-difference vegetation index and the standard deviation of normalised-difference vegetation index calculated from repeated satellite scans over time.

  24. Correlation and agreement: overview and clarification of competing concepts and measures.

    PubMed

    Liu, Jinyuan; Tang, Wan; Chen, Guanqin; Lu, Yin; Feng, Changyong; Tu, Xin M

    2016-04-25

    Agreement and correlation are widely used concepts that assess the association between variables. Although similar and related, they represent completely different notions of association. Assessing agreement between variables assumes that the variables measure the same construct, while correlation of variables can be assessed for variables that measure completely different constructs. This conceptual difference requires the use of different statistical methods, and when assessing agreement or correlation, the statistical method may vary depending on the distribution of the data and the interest of the investigator. For example, the Pearson correlation, a popular measure of correlation between continuous variables, is only informative when applied to variables that have linear relationships; it may be non-informative or even misleading when applied to variables that are not linearly related. Likewise, the intraclass correlation, a popular measure of agreement between continuous variables, may not provide sufficient information for investigators if the nature of poor agreement is of interest. This report reviews the concepts of agreement and correlation and discusses differences in the application of several commonly used measures.
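The distinction can be made concrete: two raters that differ by a constant offset are almost perfectly correlated yet agree poorly. Lin's concordance correlation coefficient is used below as one common agreement measure; the offset of 2 and the noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Two "raters" that are almost perfectly correlated but disagree
# systematically: rater B reads 2 units higher than rater A.
a = rng.normal(size=n)
b = a + 2.0 + rng.normal(scale=0.1, size=n)

pearson = np.corrcoef(a, b)[0, 1]

# Lin's concordance correlation coefficient penalizes the systematic offset.
ccc = 2 * np.cov(a, b)[0, 1] / (np.var(a) + np.var(b) + (a.mean() - b.mean()) ** 2)

print(pearson, ccc)  # pearson near 1, ccc far below 1
```

High correlation with low concordance is exactly the case where reporting only Pearson's r would mislead.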

  25. GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.

    PubMed

    Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N

    2018-01-01

    Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete and the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.

  26. MIMICKING COUNTERFACTUAL OUTCOMES TO ESTIMATE CAUSAL EFFECTS.

    PubMed

    Lok, Judith J

    2017-04-01

    In observational studies, treatment may be adapted to covariates at several times without a fixed protocol, in continuous time. Treatment influences covariates, which influence treatment, which influences covariates, and so on. Then even time-dependent Cox-models cannot be used to estimate the net treatment effect. Structural nested models have been applied in this setting. Structural nested models are based on counterfactuals: the outcome a person would have had had treatment been withheld after a certain time. Previous work on continuous-time structural nested models assumes that counterfactuals depend deterministically on observed data, while conjecturing that this assumption can be relaxed. This article proves that one can mimic counterfactuals by constructing random variables, solutions to a differential equation, that have the same distribution as the counterfactuals, even given past observed data. These "mimicking" variables can be used to estimate the parameters of structural nested models without assuming the treatment effect to be deterministic.

  27. Preservice and Inservice Education: A Case for Teacher Aides. Teacher Education Forum; Volume 3, Number 3.

    ERIC Educational Resources Information Center

    Calvin, Richmond E.

    The continued existence and value of teacher aides in school districts throughout America is dependent on the successful manipulation of a number of variables. These are quite divergent and vary from school district to school district and often within individual schools. The successful operation of each teacher aide program depends on the…

  28. Benford's law and continuous dependent random variables

    NASA Astrophysics Data System (ADS)

    Becker, Thealexa; Burt, David; Corcoran, Taylor C.; Greaves-Tunnell, Alec; Iafrate, Joseph R.; Jing, Joy; Miller, Steven J.; Porfilio, Jaclyn D.; Ronan, Ryan; Samranvedhya, Jirapat; Strauch, Frederick W.; Talbut, Blaine

    2018-01-01

    Many mathematical, man-made and natural systems exhibit a leading-digit bias, where a first digit (base 10) of 1 occurs not 11% of the time, as one would expect if all digits were equally likely, but rather 30%. This phenomenon is known as Benford's Law. Analyzing which datasets adhere to Benford's Law and how quickly Benford behavior sets in are the two most important problems in the field. Most previous work studied systems of independent random variables, and relied on the independence in their analyses. Inspired by natural processes such as particle decay, we study the dependent random variables that emerge from models of decomposition of conserved quantities. We prove that in many instances the distribution of lengths of the resulting pieces converges to Benford behavior as the number of divisions grows, and give several conjectures for other fragmentation processes. The main difficulty is that the resulting random variables are dependent. We handle this by using tools from Fourier analysis and irrationality exponents to obtain quantified convergence rates, as well as by introducing and developing techniques to measure and control the dependencies. The construction of these tools is one of the major motivations of this work, as our approach can be applied to many other dependent systems. As an example, we show that the n! entries in the determinant expansions of n × n matrices with entries independently drawn from nice random variables converge to Benford's Law.
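A toy version of the fragmentation processes studied here, assuming the simplest scheme (each round, every piece is split in two at an independent uniform point); the number of rounds is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(5)

# Repeatedly fragment a conserved quantity of length 1. Piece lengths become
# products of many factors, and their leading digits approach Benford's law
# even though the pieces are dependent (they must sum to 1).
pieces = np.array([1.0])
for _ in range(12):
    u = rng.uniform(size=pieces.size)
    pieces = np.concatenate([pieces * u, pieces * (1 - u)])

# Leading (base-10) digit of each piece length.
first_digit = (pieces / 10.0 ** np.floor(np.log10(pieces))).astype(int)
freq = np.bincount(first_digit, minlength=10)[1:] / pieces.size
benford = np.log10(1 + 1 / np.arange(1, 10))
print(np.abs(freq - benford).max())  # small maximum deviation from Benford
```

After 12 rounds there are 4096 pieces, and the observed first-digit frequencies sit close to log10(1 + 1/d).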

  29. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable

    PubMed Central

    2012-01-01

    Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examine the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in combined sample of those with and without the condition. Results Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998

  30. Robust shot-noise measurement for continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Kunz-Jacques, Sébastien; Jouguet, Paul

    2015-02-01

    We study a practical method to measure the shot noise in real time in continuous-variable quantum key distribution systems. The amount of secret key that can be extracted from the raw statistics depends strongly on this quantity since it affects in particular the computation of the excess noise (i.e., noise in excess of the shot noise) added by an eavesdropper on the quantum channel. Some powerful quantum hacking attacks relying on faking the estimated value of the shot noise to hide an intercept and resend strategy were proposed. Here, we provide experimental evidence that our method can defeat the saturation attack and the wavelength attack.

  31. Mediating the distal crime-drug relationship with proximal reactive criminal thinking.

    PubMed

    Walters, Glenn D

    2016-02-01

    This article describes the results of a study designed to test whether reactive criminal thinking (RCT) does a better job of mediating the crime → drug relationship than it does mediating the drug → crime relationship after the direct effects of crime on drug use/dependency and of drug use/dependency on crime have been rendered nonsignificant by control variables. All 1,170 male members of the Pathways to Desistance study (Mulvey, 2012) served as participants in the current investigation. As predicted, the total (unmediated) effects of crime on substance use/dependence and of substance use/dependence on crime were nonsignificant when key demographic and third variables were controlled, although the indirect (RCT-mediated) effect of crime on drug use was significant. Proactive criminal thinking (PCT), by comparison, failed to mediate either relationship. RCT continued to mediate the crime → drug relationship, and PCT continued not to mediate either relationship, when more specific forms of offending (aggressive, income) and substance use/dependence (drug use, substance-use dependency symptoms) were analyzed. This offers preliminary support for the notion that even when the total crime-drug effect is nonsignificant, the indirect path from crime to reactive criminal thinking to drugs can still be significant. Based on these results, it is concluded that mediation by proximal reactive criminal thinking is a mechanism by which distal measures of crime and drug use/dependence are connected.

  12. Operator- and software-related post-experimental variability and source of error in 2-DE analysis.

    PubMed

    Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo

    2012-05-01

    In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has much potential for improvement. The overall variability that affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we focus on this topic and explain that post-experimental variability and sources of error can be further divided into those that are software-dependent and those that are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, and summarizing the advantages and drawbacks of each approach.

  13. Multinomial logistic regression in workers' health

    NASA Astrophysics Data System (ADS)

    Grilo, Luís M.; Grilo, Helena L.; Gonçalves, Sónia P.; Junça, Ana

    2017-11-01

    In European countries, namely in Portugal, it is common to hear people mention that they are exposed to excessive and continuous psychosocial stressors at work. This is increasing in diverse activity sectors, such as the services sector. A representative sample was collected from a Portuguese services organization by applying an internationally validated survey whose variables were measured in five ordered categories on a Likert-type scale. A multinomial logistic regression model is used to estimate the probability of each category of the dependent variable, general health perception, where, among other independent variables, burnout appears as statistically significant.
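
    As a hedged illustration of the kind of model the abstract describes (not the authors' analysis), the sketch below fits a multinomial logistic regression to simulated Likert-style data; the variable names and effect sizes are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated stand-in for the survey data: a continuous burnout score and a
# 5-category general health perception outcome (names are assumptions).
rng = np.random.default_rng(0)
n = 500
burnout = rng.normal(0.0, 1.0, n)

# Category log-odds shift with burnout; category 0 serves as the baseline.
logits = np.column_stack([k * 0.8 * burnout for k in range(5)])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
health = np.array([rng.choice(5, p=p) for p in probs])

# The default lbfgs solver fits a multinomial (softmax) model.
model = LogisticRegression(max_iter=1000)
model.fit(burnout.reshape(-1, 1), health)
print(model.predict_proba([[1.5]]))  # estimated P(category | burnout = 1.5)
```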

  14. Motivation as an independent and a dependent variable in medical education: a review of the literature.

    PubMed

    Kusurkar, R A; Ten Cate, Th J; van Asperen, M; Croiset, G

    2011-01-01

    Motivation in learning behaviour and education is well researched in general education, but less so in medical education. This review aims to answer two research questions in the light of the Self-Determination Theory (SDT) of motivation: 'How has the literature studied motivation as either an independent or dependent variable?' and 'How is motivation useful in predicting and understanding processes and outcomes in medical education?' A literature search performed using the PubMed, PsycINFO and ERIC databases resulted in 460 articles. The inclusion criteria were empirical research, specific measurement of motivation, and well-designed qualitative studies. Only studies related to medical students/schools were included. Findings of 56 articles were included in the review. Motivation as an independent variable appears to affect learning and study behaviour, academic performance, choice of medicine and specialty within medicine, and intention to continue medical study. Motivation as a dependent variable appears to be affected by age, gender, ethnicity, socioeconomic status, personality, year of medical curriculum, and teacher and peer support, none of which can be manipulated by medical educators. Motivation is also affected by factors that can be influenced, among which are autonomy, competence and relatedness, which have been described as the basic psychological needs important for intrinsic motivation according to SDT. Motivation is an independent variable in medical education influencing important outcomes and is also a dependent variable influenced by autonomy, competence and relatedness. This review finds some evidence in support of the validity of SDT in medical education.

  15. Hybrid Discrete-Continuous Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich

    2003-01-01

    This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the mono-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.

  16. Old and New Ideas for Data Screening and Assumption Testing for Exploratory and Confirmatory Factor Analysis

    PubMed Central

    Flora, David B.; LaBrish, Cathy; Chalmers, R. Philip

    2011-01-01

    We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables, and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables. PMID:22403561

  17. Human activities and climate variability drive fast-paced change across the world's estuarine-coastal ecosystems

    USGS Publications Warehouse

    Cloern, James E.; Abreu, Paulo C.; Carstensen, Jacob; Chauvaud, Laurent; Elmgren, Ragnar; Grall, Jacques; Greening, Holly; Johansson, John O.R.; Kahru, Mati; Sherwood, Edward T.; Xu, Jie; Yin, Kedong

    2016-01-01

    Time series of environmental measurements are essential for detecting, measuring and understanding changes in the Earth system and its biological communities. Observational series have accumulated over the past 2–5 decades from measurements across the world's estuaries, bays, lagoons, inland seas and shelf waters influenced by runoff. We synthesize information contained in these time series to develop a global view of changes occurring in marine systems influenced by connectivity to land. Our review is organized around four themes: (i) human activities as drivers of change; (ii) variability of the climate system as a driver of change; (iii) successes, disappointments and challenges of managing change at the sea-land interface; and (iv) discoveries made from observations over time. Multidecadal time series reveal that many of the world's estuarine–coastal ecosystems are in a continuing state of change, and the pace of change is faster than we could have imagined a decade ago. Some have been transformed into novel ecosystems with habitats, biogeochemistry and biological communities outside the natural range of variability. Change takes many forms including linear and nonlinear trends, abrupt state changes and oscillations. The challenge of managing change is daunting in the coastal zone where diverse human pressures are concentrated and intersect with different responses to climate variability over land and over ocean basins. The pace of change in estuarine–coastal ecosystems will likely accelerate as the human population and economies continue to grow and as global climate change accelerates. Wise stewardship of the resources upon which we depend is critically dependent upon a continuing flow of information from observations to measure, understand and anticipate future changes along the world's coastlines.

  18. The use of auxiliary variables in capture-recapture and removal experiments

    USGS Publications Warehouse

    Pollock, K.H.; Hines, J.E.; Nichols, J.D.

    1984-01-01

    The dependence of animal capture probabilities on auxiliary variables is an important practical problem which has not been considered in the development of estimation procedures for capture-recapture and removal experiments. In this paper the linear logistic binary regression model is used to relate the probability of capture to continuous auxiliary variables. The auxiliary variables could be environmental quantities such as air or water temperature, or characteristics of individual animals, such as body length or weight. Maximum likelihood estimators of the population parameters are considered for a variety of models which all assume a closed population. Testing between models is also considered. The models can also be used when one auxiliary variable is a measure of the effort expended in obtaining the sample.

  19. Intervention Adherence for Research and Practice: Necessity or Triage Outcome?

    ERIC Educational Resources Information Center

    Barnett, David; Hawkins, Renee; Lentz, F. Edward, Jr.

    2011-01-01

    Intervention integrity or adherence describes qualities of carrying out an intervention plan and in research is fundamentally linked to experimental validity questions addressed by measurement of independent and dependent variables. Integrity has been well described in conceptual writing but has been a continuing thorny subject in research and…

  20. Identifying the Factors That Influence Change in SEBD Using Logistic Regression Analysis

    ERIC Educational Resources Information Center

    Camilleri, Liberato; Cefai, Carmel

    2013-01-01

    Multiple linear regression and ANOVA models are widely used in applications since they provide effective statistical tools for assessing the relationship between a continuous dependent variable and several predictors. However these models rely heavily on linearity and normality assumptions and they do not accommodate categorical dependent…

  1. Modeling continuous covariates with a "spike" at zero: Bivariate approaches.

    PubMed

    Jenkner, Carolin; Lorenz, Eva; Becher, Heiko; Sauerbrei, Willi

    2016-07-01

    In epidemiology and clinical research, predictors often take value zero for a large amount of observations while the distribution of the remaining observations is continuous. These predictors are called variables with a spike at zero. Examples include smoking or alcohol consumption. Recently, an extension of the fractional polynomial (FP) procedure, a technique for modeling nonlinear relationships, was proposed to deal with such situations. To indicate whether or not a value is zero, a binary variable is added to the model. In a two stage procedure, called FP-spike, the necessity of the binary variable and/or the continuous FP function for the positive part are assessed for a suitable fit. In univariate analyses, the FP-spike procedure usually leads to functional relationships that are easy to interpret. This paper introduces four approaches for dealing with two variables with a spike at zero (SAZ). The methods depend on the bivariate distribution of zero and nonzero values. Bi-Sep is the simplest of the four bivariate approaches. It uses the univariate FP-spike procedure separately for the two SAZ variables. In Bi-D3, Bi-D1, and Bi-Sub, proportions of zeros in both variables are considered simultaneously in the binary indicators. Therefore, these strategies can account for correlated variables. The methods can be used for arbitrary distributions of the covariates. For illustration and comparison of results, data from a case-control study on laryngeal cancer, with smoking and alcohol intake as two SAZ variables, is considered. In addition, a possible extension to three or more SAZ variables is outlined. A combination of log-linear models for the analysis of the correlation in combination with the bivariate approaches is proposed. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Infilling and quality checking of discharge, precipitation and temperature data using a copula based approach

    NASA Astrophysics Data System (ADS)

    Anwar, Faizan; Bárdossy, András; Seidel, Jochen

    2017-04-01

    Estimating missing values in a time series of a hydrological variable is an everyday task for a hydrologist. Existing methods such as inverse distance weighting, multivariate regression, and kriging, though simple to apply, provide no indication of the quality of the estimated value and depend mainly on the values of neighboring stations at a given step in the time series. Copulas have the advantage of representing the pure dependence structure between two or more variables (given the relationship between them is monotonic). They rid us of questions such as transforming the data before use or calculating functions that model the relationship between the considered variables. A copula-based approach is suggested to infill discharge, precipitation, and temperature data. As a first step the normal copula is used, subsequently, the necessity to use non-normal / non-symmetrical dependence is investigated. Discharge and temperature are treated as regular continuous variables and can be used without processing for infilling and quality checking. Due to the mixed distribution of precipitation values, it has to be treated differently. This is done by assigning a discrete probability to the zeros and treating the rest as a continuous distribution. Building on the work of others, along with infilling, the normal copula is also utilized to identify values in a time series that might be erroneous. This is done by treating the available value as missing, infilling it using the normal copula and checking if it lies within a confidence band (5 to 95% in our case) of the obtained conditional distribution. Hydrological data from two catchments Upper Neckar River (Germany) and Santa River (Peru) are used to demonstrate the application for datasets with different data quality. The Python code used here is also made available on GitHub. The required input is the time series of a given variable at different stations.
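
    The normal-copula conditioning step described above can be sketched in a few lines (a simplified illustration on synthetic data, with a single neighbour station and no treatment of precipitation's zero spike):

```python
import numpy as np
from scipy import stats

# Two correlated synthetic series standing in for, e.g., discharge at a
# neighbour station and at the target station with a gap to infill.
rng = np.random.default_rng(2)
n = 300
neighbour = rng.gamma(2.0, 3.0, n)
target = 0.8 * neighbour + rng.gamma(1.0, 1.0, n)

def to_normal_scores(x):
    """Map a sample to standard-normal scores via its empirical ranks."""
    return stats.norm.ppf(stats.rankdata(x) / (len(x) + 1))

# Estimate the normal-copula correlation in normal-score space.
rho = np.corrcoef(to_normal_scores(neighbour), to_normal_scores(target))[0, 1]

# Infill one gap given the neighbour's value there.
v = 8.0
z = stats.norm.ppf((neighbour < v).mean())       # neighbour's normal score
lo, mid, hi = (rho * z + np.sqrt(1 - rho**2) * stats.norm.ppf(q)
               for q in (0.05, 0.5, 0.95))
estimate = np.quantile(target, stats.norm.cdf(mid))   # point infill
band = np.quantile(target, stats.norm.cdf([lo, hi]))  # 5-95% band
print(estimate, band)
```

    The same machinery flags suspect values: treat an observed value as missing, infill it, and check whether it falls inside the conditional band.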

  3. Living in a continuous traumatic reality: Impact on elderly persons residing in urban and rural communities.

    PubMed

    Regev, Irit; Nuttman-Shwartz, Orit

    2016-01-01

    This study is an exploration of the contribution of exposure to the continuous threat of Qassam rocket attacks to PTSD among elderly residents of urban and rural communities. Specifically, we examined the contribution of sociodemographic variables, psychological resources, and perceived social support to PTSD, and whether this relationship is mediated by cognitive appraisals. The sample consisted of 298 residents of 2 different communities: urban (n = 190) and rural (n = 108). We examined the main research question by calculating the correlations of the sociodemographic variables, the psychological resource (self-esteem), social support, and cognitive appraisals with the dependent variable (PTSD). Our model explained the variance in PTSD (53% for urban residents, and 56% for rural residents). Higher levels of PTSD were found among the urban residents. Most of the predictors contributed to PTSD, but differences were found between each type of community with regard to the combination of components. Results indicated that the type of community is related to the degree of protection against stress-related triggers such as Qassam rockets. The psychological resource (self-esteem) and cognitive appraisal variables were found to be important for older people facing a continuous threat, and can serve as a basis for professional intervention. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Continuous-Time Random Walk with multi-step memory: an application to market dynamics

    NASA Astrophysics Data System (ADS)

    Gubiec, Tomasz; Kutner, Ryszard

    2017-11-01

    An extended version of the Continuous-Time Random Walk (CTRW) model with memory is herein developed. This memory involves the dependence between an arbitrary number of successive jumps of the process, while waiting times between jumps are considered as i.i.d. random variables. This dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market within the high-frequency time scale. It was then justified theoretically by considering the bid-ask bounce mechanism, which contains some delay characteristic of any double-auction market. Our model turned out to be exactly analytically solvable. It therefore enables a direct comparison of its predictions with their empirical counterparts, for instance, with the empirical velocity autocorrelation function. Thus, the present research significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.

  5. Nonlinear Dynamics in Viscoelastic Jets

    NASA Astrophysics Data System (ADS)

    Majmudar, Trushant; Varagnat, Matthieu; McKinley, Gareth

    2008-11-01

    Instabilities in free surface continuous jets of non-Newtonian fluids, although relevant for many industrial processes, remain poorly understood in terms of fundamental fluid dynamics. Inviscid, and viscous Newtonian jets have been studied in considerable detail, both theoretically and experimentally. Instability in viscous jets leads to regular periodic coiling of the jet, which exhibits a non-trivial frequency dependence with the height of the fall. Here we present a systematic study of the effect of viscoelasticity on the dynamics of continuous jets of worm-like micellar surfactant solutions of varying viscosities and elasticities. We observe complex nonlinear spatio-temporal dynamics of the jet, and uncover a transition from periodic to quasi-periodic to a multi-frequency, broad-spectrum dynamics. Beyond this regime, the jet dynamics smoothly crosses over to exhibit the ``leaping shampoo'' or the Kaye effect. We examine different dynamical regimes in terms of scaling variables, which depend on the geometry (dimensionless height), kinematics (dimensionless flow rate), and the fluid properties (elasto-gravity number) and present a regime map of the dynamics of the jet in terms of these dimensionless variables.

  6. Nonlinear Dynamics in Viscoelastic Jets

    NASA Astrophysics Data System (ADS)

    Majmudar, Trushant; Varagnat, Matthieu; McKinley, Gareth

    2009-03-01

    Instabilities in free surface continuous jets of non-Newtonian fluids, although relevant for many industrial processes, remain poorly understood in terms of fundamental fluid dynamics. Inviscid, and viscous Newtonian jets have been studied in considerable detail, both theoretically and experimentally. Instability in viscous jets leads to regular periodic coiling of the jet, which exhibits a non-trivial frequency dependence with the height of the fall. Here we present a systematic study of the effect of viscoelasticity on the dynamics of continuous jets of worm-like micellar surfactant solutions of varying viscosities and elasticities. We observe complex nonlinear spatio-temporal dynamics of the jet, and uncover a transition from periodic to quasi-periodic to a multi-frequency, broad-spectrum dynamics. Beyond this regime, the jet dynamics smoothly crosses over to exhibit the ``leaping shampoo'' or the Kaye effect. We examine different dynamical regimes in terms of scaling variables, which depend on the geometry (dimensionless height), kinematics (dimensionless flow rate), and the fluid properties (elasto-gravity number) and present a regime map of the dynamics of the jet in terms of these dimensionless variables.

  7. VR/LE engine with a variable R/L during a single cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rychter, T.J.; Teodorczyk, A.

    1985-01-01

    A new engine concept, called a Variable R/L Engine (VR/LE), is presented. The main feature of the engine is the continuous change of the crank-radius to connecting-rod-length ratio (R/L) during a single engine cycle. Variations of the phase angle change all the engine stroke lengths and thereby alter the thermodynamic cycle of the engine. The phase angle variations therefore make it possible to regulate continuously the compression ratio and the displacement volume of the engine within a range that depends on the engine mechanism geometry. The presented concept can be applied to all types of IC piston engines, independently of their size and operating principle.

  8. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. 
MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
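
    For context, the univariate quantile mapping that MBCn generalizes can be sketched as follows (a minimal illustration with synthetic data, not the MBCn algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 2.0, 5000)          # "observed" climate variable
mod = rng.gamma(2.0, 3.0, 5000) + 1.0    # biased "model" output

def quantile_map(x, model_ref, obs_ref):
    """Replace each model value with the observed value at the same quantile."""
    q = np.searchsorted(np.sort(model_ref), x) / len(model_ref)
    return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

corrected = quantile_map(mod, mod, obs)
# The corrected series should now match the observed distribution closely.
print(obs.mean(), mod.mean(), corrected.mean())
```

    MBCn, roughly, iterates steps like this along random rotations of the multivariate data so that the full joint distribution, not just each margin, is transferred.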

  9. Pedagogical introduction to the entropy of entanglement for Gaussian states

    NASA Astrophysics Data System (ADS)

    Demarie, Tommaso F.

    2018-05-01

    In quantum information theory, the entropy of entanglement is a standard measure of bipartite entanglement between two partitions of a composite system. For a particular class of continuous variable quantum states, the Gaussian states, the entropy of entanglement can be expressed elegantly in terms of symplectic eigenvalues, elements that characterise a Gaussian state and depend on the correlations of the canonical variables. We give a rigorous step-by-step derivation of this result and provide physical insights, together with an example that can be useful in practice for calculations.
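
    As a worked sketch of the formula the abstract refers to (using the convention in which the vacuum symplectic eigenvalue is 1; the squeezing value is an arbitrary example):

```python
import numpy as np

def entropy_of_entanglement(nu):
    """Von Neumann entropy (in nats) of a reduced Gaussian mode with
    symplectic eigenvalue nu; zero for a pure mode (nu = 1)."""
    if nu <= 1.0:
        return 0.0
    a, b = (nu + 1.0) / 2.0, (nu - 1.0) / 2.0
    return a * np.log(a) - b * np.log(b)

# Two-mode squeezed vacuum with squeezing r: tracing out one mode leaves a
# thermal state whose 2x2 covariance matrix is cosh(2r) times the identity.
r = 1.0
cov_reduced = np.cosh(2.0 * r) * np.eye(2)
nu = np.sqrt(np.linalg.det(cov_reduced))  # symplectic eigenvalue = cosh(2r)
print(entropy_of_entanglement(nu))        # grows with the squeezing r
```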

  10. Capacitance variation measurement method with a continuously variable measuring range for a micro-capacitance sensor

    NASA Astrophysics Data System (ADS)

    Lü, Xiaozhou; Xie, Kai; Xue, Dongfeng; Zhang, Feng; Qi, Liang; Tao, Yebo; Li, Teng; Bao, Weimin; Wang, Songlin; Li, Xiaoping; Chen, Renjie

    2017-10-01

    Micro-capacitance sensors are widely applied in industrial applications for the measurement of mechanical variations. The measurement accuracy of micro-capacitance sensors is highly dependent on the capacitance measurement circuit. To overcome the inability of commonly used methods to directly measure capacitance variation and deal with the conflict between the measurement range and accuracy, this paper presents a capacitance variation measurement method which is able to measure the output capacitance variation (relative value) of the micro-capacitance sensor with a continuously variable measuring range. We present the principles and analyze the non-ideal factors affecting this method. To implement the method, we developed a capacitance variation measurement circuit and carried out experiments to test the circuit. The result shows that the circuit is able to measure a capacitance variation range of 0-700 pF linearly with a maximum relative accuracy of 0.05% and a capacitance range of 0-2 nF (with a baseline capacitance of 1 nF) with a constant resolution of 0.03%. The circuit is proposed as a new method to measure capacitance and is expected to have applications in micro-capacitance sensors for measuring capacitance variation with a continuously variable measuring range.

  11. Using the entire history in the analysis of nested case cohort samples.

    PubMed

    Rivera, C L; Lumley, T

    2016-08-15

    Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations, such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out, which considers four different scenarios: a binary time-dependent variable, a continuous time-dependent variable, and each of these with interactions. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency if compared to case-cohort. Pseudolikelihood with calibrated weights yielded more efficient estimators than pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case-cohort for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.
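
    The abstract's final point, generating survival times with time-varying covariates, can be illustrated by inverting a piecewise-constant cumulative hazard (a hedged sketch; the paper's general method may differ). Here a binary covariate switches on at a known time t0 and multiplies the hazard by exp(beta):

```python
import numpy as np

rng = np.random.default_rng(4)

def draw_survival_time(lam=0.1, beta=0.7, t0=5.0):
    """Invert H(T) = -log(U): hazard is lam before t0 and lam*exp(beta) after."""
    target = -np.log(rng.random())       # cumulative hazard at the event
    h0, h1 = lam, lam * np.exp(beta)
    if target <= h0 * t0:                # event occurs before the switch
        return target / h0
    return t0 + (target - h0 * t0) / h1  # remainder accrues at the new rate

times = np.array([draw_survival_time() for _ in range(20000)])
# Before t0 the survival curve should match exp(-lam * t) exactly.
print((times > 2.0).mean())  # approximately exp(-0.2)
```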

  12. Multi-Objective Optimization of a Turbofan for an Advanced, Single-Aisle Transport

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.; Guynn, Mark D.

    2012-01-01

    Considerable interest surrounds the design of the next generation of single-aisle commercial transports in the Boeing 737 and Airbus A320 class. Aircraft designers will depend on advanced, next-generation turbofan engines to power these airplanes. The focus of this study is to apply single- and multi-objective optimization algorithms to the conceptual design of ultrahigh bypass turbofan engines for this class of aircraft, using NASA's Subsonic Fixed Wing Project metrics as multidisciplinary objectives for optimization. The independent design variables investigated include three continuous variables: sea level static thrust, wing reference area, and aerodynamic design point fan pressure ratio, and four discrete variables: overall pressure ratio, fan drive system architecture (i.e., direct- or gear-driven), bypass nozzle architecture (i.e., fixed- or variable-geometry), and the high- and low-pressure compressor work split. Ramp weight, fuel burn, noise, and emissions are the parameters treated as dependent objective functions. These optimized solutions provide insight into the ultrahigh bypass engine design process and provide information to NASA program management to help guide its technology development efforts.

  13. An open-source software package for multivariate modeling and clustering: applications to air quality management.

    PubMed

    Wang, Xiuquan; Huang, Guohe; Zhao, Shan; Guo, Junhong

    2015-09-01

    This paper presents an open-source software package, rSCA, which is developed based upon a stepwise cluster analysis method and serves as a statistical tool for modeling the relationships between multiple dependent and independent variables. The rSCA package is efficient in dealing with both continuous and discrete variables, as well as nonlinear relationships between the variables. It divides the sample sets of dependent variables into different subsets (or subclusters) through a series of cutting and merging operations based upon the theory of multivariate analysis of variance (MANOVA). The modeling results are given by a cluster tree, which includes both intermediate and leaf subclusters as well as the flow paths from the root of the tree to each leaf subcluster specified by a series of cutting and merging actions. The rSCA package is a handy and easy-to-use tool and is freely available at http://cran.r-project.org/package=rSCA . By applying the developed package to air quality management in an urban environment, we demonstrate its effectiveness in dealing with the complicated relationships among multiple variables in real-world problems.

  14. Language Shift in the United States and Foreign-Born Older Mexican Heritage Individuals: Co-ethnic Context for Language Resistance

    PubMed Central

    Siordia, Carlos; Díaz, María E.

    2014-01-01

    In this study, we investigate individual-level language shift in a population of Mexican-origin Latinos/as aged 65 and up. Using data from the Hispanic Established Populations for the Epidemiologic Study of the Elderly, we investigate their English language use as the dependent variable in a hierarchical linear model. The microlevel independent continuous variable is their level of contact with "Anglos"; the macrolevel continuous independent variable is the percentage of Mexicans in the tract of residence. After accounting for generational status, other microlevel social and health covariates, and tract-level attributes, we found a direct relationship between contact with Anglos and a "shift" toward more English language use; as co-ethnic concentration increases, the influence of contact with Anglos decreases. We frame this article with a discussion of language shifting, and explain how co-ethnic concentration may provide the resources for engaging in language resistance. PMID:25104874

  15. The variability of the rainfall rate as a function of area

    NASA Astrophysics Data System (ADS)

    Jameson, A. R.; Larsen, M. L.

    2016-01-01

Distributions of drop sizes can be expressed as DSD = Nt × PSD, where Nt is the total number of drops in a sample and PSD is the frequency distribution of drop diameters (D). The discovery of this factorization permitted remote sensing techniques for rainfall estimation using radars and satellites measuring over large domains of several kilometers. Because these techniques depend heavily on higher moments of the PSD, there has been a bias toward attributing the variability of the intrinsic rainfall rates R over areas (σR) to the variability of the PSDs. While this variability does increase up to a point with increasing domain dimension L, the variability of the rainfall rate R also depends upon the variability in the total number of drops Nt. We show that while the importance of PSDs looms large for the small domains used in past studies, it is the variability of Nt that dominates the variability of R as L increases to 1 km and beyond. The PSDs contribute to the variability of R through the relative dispersion of χ = D³Vt, where Vt is the terminal fall speed of drops of diameter D. However, the variability of χ is inherently limited because drop sizes and fall speeds are physically limited. In contrast, the variance of Nt increases continuously as the domain expands, for physical reasons explained in the paper. Over domains larger than around 1 km, Nt dominates the variance of the rainfall rate with increasing L regardless of the PSD.
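The dominance of Nt in the abstract above follows from the law of total variance for a random sum: if R is the sum of Nt bounded per-drop contributions χ, then Var(R) = E[Nt]·Var(χ) + Var(Nt)·E[χ]², and only the second term can grow without bound as the domain expands. A minimal simulation sketch (all distributions and parameter values here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Law of total variance for a random sum R = sum_{i=1}^{Nt} chi_i:
#   Var(R) = E[Nt] * Var(chi) + Var(Nt) * E[chi]^2
# chi is bounded (drop sizes and fall speeds are physically limited), so
# Var(chi) saturates, while Var(Nt) can keep growing with domain size.

def simulate_var_R(mean_nt, var_nt, n_trials=200_000):
    """Simulate Var(R) when Nt is negative-binomial and chi is bounded."""
    # Parameterize the negative binomial from mean/variance (needs var > mean).
    p = mean_nt / var_nt
    r = mean_nt * p / (1.0 - p)
    nt = rng.negative_binomial(r, p, size=n_trials)
    # chi: stand-in for D^3 * Vt with modest spread; conditional on nt, the
    # sum of nt iid chi values is approximated as Normal(nt*mu, nt*sigma^2).
    mu, sigma = 1.0, 0.3
    r_vals = nt * mu + rng.standard_normal(n_trials) * sigma * np.sqrt(nt)
    return r_vals.var()

mean_nt = 1000.0
for var_nt in (2e3, 1e5, 1e7):   # growing Nt variance as the domain L grows
    v = simulate_var_R(mean_nt, var_nt)
    psd_term = mean_nt * 0.3**2      # E[Nt] * Var(chi): bounded
    nt_term = var_nt * 1.0**2        # Var(Nt) * E[chi]^2: unbounded
    print(f"Var(Nt)={var_nt:.0e}  Var(R)~{v:.3g}  "
          f"PSD term={psd_term:.3g}  Nt term={nt_term:.3g}")
```

As Var(Nt) grows, the Nt term swamps the bounded PSD term, mirroring the paper's conclusion for large domains.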

  16. Reducing the White-Nonwhite Achievement Gap.

    ERIC Educational Resources Information Center

    Ramey, Madelaine

    It is well documented that there continues to be a gap between white and nonwhite student achievement. A study develops and tests a measure of white-nonwhite achievement gap reduction. The ultimate purpose is to use the measure as the dependent variable in a qualitative study of what works in reducing the gap. The strategy used in addressing this…

  17. Have the temperature time series a structural change after 1998?

    NASA Astrophysics Data System (ADS)

    Werner, Rolf; Valev, Dimitare; Danov, Dimitar

    2012-07-01

The global and hemispheric GISS and HadCRUT3 temperature time series were analysed for structural changes. We require the piecewise temperature function to be continuous in time, and slopes are calculated for a sequence of segments delimited by time thresholds. We used a standard method, restricted linear regression with dummy variables, and performed the calculations and tests for different numbers of thresholds. The thresholds are searched continuously within specified time intervals, and the F-statistic is used to locate the time points of the structural changes.
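The restricted-regression approach in the abstract above can be sketched with a hinge term max(t − τ, 0): adding this regressor changes the slope at τ while keeping the fitted trend continuous, and scanning τ with an F-test compares the broken trend against a single line. A hedged sketch on synthetic data (the series, breakpoint, and noise level are invented, and the nominal p-value ignores the threshold search, which a sup-F correction would address):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic "temperature" series with a slope change at 1998,
# continuous at the breakpoint (no jump in level).
t = np.arange(1960, 2011, dtype=float)
y = 0.01 * (t - 1960) + 0.03 * np.maximum(t - 1998.0, 0.0)
y += rng.normal(0.0, 0.05, t.size)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

# Restricted model: a single trend line.
X0 = np.column_stack([np.ones_like(t), t])
rss0 = rss(X0, y)

# Unrestricted: for each candidate threshold tau, add a hinge term
# max(t - tau, 0); continuity of the fitted function is built in.
best = None
for tau in np.arange(1980.0, 2005.0, 0.25):
    X1 = np.column_stack([np.ones_like(t), t, np.maximum(t - tau, 0.0)])
    r = rss(X1, y)
    if best is None or r < best[1]:
        best = (tau, r)

tau_hat, rss1 = best
df1, df2 = 1, t.size - 3            # one extra slope parameter
F = (rss0 - rss1) / df1 / (rss1 / df2)
p = stats.f.sf(F, df1, df2)         # nominal p-value (threshold search ignored)
print(f"breakpoint ~ {tau_hat:.2f}, F = {F:.1f}, p = {p:.2g}")
```

With a slope change this strong relative to the noise, the scan recovers a breakpoint near 1998 and a highly significant F-statistic.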

  18. Shuttle Debris Impact Tool Assessment Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, R.; Rayos, E. M.; Campbell, C. H.; Rickman, S. L.

    2006-01-01

Computational tools have been developed to estimate thermal and mechanical reentry loads experienced by the Space Shuttle Orbiter as the result of cavities in the Thermal Protection System (TPS). Such cavities can be caused by impact from ice or insulating foam debris shed from the External Tank (ET) on liftoff. The reentry loads depend on cavity geometry and certain Shuttle state variables, among other factors. Certain simplifying assumptions have been made in the tool development about the cavity geometry variables. For example, the cavities are all modeled as "shoeboxes," with rectangular cross-sections and planar walls. So an actual cavity is typically approximated with an idealized cavity described in terms of its length, width, and depth, as well as its entry angle, exit angle, and side angles (assumed to be the same for both sides). As part of a comprehensive assessment of the uncertainty in reentry loads estimated by the debris impact assessment tools, an effort has been initiated to quantify the component of the uncertainty that is due to imperfect geometry specifications for the debris impact cavities. The approach is to compute predicted loads for a set of geometry factor combinations sufficient to develop polynomial approximations to the complex, nonparametric underlying computational models. Such polynomial models are continuous and feature estimable, continuous derivatives, conditions that facilitate the propagation of independent variable errors. As an additional benefit, once the polynomial models have been developed, they require fewer computational resources to execute than the underlying finite element and computational fluid dynamics codes, and can generate reentry loads estimates in significantly less time. This provides a practical screening capability, in which a large number of debris impact cavities can be quickly classified either as harmless, or subject to additional analysis with the more comprehensive underlying computational tools.
The polynomial models also provide useful insights into the sensitivity of reentry loads to various cavity geometry variables, and reveal complex interactions among those variables that indicate how the sensitivity of one variable depends on the level of one or more other variables. For example, the effect of cavity length on certain reentry loads depends on the depth of the cavity. Such interactions are clearly displayed in the polynomial response models.

  19. An involuntary stereotypical grasp tendency pervades voluntary dynamic multifinger manipulation

    PubMed Central

    Rácz, Kornelius; Brown, Daniel

    2012-01-01

    We used a novel apparatus with three hinged finger pads to characterize collaborative multifinger interactions during dynamic manipulation requiring individuated control of fingertip motions and forces. Subjects placed the thumb, index, and middle fingertips on each hinged finger pad and held it—unsupported—with constant total grasp force while voluntarily oscillating the thumb's pad. This task combines the need to 1) hold the object against gravity while 2) dynamically reconfiguring the grasp. Fingertip force variability in this combined motion and force task exhibited strong synchrony among normal (i.e., grasp) forces. Mechanical analysis and simulation show that such synchronous variability is unnecessary and cannot be explained solely by signal-dependent noise. Surprisingly, such variability also pervaded control tasks requiring different individuated fingertip motions and forces, but not tasks without finger individuation such as static grasp. These results critically extend notions of finger force variability by exposing and quantifying a pervasive challenge to dynamic multifinger manipulation: the need for the neural controller to carefully and continuously overlay individuated finger actions over mechanically unnecessary synchronous interactions. This is compatible with—and may explain—the phenomenology of strong coupling of hand muscles when this delicate balance is not yet developed, as in early childhood, or when disrupted, as in brain injury. We conclude that the control of healthy multifinger dynamic manipulation has barely enough neuromechanical degrees of freedom to meet the multiple demands of ecological tasks and critically depends on the continuous inhibition of synchronous grasp tendencies, which we speculate may be of vestigial evolutionary origin. PMID:22956798

  20. Efficacy of plasma-rich growth factor in the healing of postextraction sockets in patients affected by insulin-dependent diabetes mellitus.

    PubMed

    Mozzati, Marco; Gallesio, Giorgia; di Romana, Sara; Bergamasco, Laura; Pol, Renato

    2014-03-01

To evaluate the efficacy of plasma-rich growth factor (PRGF) in improving socket healing after tooth extraction in diabetic patients. This was a split-mouth study in which each patient also served as the control: the study socket was treated with PRGF, whereas the control socket underwent natural healing. The outcome variables were the Healing Index, residual socket volume, visual analog scale score, postsurgical complications, and outcome of a patient questionnaire. The investigation considered the impact of hyperglycemia, glycated hemoglobin, End Organ Disease Score, and smoking habits. Follow-up included 4 postextraction checkups over a 21-day period. Pairs of correlated continuous variables were analyzed with the Wilcoxon test, independent continuous variables with the Mann-Whitney test, and categorical variables with the χ² test or Fisher test. From January 2012 to December 2012, 34 patients affected by insulin-dependent diabetes mellitus underwent simultaneous bilateral extractions of homologous teeth. The treatment-versus-control postoperative comparison showed that PRGF resulted in significantly smaller residual socket volumes and better Healing Indices from days 3 to 14. The patients' questionnaire outcomes were unanimously in favor of PRGF treatment. The small sample of patients with glycemia values of at least 240 mg/dL showed worse Healing Indices and smaller decreases in socket volume. PRGF application after extraction improved the healing process in diabetic patients by accelerating socket closure (epithelialization) and tissue maturation, proving the association between PRGF use and improved wound healing in diabetic patients. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  1. Does raising type 1 error rate improve power to detect interactions in linear regression models? A simulation study.

    PubMed

    Durand, Casey P

    2013-01-01

    Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
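The Monte Carlo design described above can be sketched for one of the three scenarios (continuous-by-dichotomous): generate a continuous dependent variable with a known interaction effect, fit the regression, and count rejections of the interaction term at various Type 1 error rates. The effect size, sample size, and simulation counts below are illustrative assumptions, not those of the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def interaction_power(beta_int, n, alpha, n_sims=2000):
    """Estimate power to detect a continuous-by-dichotomous interaction."""
    hits = 0
    for _ in range(n_sims):
        x = rng.standard_normal(n)                # continuous predictor
        g = rng.integers(0, 2, n).astype(float)   # dichotomous moderator
        y = 0.3 * x + 0.3 * g + beta_int * x * g + rng.standard_normal(n)
        X = np.column_stack([np.ones(n), x, g, x * g])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        df = n - X.shape[1]
        sigma2 = resid @ resid / df
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[3, 3])
        tval = beta[3] / se
        if stats.t.sf(abs(tval), df) * 2 < alpha:   # two-sided test
            hits += 1
    return hits / n_sims

for alpha in (0.05, 0.10, 0.20):
    print(alpha, interaction_power(beta_int=0.2, n=200, alpha=alpha))
```

Raising α does raise the estimated power, but, as the abstract notes, it raises the false-positive rate under the null by exactly the same mechanism.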

  2. Continuous-variable quantum computing on encrypted data.

    PubMed

    Marshall, Kevin; Jacobsen, Christian S; Schäfermeier, Clemens; Gehring, Tobias; Weedbrook, Christian; Andersen, Ulrik L

    2016-12-14

    The ability to perform computations on encrypted data is a powerful tool for protecting a client's privacy, especially in today's era of cloud and distributed computing. In terms of privacy, the best solutions that classical techniques can achieve are unfortunately not unconditionally secure in the sense that they are dependent on a hacker's computational power. Here we theoretically investigate, and experimentally demonstrate with Gaussian displacement and squeezing operations, a quantum solution that achieves the security of a user's privacy using the practical technology of continuous variables. We demonstrate losses of up to 10 km both ways between the client and the server and show that security can still be achieved. Our approach offers a number of practical benefits (from a quantum perspective) that could one day allow the potential widespread adoption of this quantum technology in future cloud-based computing networks.

  3. Continuous-variable quantum computing on encrypted data

    PubMed Central

    Marshall, Kevin; Jacobsen, Christian S.; Schäfermeier, Clemens; Gehring, Tobias; Weedbrook, Christian; Andersen, Ulrik L.

    2016-01-01

    The ability to perform computations on encrypted data is a powerful tool for protecting a client's privacy, especially in today's era of cloud and distributed computing. In terms of privacy, the best solutions that classical techniques can achieve are unfortunately not unconditionally secure in the sense that they are dependent on a hacker's computational power. Here we theoretically investigate, and experimentally demonstrate with Gaussian displacement and squeezing operations, a quantum solution that achieves the security of a user's privacy using the practical technology of continuous variables. We demonstrate losses of up to 10 km both ways between the client and the server and show that security can still be achieved. Our approach offers a number of practical benefits (from a quantum perspective) that could one day allow the potential widespread adoption of this quantum technology in future cloud-based computing networks. PMID:27966528

  4. Continuous-variable quantum computing on encrypted data

    NASA Astrophysics Data System (ADS)

    Marshall, Kevin; Jacobsen, Christian S.; Schäfermeier, Clemens; Gehring, Tobias; Weedbrook, Christian; Andersen, Ulrik L.

    2016-12-01

    The ability to perform computations on encrypted data is a powerful tool for protecting a client's privacy, especially in today's era of cloud and distributed computing. In terms of privacy, the best solutions that classical techniques can achieve are unfortunately not unconditionally secure in the sense that they are dependent on a hacker's computational power. Here we theoretically investigate, and experimentally demonstrate with Gaussian displacement and squeezing operations, a quantum solution that achieves the security of a user's privacy using the practical technology of continuous variables. We demonstrate losses of up to 10 km both ways between the client and the server and show that security can still be achieved. Our approach offers a number of practical benefits (from a quantum perspective) that could one day allow the potential widespread adoption of this quantum technology in future cloud-based computing networks.

  5. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis

    DOE PAGES

    Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.

    2017-10-13

The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present here a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis—that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.

  6. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.

The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present here a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis—that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.

  7. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis.

    PubMed

    Sakhanenko, Nikita A; Kunert-Graf, James; Galas, David J

    2017-12-01

The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present here a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis—that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. We illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.

  8. Timing at peak force may be the hidden target controlled in continuation and synchronization tapping.

    PubMed

    Du, Yue; Clark, Jane E; Whitall, Jill

    2017-05-01

Timing control, such as producing movements at a given rate or synchronizing movements to an external event, has been studied through a finger-tapping task where timing is measured at the initial contact between finger and tapping surface or the point when a key is pressed. However, the point of peak force occurs after the time registered at the tapping surface and is thus a less obvious but still important event during finger tapping. Here, we compared the time at initial contact with the time at peak force as participants tapped their finger on a force sensor at a given rate after the metronome was turned off (continuation task) or in synchrony with the metronome (sensorimotor synchronization task). We found that, in the continuation task, timing was comparably accurate between initial contact and peak force. These two timing events also exhibited similar trial-by-trial statistical dependence (i.e., lag-one autocorrelation). However, the central clock variability was lower at peak force than at initial contact. In the synchronization task, timing control at peak force appeared to be less variable and more accurate than that at initial contact. In addition to lower central clock variability, the mean synchronization error (SE) magnitude at peak force (SEP) was around zero, while the SE at initial contact (SEC) was negative. Although SEC and SEP demonstrated the same trial-by-trial statistical dependence, we found that participants adjusted the time of tapping to correct SEP, but not SEC, toward zero. These results suggest that timing at peak force is a meaningful target of timing control, particularly in synchronization tapping. This may explain why SE at initial contact is typically negative, as widely observed in the preexisting literature.

  9. Measure-valued solutions to nonlocal transport equations on networks

    NASA Astrophysics Data System (ADS)

    Camilli, Fabio; De Maio, Raul; Tosin, Andrea

    2018-06-01

    Aiming to describe traffic flow on road networks with long-range driver interactions, we study a nonlinear transport equation defined on an oriented network where the velocity field depends not only on the state variable but also on the distribution of the population. We prove existence, uniqueness and continuous dependence results of the solution intended in a suitable measure-theoretic sense. We also provide a representation formula in terms of the push-forward of the initial and boundary data along the network and discuss an explicit example of nonlocal velocity field fitting our framework.

  10. Smooth time-dependent receiver operating characteristic curve estimators.

    PubMed

    Martínez-Camblor, Pablo; Pardo-Fernández, Juan Carlos

    2018-03-01

The receiver operating characteristic curve is a popular graphical method often used to study the diagnostic capacity of continuous (bio)markers. When the considered outcome is a time-dependent variable, two main extensions have been proposed: the cumulative/dynamic receiver operating characteristic curve and the incident/dynamic receiver operating characteristic curve. In both cases, the main problem for developing appropriate estimators is the estimation of the joint distribution of the variables time-to-event and marker. As usual, different approximations lead to different estimators. In this article, the authors explore the use of a bivariate kernel density estimator which accounts for censored observations in the sample and produces smooth estimators of the time-dependent receiver operating characteristic curves. The performance of the resulting cumulative/dynamic and incident/dynamic receiver operating characteristic curves is studied by means of Monte Carlo simulations. Additionally, the influence of the choice of the required smoothing parameters is explored. Finally, two real applications are considered. An R package is also provided as a complement to this article.

  11. Unequal-Strength Source zROC Slopes Reflect Criteria Placement and Not (Necessarily) Memory Processes

    ERIC Educational Resources Information Center

    Starns, Jeffrey J.; Pazzaglia, Angela M.; Rotello, Caren M.; Hautus, Michael J.; Macmillan, Neil A.

    2013-01-01

    Source memory zROC slopes change from below 1 to above 1 depending on which source gets the strongest learning. This effect has been attributed to memory processes, either in terms of a threshold source recollection process or changes in the variability of continuous source evidence. We propose 2 decision mechanisms that can produce the slope…

  12. Admissions and Plebe Year Data as Indicators of Academic Success in Engineering Majors at the United States Naval Academy

    DTIC Science & Technology

    2002-06-01

[Extraction residue: fragments of the thesis variable codebook (e.g., marital status codes such as 3 = divorced; number of dependents as a continuous variable; whether job experience was related to the college program), followed by a distribution list of names and addresses; the full table is not recoverable.]

  13. Finite element modeling of diffusion and partitioning in biological systems: the infinite composite medium problem.

    PubMed

    Missel, P J

    2000-01-01

    Four methods are proposed for modeling diffusion in heterogeneous media where diffusion and partition coefficients take on differing values in each subregion. The exercise was conducted to validate finite element modeling (FEM) procedures in anticipation of modeling drug diffusion with regional partitioning into ocular tissue, though the approach can be useful for other organs, or for modeling diffusion in laminate devices. Partitioning creates a discontinuous value in the dependent variable (concentration) at an intertissue boundary that is not easily handled by available general-purpose FEM codes, which allow for only one value at each node. The discontinuity is handled using a transformation on the dependent variable based upon the region-specific partition coefficient. Methods were evaluated by their ability to reproduce a known exact result, for the problem of the infinite composite medium (Crank, J. The Mathematics of Diffusion, 2nd ed. New York: Oxford University Press, 1975, pp. 38-39.). The most physically intuitive method is based upon the concept of chemical potential, which is continuous across an interphase boundary (method III). This method makes the equation of the dependent variable highly nonlinear. This can be linearized easily by a change of variables (method IV). Results are also given for a one-dimensional problem simulating bolus injection into the vitreous, predicting time disposition of drug in vitreous and retina.
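The change-of-variables idea of method IV can be illustrated in a 1D steady-state setting: since chemical potential is continuous across the interface, u = c/K is continuous even though the concentration c = K·u jumps, so a standard one-value-per-node solver can work in u. A minimal finite-difference sketch (the grid and the D and K values are illustrative assumptions, and this is not the paper's FEM formulation):

```python
import numpy as np

# 1D steady-state diffusion across two media with partitioning:
# c jumps at the interface (c1/K1 = c2/K2), but u = c/K is continuous,
# so we solve for u with one value per node.
n = 41                              # grid nodes on [0, 1]
dx = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)

K = np.where(x < 0.5, 1.0, 3.0)     # partition coefficient per region
D = np.where(x < 0.5, 1.0, 0.5)     # diffusion coefficient per region
a = D * K                           # effective conductivity for u

# Face conductivities; the harmonic mean handles the material jump.
a_face = 2.0 * a[:-1] * a[1:] / (a[:-1] + a[1:])

# Assemble the tridiagonal system; Dirichlet boundary conditions on u.
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = 1.0, 0.0              # u(0) = 1, u(1) = 0
for i in range(1, n - 1):
    A[i, i - 1] = a_face[i - 1]
    A[i, i] = -(a_face[i - 1] + a_face[i])
    A[i, i + 1] = a_face[i]

u = np.linalg.solve(A, b)
c = K * u                           # physical concentration, with jump

# At steady state the flux -D*K*du/dx is the same in both regions.
flux_left = -a[0] * (u[1] - u[0]) / dx
flux_right = -a[-1] * (u[-1] - u[-2]) / dx
print(flux_left, flux_right)
```

The solved u profile is continuous while c shows the partition-induced jump at the interface, and the computed flux is identical on both sides, as the exact composite-medium solution requires.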

  14. Temporal Variability of Daily Personal Magnetic Field Exposure Metrics in Pregnant Women

    PubMed Central

    Lewis, Ryan C.; Evenson, Kelly R.; Savitz, David A.; Meeker, John D.

    2015-01-01

    Recent epidemiology studies of power-frequency magnetic fields and reproductive health have characterized exposures using data collected from personal exposure monitors over a single day, possibly resulting in exposure misclassification due to temporal variability in daily personal magnetic field exposure metrics, but relevant data in adults are limited. We assessed the temporal variability of daily central tendency (time-weighted average, median) and peak (upper percentiles, maximum) personal magnetic field exposure metrics over seven consecutive days in 100 pregnant women. When exposure was modeled as a continuous variable, central tendency metrics had substantial reliability, whereas peak metrics had fair (maximum) to moderate (upper percentiles) reliability. The predictive ability of a single day metric to accurately classify participants into exposure categories based on a weeklong metric depended on the selected exposure threshold, with sensitivity decreasing with increasing exposure threshold. Consistent with the continuous measures analysis, sensitivity was higher for central tendency metrics than for peak metrics. If there is interest in peak metrics, more than one day of measurement is needed over the window of disease susceptibility to minimize measurement error, but one day may be sufficient for central tendency metrics. PMID:24691007
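The reliability comparison in the abstract above rests on the intraclass correlation of repeated daily metrics. A toy simulation sketch using the one-way random-effects ICC (Shrout-Fleiss ICC(1,1)); the variance components are invented to mimic a stable central-tendency metric versus a noisy peak metric, not estimated from the study:

```python
import numpy as np

rng = np.random.default_rng(3)

def icc_oneway(data):
    """ICC(1,1) from a subjects-by-days matrix via one-way random-effects ANOVA."""
    n, k = data.shape
    grand = data.mean()
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

n_subj, n_days = 100, 7
subject = rng.normal(0.0, 1.0, (n_subj, 1))   # stable between-person level

# Central-tendency metric: dominated by stable between-person differences.
twa = subject + rng.normal(0.0, 0.5, (n_subj, n_days))
# Peak metric: dominated by day-to-day fluctuations.
peak = 0.4 * subject + rng.normal(0.0, 1.0, (n_subj, n_days))

print(icc_oneway(twa), icc_oneway(peak))
```

With these variance components the simulated central-tendency metric reproduces "substantial" reliability and the peak metric only "fair to moderate" reliability, the qualitative pattern the study reports.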

  15. Continuous salinity and temperature data from San Francisco estuary, 1982-2002: Trends and the salinity-freshwater inflow relationship

    USGS Publications Warehouse

    Shellenbarger, G.G.; Schoellhamer, D.H.

    2011-01-01

The U.S. Geological Survey and other federal and state agencies have been collecting continuous temperature and salinity data, two critical estuarine habitat variables, throughout San Francisco estuary for over two decades. Although this dynamic, highly variable system has been well studied, many questions remain relating to the effects of freshwater inflow and other physical and biological linkages. This study examines up to 20 years of publicly available, continuous temperature and salinity data from 10 different San Francisco Bay stations to identify trends in temperature and salinity and quantify the salinity-freshwater inflow relationship. Several trends in the salinity and temperature records were identified, although the high degree of daily and interannual variability confounds the analysis. In addition, freshwater inflow to the estuary has a range of effects on salinity, from -0.0020 to -0.0096 (m3 s-1)-1 of discharge, depending on location in the estuary and the timescale of the analyzed data. Finally, we documented that changes in freshwater inflow to the estuary that are within the range of typical management actions can affect bay-wide salinities by 0.6-1.4. This study reinforces the idea that multidecadal records are needed to distinguish trends from decadal changes in water management and climate and, therefore, are extremely valuable. © 2011 Coastal Education & Research Foundation.

  16. Entanglement transfer from two-mode continuous variable SU(2) cat states to discrete qubits systems in Jaynes-Cummings Dimers

    PubMed Central

    Ran, Du; Hu, Chang-Sheng; Yang, Zhen-Biao

    2016-01-01

We study the entanglement transfer from a two-mode continuous variable system (initially in a two-mode SU(2) cat state) to a couple of discrete two-state systems (initially in an arbitrary mixed state), by use of the resonant Jaynes-Cummings (JC) interaction. We first quantitatively connect the entanglement transfer to the non-Gaussianity of the two-mode SU(2) cat states and find a positive correlation between them. We then investigate the behaviors of the entanglement transfer and find that it depends on the initial state of the discrete systems. We also find that the largest possible value of the transferred entanglement exhibits a variety of behaviors for different photon numbers as well as for different phase angles of the two-mode SU(2) cat states. We finally consider the influence of noise on the transferred entanglement. PMID:27553881

  17. Global solutions to random 3D vorticity equations for small initial data

    NASA Astrophysics Data System (ADS)

    Barbu, Viorel; Röckner, Michael

    2017-11-01

    One proves the existence and uniqueness in (L^p(R^3))^3, 3/2 < p < 2, of a global mild solution to random vorticity equations associated to stochastic 3D Navier-Stokes equations with linear multiplicative Gaussian noise of convolution type, for sufficiently small initial vorticity. This resembles some earlier deterministic results of T. Kato [16] and is obtained by treating the equation in vorticity form and reducing the latter to a random nonlinear parabolic equation. The solution has maximal regularity in the spatial variables and is weakly continuous in (L^3 ∩ L^{3p/(4p-6)})^3 with respect to the time variable. Furthermore, we obtain the pathwise continuous dependence of solutions with respect to the initial data. In particular, one gets a locally unique solution of the 3D stochastic Navier-Stokes equation in vorticity form up to some explosion stopping time τ adapted to the Brownian motion.

  18. Cardiorespiratory interaction with continuous positive airway pressure

    PubMed Central

    Bonafini, Sara; Fava, Cristiano; Steier, Joerg

    2018-01-01

    The treatment of choice for obstructive sleep apnoea (OSA) is continuous positive airway pressure (CPAP) therapy. Since its introduction into clinical practice, CPAP has been used in various clinical conditions with variable and heterogeneous outcomes. In addition to its well-known effects on the upper airway, CPAP impacts intrathoracic pressures, haemodynamics and blood pressure (BP) control. However, the short- and long-term effects of CPAP therapy depend on multiple variables, including symptoms, the underlying condition, the pressure used, treatment acceptance, compliance and usage. CPAP can alter long-term cardiovascular risk in patients with cardiorespiratory conditions. Furthermore, the effect of CPAP on the awake patient differs from its effect on the sleeping patient, and this might contribute to discomfort and removal of the user interface. The purpose of this review is to highlight the physiological impact of CPAP on the cardiorespiratory system, including short-term benefits and long-term outcomes. PMID:29445529

  19. Evaluation of continuous oxydesulfurization processes. Final technical report, September 1979-July 1981

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, J.F.; Wever, D.M.

    1981-07-01

    Three processes developed by the Pittsburgh Energy Technology Center (PETC), Ledgemont Laboratories, and Ames Laboratories for the oxydesulfurization of coal were evaluated in continuous processing equipment designed, built, and/or adapted for the purpose at the DOE-owned Multi-Use Fuels and Energy Processes Test Plant (MEP) located at TRW's Capistrano Test Site in California. The three processes differed primarily in the chemical additives (none, sodium carbonate, or ammonia) fed to the 20% to 40% coal/water slurries, and in the oxygen content of the feed gas stream. Temperature, pressure, residence time, flow rates, slurry concentration and stirrer speed were the other primary independent variables. The amount of organic sulfur removed, total sulfur removed and the Btu recovery were the primary dependent variables. Evaluation of the data presented was not part of the test effort.

  20. Continuous-variable measurement-device-independent quantum key distribution: Composable security against coherent attacks

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano

    2018-05-01

    We present a rigorous security analysis of continuous-variable measurement-device-independent quantum key distribution (CV MDI QKD) in a finite-size scenario. The security proof is obtained in two steps: by first assessing the security against collective Gaussian attacks, and then extending to the most general class of coherent attacks via the Gaussian de Finetti reduction. Our result combines recent state-of-the-art security proofs for CV QKD with findings about min-entropy calculus and parameter estimation. In doing so, we improve the finite-size estimate of the secret key rate. Our conclusions confirm that CV MDI protocols allow for high rates on the metropolitan scale, and may achieve a nonzero secret key rate against the most general class of coherent attacks after 10^7-10^9 quantum signal transmissions, depending on loss and noise, and on the required level of security.

  1. Examination of Relationships among Organizational Characteristics and Organizational Commitment of Nurses in Western and Eastern Region of Nepal.

    PubMed

    Joshi, A S; Namba, M; Pokharela, T

    2015-01-01

    The objective of this study is to identify relationships between three components of organizational commitment and organizational characteristics of nurses in the western and eastern regions of Nepal. A self-administered questionnaire was used to collect data from 310 nurses currently working at various hospitals in the eastern and western regions of the country. The questionnaire included three sections, namely: 1) personal characteristics, 2) organizational characteristics, and 3) an organizational commitment scale. Descriptive analysis and multiple regression analysis were performed to identify significant relationships. Of the 240 completed questionnaires, 226 were found valid for analysis. The mean age was 27.4 years. For each dependent variable (affective, continuance, and normative commitment), multiple regression analysis was performed with personal characteristics and organizational characteristics as independent variables. The independent variables were significantly related to two of the dependent variables, affective commitment and normative commitment (adjusted R2=0.24, p<0.01 and adjusted R2=0.05, p<0.01, respectively), but not to continuance commitment. Both support from the boss (β=0.138, p<0.05) and satisfaction with training (β=0.301, p<0.05) were positively and significantly associated with affective commitment. Satisfaction with training (β=0.191, p<0.05) was also positively and significantly associated with normative commitment. Since both support from the boss and the training program were positively and significantly associated with affective commitment, hospitals should encourage supervisors to provide more assistance to subordinate nurses. Moreover, hospitals should develop more training programs to keep nurses motivated.
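The analysis reported above is ordinary multiple regression with adjusted R2. A minimal sketch of that computation on synthetic data (the predictors, coefficient sizes, and responses below are invented for illustration, not the Nepal survey data):

```python
import numpy as np

# Hypothetical sketch: regress a commitment score on several predictors
# and report adjusted R^2. All values are synthetic.
rng = np.random.default_rng(1)
n = 226                                  # valid questionnaires in the study
support_boss = rng.normal(0, 1, n)       # assumed standardized predictors
training_sat = rng.normal(0, 1, n)
age = rng.normal(27.4, 4, n)
affective = 0.14 * support_boss + 0.30 * training_sat + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), support_boss, training_sat, age])
beta, *_ = np.linalg.lstsq(X, affective, rcond=None)

resid = affective - X @ beta
ss_res = resid @ resid
ss_tot = ((affective - affective.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
k = X.shape[1] - 1                       # number of predictors
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(f"adjusted R^2: {r2_adj:.3f}")
```

The adjustment penalizes R2 for the number of predictors, which is why the study reports adjusted rather than raw R2 for its three-predictor models.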

  2. Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods.

    PubMed

    Gonzalez-Navarro, Felix F; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A; Flores-Rios, Brenda L; Ibarra-Esquer, Jorge E

    2016-10-26

    Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding of their behavior is still under research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB under different operating conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows good generalization error and is consistent with the optimization.

  3. Using LDR as Sensing Element for an External Fuzzy Controller Applied in Photovoltaic Pumping Systems with Variable-Speed Drives.

    PubMed

    Maranhão, Geraldo Neves De A; Brito, Alaan Ubaiara; Leal, Anderson Marques; Fonseca, Jéssica Kelly Silva; Macêdo, Wilson Negrão

    2015-09-22

    In the present paper, a fuzzy controller applied to a Variable-Speed Drive (VSD) for use in Photovoltaic Pumping Systems (PVPS) is proposed. The fuzzy logic system (FLS) used is embedded in a microcontroller and corresponds to a proportional-derivative controller. A Light-Dependent Resistor (LDR) is used to measure, approximately, the irradiance incident on the PV array. Experimental tests are executed using an Arduino board. The experimental results show that the fuzzy controller is capable of operating the system continuously throughout the day and controlling the direct current (DC) voltage level in the VSD with a good performance.

  4. A Spatially Continuous Model of Carbohydrate Digestion and Transport Processes in the Colon

    PubMed Central

    Moorthy, Arun S.; Brooks, Stephen P. J.; Kalmokoff, Martin; Eberl, Hermann J.

    2015-01-01

    A spatially continuous mathematical model of transport processes, anaerobic digestion and microbial complexity as would be expected in the human colon is presented. The model is a system of first-order partial differential equations with a context-determined number of dependent variables and stiff, non-linear source terms. Numerical simulation of the model is used to elucidate information about the colon-microbiota complex. It is found that the composition of materials at the outflow of the model does not describe well the composition of material at other model locations, and inferences using outflow data vary according to the model reactor representation. Additionally, increased microbial complexity allows the total microbial community to withstand major system perturbations in diet and community structure. However, the distribution of strains and functional groups within the microbial community can be modified depending on perturbation length and microbial kinetic parameters. Preliminary model extensions and potential investigative opportunities using the computational model are discussed. PMID:26680208

  5. Reading Achievement: Characteristics Associated with Success and Failure: Abstracts of Doctoral Dissertations Published in "Dissertation Abstracts International," July through September 1978 (Vol. 39 Nos. 1 through 3).

    ERIC Educational Resources Information Center

    ERIC Clearinghouse on Reading and Communication Skills, Urbana, IL.

    This collection of abstracts is part of a continuing series providing information on recent doctoral dissertations. The 25 titles deal with a variety of topics, including the following: reading achievement as it relates to child dependency, the development of phonological coding, short-term memory and associative learning, variables available in…

  6. Further Results on Finite-Time Partial Stability and Stabilization. Applications to Nonlinear Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jammazi, Chaker

    2009-03-05

    The paper gives Lyapunov type sufficient conditions for partial finite-time and asymptotic stability in which some state variables converge to zero while the rest converge to constant values that possibly depend on the initial conditions. The paper then presents partially asymptotically stabilizing controllers for many nonlinear control systems for which continuous asymptotically stabilizing (in the usual sense) controllers are known not to exist.

  7. Hong-Ou-Mandel effect in terms of the temporal biphoton wave function with two arrival-time variables

    NASA Astrophysics Data System (ADS)

    Fedorov, M. V.; Sysoeva, A. A.; Vintskevich, S. V.; Grigoriev, D. A.

    2018-03-01

    The well-known Hong-Ou-Mandel effect is revisited. Two physical reasons are discussed for the effect to be less pronounced or even to disappear: differing polarizations of the photons arriving at the beamsplitter and the delay time of photons in one of the two channels. For the latter we use the concepts of biphoton frequency and temporal wave functions depending, respectively, on two continuous frequency variables of the photons and on two time variables t1 and t2 interpreted as the arrival times of the photons at the beamsplitter. Explicit expressions are found for the probability densities and total probabilities for photon pairs to be split between the two channels after the beamsplitter and to remain unsplit, when two photons appear together in one of the two channels.

  8. Breaking Down the Coercive Cycle: How Parent and Child Risk Factors Influence Real-Time Variability in Parental Responses to Child Misbehavior

    PubMed Central

    Lunkenheimer, Erika; Lichtwarck-Aschoff, Anna; Hollenstein, Tom; Kemp, Christine J.; Granic, Isabela

    2016-01-01

    Objective: Parent-child coercive cycles have been associated with both rigidity and inconsistency in parenting behavior. To explain these mixed findings, we examined real-time variability in maternal responses to children's off-task behavior to determine whether this common trigger of the coercive cycle (responding to child misbehavior) is associated with rigidity or inconsistency in parenting. We also examined the effects of risk factors for coercion (maternal hostility, maternal depressive symptoms, child externalizing problems, and dyadic negativity) on patterns of parenting. Design: Mother-child dyads (N = 96; M child age = 41 months) completed a difficult puzzle task, and observations were coded continuously for parent (e.g., directive, teaching) and child behavior (e.g., on-task, off-task). Results: Multilevel continuous-time survival analyses revealed that parenting behavior is less variable when children are off-task. However, when risk factors are higher, a different profile emerges. Combined maternal and child risk is associated with markedly lower variability in parenting behavior overall (i.e., rigidity) paired with shifts towards higher variability specifically when children are off-task (i.e., inconsistency). Dyadic negativity (i.e., episodes when children are off-task and parents engage in negative behavior) is also associated with higher parenting variability. Conclusions: Risk factors confer rigidity in parenting overall, but in moments when higher-risk parents must respond to child misbehavior, their parenting becomes more variable, suggesting inconsistency and ineffectiveness. This context-dependent shift in parenting behavior may help explain prior mixed findings and offer new directions for family interventions designed to reduce coercive processes. PMID:28190978

  9. Heralded processes on continuous-variable spaces as quantum maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreyrol, Franck; Spagnolo, Nicolò; Blandino, Rémi

    2014-12-04

    Heralded processes, which only work when a measurement on part of the system gives the desired result, are particularly interesting for continuous variables. They permit non-Gaussian transformations that are necessary for several continuous-variable quantum information tasks. However, while maps and quantum process tomography are commonly used to describe quantum transformations in discrete-variable spaces, they are much rarer in the continuous-variable domain. Moreover, no convenient tool for representing maps in a way better adapted to the particularities of continuous variables has yet been explored. In this paper we try to fill this gap by presenting such a tool.

  10. Nicotine dependence, PTSD symptoms, and depression proneness among male and female smokers.

    PubMed

    Thorndike, Frances P; Wernicke, Rachel; Pearlman, Michelle Y; Haaga, David A F

    2006-02-01

    Several studies have linked posttraumatic stress disorder with heavy smoking. It is not known to what extent this association is specific, as opposed to being a function of a joint association of PTSD and heavy smoking with a third variable such as depression proneness. In a cross-sectional study of 157 current regular smokers, severity of nicotine dependence (but not cigarettes smoked per day) was positively correlated with total PTSD symptoms, hyperarousal symptoms, and avoidance symptoms. These correlations were not eliminated by controlling statistically for depression vulnerability, whether it was measured on a continuous self-rating scale or on the basis of interview-diagnosed history of major depression. The association between PTSD and nicotine dependence was stronger among men than among women.

  11. Determination of riverbank erosion probability using Locally Weighted Logistic Regression

    NASA Astrophysics Data System (ADS)

    Ioannidou, Elena; Flori, Aikaterini; Varouchakis, Emmanouil A.; Giannakis, Georgios; Vozinaki, Anthi Eirini K.; Karatzas, George P.; Nikolaidis, Nikolaos

    2015-04-01

    Riverbank erosion is a natural geomorphologic process that affects the fluvial environment. The most important issue concerning riverbank erosion is the identification of the vulnerable locations. An alternative to the usual hydrodynamic models for predicting vulnerable locations is to quantify the probability of erosion occurrence. This can be achieved by identifying the underlying relations between riverbank erosion and the geomorphological or hydrological variables that prevent or stimulate erosion. Thus, riverbank erosion can be modelled by a regression using independent variables that are considered to affect the erosion process. The impact of such variables may vary spatially; therefore, a non-stationary regression model is preferred over a stationary equivalent. Locally Weighted Regression (LWR) is proposed as a suitable choice. This method can be extended to predict the binary presence or absence of erosion based on a series of local independent variables by using the logistic regression model; the combination is referred to as Locally Weighted Logistic Regression (LWLR). Logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable (e.g. a binary response) based on one or more predictor variables. Combining it with LWR assigns local weights to the observations, allowing model parameters to vary over space and thereby reflect spatial heterogeneity. The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. The fitted model then predicts success or failure of a given binary variable (e.g. erosion presence or absence) for any value of the independent variables.
    The erosion occurrence probability can be calculated in conjunction with the model deviance for the independent variables tested. The most straightforward measure of goodness of fit is the G statistic. It is a simple and effective way to evaluate the efficiency of the logistic regression model and the reliability of each independent variable. The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. Two datasets of riverbank slope, river cross-section width and indications of erosion were available for the analysis (12 and 8 locations). Two different types of spatial dependence functions, exponential and tricubic, were examined to determine the local spatial dependence of the independent variables at the measurement locations. The results show a significant improvement when the tricubic function is applied, as the erosion probability is accurately predicted at all eight validation locations. Results for the model deviance show that cross-section width is more important than bank slope in the estimation of erosion probability along the Koiliaris riverbanks. The proposed statistical model is a useful tool that quantifies the erosion probability along the riverbanks and can be used to assist in managing erosion and flooding events. Acknowledgements: This work is part of an ongoing THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers). The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
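The LWLR procedure described above can be sketched compactly: weight the observations around a query point with a tricube kernel, then fit a logistic model by weighted gradient ascent. A minimal, hypothetical 1-D illustration (synthetic data, not the Koiliaris dataset or the authors' implementation):

```python
import numpy as np

def tricube(d, bandwidth):
    """Tricube kernel weight for distances d from the query point."""
    u = np.clip(np.abs(d) / bandwidth, 0.0, 1.0)
    return (1.0 - u ** 3) ** 3

def lwlr_predict(x0, X, y, bandwidth=2.0, iters=500, lr=0.1):
    """Fit a logistic model weighted around x0; return P(y = 1 | x0)."""
    w = tricube(X - x0, bandwidth)             # local weights
    Z = np.column_stack([np.ones_like(X), X])  # intercept + predictor
    beta = np.zeros(2)
    for _ in range(iters):                     # weighted gradient ascent
        p = 1.0 / (1.0 + np.exp(-Z @ beta))
        beta += lr * (Z.T @ (w * (y - p))) / len(y)
    return 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x0)))

# Synthetic example: the predictor stands in for cross-section width,
# the response for erosion presence (1) or absence (0).
rng = np.random.default_rng(2)
X = rng.uniform(0, 10, 120)
y = (X + rng.normal(0, 1, 120) > 5).astype(float)

p_low = lwlr_predict(1.0, X, y)   # erosion unlikely near this query point
p_high = lwlr_predict(9.0, X, y)  # erosion likely near this query point
print(round(p_low, 2), round(p_high, 2))
```

Because the kernel reweights the data at every query point, the fitted coefficients vary over space, which is the non-stationarity the abstract argues for.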

  12. Continuous variables logic via coupled automata using a DNAzyme cascade with feedback.

    PubMed

    Lilienthal, S; Klein, M; Orbach, R; Willner, I; Remacle, F; Levine, R D

    2017-03-01

    The concentration of molecules can be changed by chemical reactions and thereby offers a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. We discuss how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process. The logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with feedback we endow the logic gates with a built-in memory, because their output then depends on the input and also on the present state of the system. Technically such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates become the coefficients in the power series.
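The identification of the AND gate with a bimolecular process can be illustrated numerically: in A + B -> C the rate k[A][B] is appreciable only when both input concentrations are nonzero, so product accumulates only when both inputs are "high". A toy Euler-integration sketch with an illustrative rate constant (not the DNAzyme kinetics of the paper):

```python
# Toy sketch of a bimolecular AND gate: A + B -> C with rate k*[A]*[B].
# Rate constant, concentrations, and time step are illustrative only.

def and_gate(a0, b0, k=1.0, dt=0.01, steps=1000):
    """Euler-integrate A + B -> C; return the product concentration [C]."""
    a, b, c = a0, b0, 0.0
    for _ in range(steps):
        rate = k * a * b    # bimolecular rate law
        a -= rate * dt
        b -= rate * dt
        c += rate * dt
    return c

print(and_gate(1.0, 1.0))   # both inputs present -> appreciable product
print(and_gate(1.0, 0.0))   # one input absent -> no product
```

The continuous output [C] plays the role of a graded truth value, which is the sense in which the paper speaks of logic on continuous variables.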

  13. A comparison of independent depression and substance-induced depression in cannabis-, cocaine-, and opioid-dependent treatment seekers.

    PubMed

    Dakwar, Elias; Nunes, Edward V; Bisaga, Adam; Carpenter, Kenneth C; Mariani, John P; Sullivan, Maria A; Raby, Wilfrid N; Levin, Frances R

    2011-01-01

    Depressive symptoms often coexist with substance use disorders (SUDs). The DSM-IV identifies two distinct categories for depression coexisting with SUDs: independent depression and substance-induced depression. While this distinction has important therapeutic and prognostic implications, it remains difficult to make in clinical practice; the differentiation is often guided by chronological and symptom-severity criteria that patients may be unable to provide precisely. Furthermore, it is unclear whether the various substances commonly abused - cannabis, cocaine, and opioids - are equally associated with the two types of depression. Predictors, associations, and other markers may be helpful in guiding the diagnostic process. We therefore examined the differences between cannabis-, cocaine-, and opioid-dependent individuals contending with independent depression and those contending with substance-induced depression with regard to several variables, hypothesizing that independent depression is more common in females and that it is associated with higher symptom severity and psychiatric comorbidity. Cocaine-, cannabis-, and/or opioid-dependent, treatment-seeking individuals underwent a structured clinical interview for DSM-IV-TR disorders after providing consent at our clinical research site; those with co-existing primary depression or substance-induced depression diagnoses were given further questionnaires and were entered into this analysis (n = 242). Pair-wise comparisons were conducted between the groups classified as independent versus substance-induced depression with 2-by-2 tables and chi-square tests for dichotomous independent variables, and t-tests for continuous variables. Binomial logistic regression was performed to ascertain which of the variables were significant predictors. Women were more likely than men to have independent depression (p < .005).
    Cannabis dependence was highly associated with independent depression (p < .001), while cocaine dependence was highly associated with substance-induced depression (p < .05). Independent depression was associated with higher Hamilton depression scale scores (16 vs. 10, p < .005) and was more highly associated with a comorbid diagnosis of posttraumatic stress disorder (p < .05). Cannabis dependence (p < .001) and female gender (p < .05) were highly significant predictors of major depression specifically. Gender, cannabis dependence, psychiatric severity, and psychiatric comorbidity have variable, statistically significant associations with independent and substance-induced depression, and may be helpful in guiding the diagnostic process. © American Academy of Addiction Psychiatry.
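The pair-wise comparisons described above (chi-square on 2-by-2 tables for dichotomous variables, t-tests for continuous ones) can be sketched with standard library calls; the counts and score distributions below are invented for illustration, not the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical 2-by-2 table: gender (rows) by depression type (columns).
table = np.array([[70, 40],    # women: independent vs. substance-induced
                  [60, 72]])   # men
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Hypothetical continuous comparison: Hamilton scores by depression type
# (the study reported means of 16 vs. 10; spreads and ns are invented).
rng = np.random.default_rng(3)
ham_independent = rng.normal(16, 5, 100)
ham_induced = rng.normal(10, 5, 142)
t, p_t = stats.ttest_ind(ham_independent, ham_induced)
print(p_chi, p_t)
```

Small p-values from either test would correspond to the significant group differences the abstract reports.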

  14. Modelling Soil-Landscapes in Coastal California Hills Using Fine Scale Terrestrial Lidar

    NASA Astrophysics Data System (ADS)

    Prentice, S.; Bookhagen, B.; Kyriakidis, P. C.; Chadwick, O.

    2013-12-01

    Digital elevation models (DEMs) are the dominant input to spatially explicit digital soil mapping (DSM) efforts due to their increasing availability and the tight coupling between topography and soil variability. Accurate characterization of this coupling depends on DEM spatial resolution and soil sampling density, both of which may limit analyses. For example, DEM resolution may be too coarse to accurately reflect scale-dependent soil properties, yet downscaling introduces artifactual uncertainty unrelated to deterministic or stochastic soil processes. We tackle these limitations through a DSM effort that couples moderately high-density soil sampling with a very fine-scale terrestrial lidar dataset (20 cm) implemented in a semiarid rolling-hillslope domain where terrain variables change rapidly but smoothly over short distances. Our guiding hypothesis is that in this diffusion-dominated landscape, soil thickness is readily predicted by continuous terrain attributes coupled with catenary hillslope segmentation. We choose soil thickness as our keystone dependent variable for its geomorphic and hydrologic significance and its tendency to be a primary input to synthetic ecosystem models. In defining catenary hillslope position we adapt a logical rule-set approach that parses common terrain derivatives of curvature and specific catchment area into discrete landform elements (LE). Variograms and curvature-area plots are used to distill domain-scale terrain thresholds from the short-range-order noise characteristic of very fine-scale spatial data. The revealed spatial thresholds are used to condition LE rule-set inputs, rendering a catenary LE map that leverages the robustness of fine-scale terrain data to create a generalized interpretation of soil geomorphic domains. Preliminary regressions show that continuous terrain variables alone (curvature, specific catchment area) only partially explain soil thickness, and only in a subset of soils.
    For example, at spatial scales up to 20, curvature explains 40% of soil thickness variance among soils <3 m deep, while soils >3 m deep show no clear relation to curvature. To further demonstrate our geomorphic segmentation approach, we apply it to DEM domains where diffusion processes are less dominant than in our primary study area. Figure: classified landform map derived from fine-scale terrestrial lidar; color classes depict hydrogeomorphic process domains in zero-order watersheds.

  15. Hydration level is an internal variable for computing motivation to obtain water rewards in monkeys.

    PubMed

    Minamimoto, Takafumi; Yamada, Hiroshi; Hori, Yukiko; Suhara, Tetsuya

    2012-05-01

    In the process of motivation to engage in a behavior, valuation of the expected outcome involves not only external variables (i.e., incentives) but also internal variables (i.e., drive). However, the exact neural mechanism that integrates these variables for the computation of motivational value remains unclear. Moreover, the signal of physiological need, which serves as the primary internal variable for this computation, remains to be identified. Concerning fluid rewards, the osmolality level, one of the physiological indices of thirst, may be an internal variable for valuation, since an increase in the osmolality level induces drinking behavior. Here, to examine the relationship between osmolality and the motivational value of a water reward, we repeatedly measured the blood osmolality level while 2 monkeys continuously performed an instrumental task until they spontaneously stopped. We found that, as the total amount of water earned increased, the osmolality level progressively decreased (i.e., the hydration level increased) in an individual-dependent manner. There was a significant negative correlation between the error rate of the task (the proportion of trials with low motivation) and the osmolality level. We also found that the increase in the error rate with reward accumulation can be well explained by a formula describing the changes in the osmolality level. These results provide a biologically supported computational formula for the motivational value of a water reward that depends on the hydration level, enabling us to identify the neural mechanism that integrates internal and external variables.

  16. Let me hear of your mercy in the mourning: forgiveness, grief, and continuing bonds.

    PubMed

    Gassin, Elizabeth A; Lengel, Gregory J

    2014-01-01

    Clarity about the utility of continuing bonds (CB) remains elusive in the research. In 2 different correlational studies, the authors explored the relationship between CB and 2 other variables: 1 representing mental health (forgiveness of the deceased) and the other representing psychological distress (prolonged grief). Although researchers have addressed the latter relationship in the literature, assessing the relationship between CB and forgiveness has not been undertaken. Results suggest that forgiveness in general, and affective aspects of forgiveness in particular, predict psychological forms of CB. Results related to grief depended on how CB was assessed. These findings provide evidence of the relative health of certain types of relationship with deceased persons and also suggest that forgiveness interventions may be a way of promoting such healthy bonds.

  17. Five-city economics of a solar hot-water-system

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Report projects energy savings and system costs for five sites using analysis of actual solar energy installation performance in Togus, Maine. Maine system supplies 75 percent of hot water needed for single-family residence; economic payback period is 19 years. Benefits for all sites depend on maintenance or decrease of initial investment required and continuing increase in cost of conventional energy. Report includes analysis weighing potential changes in variables used to evaluate system profitability.

  18. The vertical dependence in the horizontal variability of salinity and temperature at the ocean surface

    NASA Astrophysics Data System (ADS)

    Asher, W.; Drushka, K.; Jessup, A. T.; Clark, D.

    2016-02-01

    Satellite-mounted microwave radiometers measure sea surface salinity (SSS) as an area-averaged quantity in the top centimeter of the ocean over the footprint of the instrument. If the horizontal variability in SSS is large inside this footprint, sub-grid-scale variability in SSS can affect comparison of the satellite-retrieved SSS with in situ measurements. Understanding the magnitude of horizontal variability in SSS over spatial scales relevant to the satellite measurements is therefore important. Horizontal variability of SSS at the ocean surface can be studied in situ using data recorded by thermosalinographs (TSGs) that sample water from a depth of a few meters. However, it is possible that measurements made at this depth underestimate the horizontal variability at the surface, because salinity and temperature can become vertically stratified in a very near-surface layer due to the effects of rain, solar heating, and evaporation. This vertical stratification could prevent horizontal gradients from propagating to the sampling depths of ship-mounted TSGs. This presentation will discuss measurements made using an underway salinity profiling system installed on the R/V Thomas Thompson, which made continuous measurements of SSS and SST in the Pacific Ocean. The system samples at nominal depths of 2 m, 3 m, and 5 m, allowing the depth dependence of the horizontal variability in SSS and SST to be measured. Horizontal variability in SST is largest at low wind speeds during daytime, when a diurnal warm layer forms. In contrast, the diurnal signal in the variability of SSS was smaller, with variability being slightly larger at night. When studied as a function of depth, the results show that over 100-km scales, the horizontal variability in both SSS and SST at a depth of 2 m is approximately a factor of 4 higher than the variability at 5 m.

  19. [Completeness of notifications of violence perpetrated against adolescents in the State of Pernambuco, Brazil].

    PubMed

    Santos, Taciana Mirella Batista Dos; Cardoso, Mirian Domingos; Pitangui, Ana Carolina Rodarti; Santos, Yasmim Gabriella Cardoso; Paiva, Saul Martins; Melo, João Paulo Ramos; Silva, Lygia Maria Pereira

    2016-12-01

    The scope of this study was to analyze the trend of completeness of the data on violence perpetrated against adolescents registered in the State of Pernambuco between 2009 and 2012. This involved a cross-sectional survey of 5,259 adolescents, who were the victims of violence reported in SINAN-VIVA of the Pernambuco State Health Department. Simple linear regression was used to investigate the trend of completeness of the variables. The percentages of completeness were considered to be dependent variables (Y) and the number of years as independent variables (X). The results show a significant increase of 204% in the number of notifications. However, of the 34 variables analyzed, 27 (79.4%) showed a stationary trend, 6 (17.6%) a downward trend, and only one variable (2.9%) an upward trend. Completeness was considered 'Very Poor' for the variables: Education (47.3%), Full Address (21.3%), Occurrence Time (38%) and Use of Alcohol by the Attacker (47%). Therefore, despite the large increase in the number of notifications, data quality continued to be compromised, hampering a more realistic analysis of this group.
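    As a sketch of the trend analysis described above, the completeness percentage of one variable (Y) can be regressed on year (X), with the sign of the slope classifying the trend as upward, downward, or stationary. The completeness values below are hypothetical illustrations, not the study's data.

    ```python
    import numpy as np

    # Hypothetical completeness percentages for one variable over 2009-2012
    years = np.array([2009, 2010, 2011, 2012], dtype=float)
    completeness = np.array([52.0, 50.5, 48.9, 47.3])  # illustrative values only

    # Simple linear regression: completeness (Y) on year (X), year centered
    X = np.column_stack([np.ones_like(years), years - years.mean()])
    beta, *_ = np.linalg.lstsq(X, completeness, rcond=None)
    slope = beta[1]  # percentage points of completeness per year

    trend = "downward" if slope < 0 else "upward" if slope > 0 else "stationary"
    print(f"slope = {slope:.2f} pp/year -> {trend} trend")
    ```

    In the study, this fit was repeated for each of the 34 variables, and the slope's sign and significance determined the trend classification.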

  20. Parameter Estimation with Almost No Public Communication for Continuous-Variable Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano

    2018-06-01

    One crucial step in any quantum key distribution (QKD) scheme is parameter estimation. In a typical QKD protocol the users have to sacrifice part of their raw data to estimate the parameters of the communication channel as, for example, the error rate. This introduces a trade-off between the secret key rate and the accuracy of parameter estimation in the finite-size regime. Here we show that continuous-variable QKD is not subject to this constraint as the whole raw keys can be used for both parameter estimation and secret key generation, without compromising the security. First, we show that this property holds for measurement-device-independent (MDI) protocols, as a consequence of the fact that in a MDI protocol the correlations between Alice and Bob are postselected by the measurement performed by an untrusted relay. This result is then extended beyond the MDI framework by exploiting the fact that MDI protocols can simulate device-dependent one-way QKD with arbitrarily high precision.

  1. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    NASA Astrophysics Data System (ADS)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with a given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of the mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of the means, variances, and ACFs of the continuous and discrete components, respectively. To achieve full consistency between variables at the finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study the rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of the main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to rainfall time series from the real world is shown as a proof of concept.
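    The mixed-type construction can be illustrated with a minimal Monte Carlo check: for an intermittent process Z = I·X with an independent occurrence indicator I ~ Bernoulli(p) and a positive rainfall depth X, the analytical moments are E[Z] = p·E[X] and Var[Z] = p·Var[X] + p(1−p)·E[X]². The distributions and parameter values below are illustrative assumptions, not the paper's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    p_wet, mu, sigma = 0.3, 2.0, 1.0   # illustrative wet probability and depth moments

    # Mixed-type process: Z = I * X, with I ~ Bernoulli(p_wet) for intermittency
    # and X a positive depth (gamma chosen here to have mean mu, variance sigma^2)
    n = 1_000_000
    I = rng.random(n) < p_wet
    X = rng.gamma(shape=(mu / sigma) ** 2, scale=sigma**2 / mu, size=n)
    Z = I * X

    # Analytic moments for independent I and X:
    #   E[Z]   = p * E[X]
    #   Var[Z] = p * Var[X] + p * (1 - p) * E[X]^2
    mean_th = p_wet * mu
    var_th = p_wet * sigma**2 + p_wet * (1 - p_wet) * mu**2

    print(mean_th, Z.mean())   # analytic vs Monte Carlo mean
    print(var_th, Z.var())     # analytic vs Monte Carlo variance
    ```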

  2. Continuous-variable quantum homomorphic signature

    NASA Astrophysics Data System (ADS)

    Li, Ke; Shang, Tao; Liu, Jian-wei

    2017-10-01

    Quantum cryptography is believed to be unconditionally secure because its security is ensured by physical laws rather than computational complexity. According to their spectral characteristics, quantum information can be classified into two categories, namely discrete variables and continuous variables. Continuous-variable quantum protocols have gained much attention for their ability to transmit more information at lower cost. To verify the identities of different data sources in a quantum network, we propose a continuous-variable quantum homomorphic signature scheme. It is based on continuous-variable entanglement swapping and provides additive and subtractive homomorphism. Security analysis shows the proposed scheme is secure against replay, forgery, and repudiation. Even under nonideal conditions, it supports effective verification within a certain verification threshold.

  3. Artificial Intelligence vs. Statistical Modeling and Optimization of Continuous Bead Milling Process for Bacterial Cell Lysis.

    PubMed

    Haque, Shafiul; Khan, Saif; Wahid, Mohd; Dar, Sajad A; Soni, Nipunjot; Mandal, Raju K; Singh, Vineeta; Tiwari, Dileep; Lohani, Mohtashim; Areeshi, Mohammed Y; Govender, Thavendran; Kruger, Hendrik G; Jawed, Arshad

    2016-01-01

    For a commercially viable recombinant intracellular protein production process, efficient cell lysis and protein release is a major bottleneck. The recovery of a recombinant protein, cholesterol oxidase (COD), was studied in a continuous bead milling process. A full factorial response surface methodology (RSM) design was employed and compared to artificial neural networks coupled with a genetic algorithm (ANN-GA). Significant process variables, cell slurry feed rate (A), bead load (B), cell load (C), and run time (D), were investigated and optimized for maximizing COD recovery. RSM predicted an optimum feed rate of 310.73 mL/h, bead loading of 79.9% (v/v), cell loading OD 600 nm of 74, and run time of 29.9 min, with a recovery of ~3.2 g/L. ANN-GA predicted a maximum COD recovery of ~3.5 g/L at an optimum feed rate (mL/h) of 258.08, bead loading (%, v/v) of 80%, cell loading (OD 600 nm) of 73.99, and run time of 32 min. An overall 3.7-fold increase in productivity is obtained when compared to a batch process. Optimization and comparison of statistical vs. artificial intelligence techniques in a continuous bead milling process is attempted for the first time in this study. We were able to successfully represent the complex non-linear multivariable dependence of enzyme recovery on bead milling parameters. Quadratic second-order response functions are not flexible enough to represent such complex non-linear dependence. An ANN, being a summation function of multiple layers, is capable of representing the complex non-linear dependence of variables, in this case enzyme recovery as a function of bead milling parameters. Since a GA can even optimize discontinuous functions, the present study is a clear example of using machine learning (ANN) in combination with evolutionary optimization (GA) to represent undefined biological functions, as is common for industrial processes involving biological moieties.
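    A GA suits this kind of problem because it needs only function evaluations, so plateaus and kinks that defeat a quadratic RSM fit pose no difficulty. Below is a generic, minimal GA maximizing a made-up two-variable "recovery" surface with a non-quadratic plateau; the response function and all GA settings are illustrative assumptions, not the study's model.

    ```python
    import random

    # Toy "recovery" response (variables scaled to [0, 1]) whose plateau in the
    # feed-rate direction makes it non-smooth and poorly fit by a quadratic RSM
    def recovery(feed, beads):
        return 2.5 + 1.0 * min(feed, 0.6) - 2.0 * (beads - 0.8) ** 2

    random.seed(1)

    def ga_maximize(f, pop=40, gens=60, mut=0.1):
        population = [(random.random(), random.random()) for _ in range(pop)]
        for _ in range(gens):
            # Elitist selection: keep the best half as parents
            population.sort(key=lambda xy: f(*xy), reverse=True)
            parents = population[: pop // 2]
            children = []
            while len(children) < pop - len(parents):
                a, b = random.sample(parents, 2)
                # Crossover (midpoint) plus Gaussian mutation, clamped to [0, 1]
                child = tuple(min(1.0, max(0.0, (x + y) / 2 + random.gauss(0, mut)))
                              for x, y in zip(a, b))
                children.append(child)
            population = parents + children
        return max(population, key=lambda xy: f(*xy))

    feed, beads = ga_maximize(recovery)
    print(feed, beads, recovery(feed, beads))
    ```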

  4. Artificial Intelligence vs. Statistical Modeling and Optimization of Continuous Bead Milling Process for Bacterial Cell Lysis

    PubMed Central

    Haque, Shafiul; Khan, Saif; Wahid, Mohd; Dar, Sajad A.; Soni, Nipunjot; Mandal, Raju K.; Singh, Vineeta; Tiwari, Dileep; Lohani, Mohtashim; Areeshi, Mohammed Y.; Govender, Thavendran; Kruger, Hendrik G.; Jawed, Arshad

    2016-01-01

    For a commercially viable recombinant intracellular protein production process, efficient cell lysis and protein release is a major bottleneck. The recovery of a recombinant protein, cholesterol oxidase (COD), was studied in a continuous bead milling process. A full factorial response surface methodology (RSM) design was employed and compared to artificial neural networks coupled with a genetic algorithm (ANN-GA). Significant process variables, cell slurry feed rate (A), bead load (B), cell load (C), and run time (D), were investigated and optimized for maximizing COD recovery. RSM predicted an optimum feed rate of 310.73 mL/h, bead loading of 79.9% (v/v), cell loading OD600 nm of 74, and run time of 29.9 min, with a recovery of ~3.2 g/L. ANN-GA predicted a maximum COD recovery of ~3.5 g/L at an optimum feed rate (mL/h) of 258.08, bead loading (%, v/v) of 80%, cell loading (OD600 nm) of 73.99, and run time of 32 min. An overall 3.7-fold increase in productivity is obtained when compared to a batch process. Optimization and comparison of statistical vs. artificial intelligence techniques in a continuous bead milling process is attempted for the first time in this study. We were able to successfully represent the complex non-linear multivariable dependence of enzyme recovery on bead milling parameters. Quadratic second-order response functions are not flexible enough to represent such complex non-linear dependence. An ANN, being a summation function of multiple layers, is capable of representing the complex non-linear dependence of variables, in this case enzyme recovery as a function of bead milling parameters. Since a GA can even optimize discontinuous functions, the present study is a clear example of using machine learning (ANN) in combination with evolutionary optimization (GA) to represent undefined biological functions, as is common for industrial processes involving biological moieties. PMID:27920762

  5. Using LDR as Sensing Element for an External Fuzzy Controller Applied in Photovoltaic Pumping Systems with Variable-Speed Drives

    PubMed Central

    Maranhão, Geraldo Neves De A.; Brito, Alaan Ubaiara; Leal, Anderson Marques; Fonseca, Jéssica Kelly Silva; Macêdo, Wilson Negrão

    2015-01-01

    In the present paper, a fuzzy controller applied to a Variable-Speed Drive (VSD) for use in Photovoltaic Pumping Systems (PVPS) is proposed. The fuzzy logic system (FLS) used is embedded in a microcontroller and corresponds to a proportional-derivative controller. A Light-Dependent Resistor (LDR) is used to measure, approximately, the irradiance incident on the PV array. Experimental tests are executed using an Arduino board. The experimental results show that the fuzzy controller is capable of operating the system continuously throughout the day and controlling the direct current (DC) voltage level in the VSD with good performance. PMID:26402688

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nottale, Laurent; Célérier, Marie-Noëlle

    One of the main results of scale relativity as regards the foundation of quantum mechanics is its explanation of the origin of the complex nature of the wave function. The scale relativity theory introduces an explicit dependence of physical quantities on scale variables, founding itself on the theorem according to which a continuous and non-differentiable space-time is fractal (i.e., scale-divergent). In the present paper, the nature of the scale variables and their relations to resolutions and differential elements are specified in the non-relativistic case (fractal space). We show that, owing to the scale-dependence which it induces, non-differentiability involves a fundamental two-valuedness of the mean derivatives. Since, in the scale relativity framework, the wave function is a manifestation of the velocity field of fractal space-time geodesics, the two-valuedness of velocities leads to writing them in terms of complex numbers, and therefore yields the complex nature of the wave function, from which the usual expression of the Schrödinger equation can be derived.

  7. Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods †

    PubMed Central

    Gonzalez-Navarro, Felix F.; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A.; Flores-Rios, Brenda L.; Ibarra-Esquer, Jorge E.

    2016-01-01

    Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding is still under research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB with dependent variables under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these dependent variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization. PMID:27792165
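    Simulated annealing, one of the optimizers the paper pairs with its learned models, can be sketched in a few lines: propose random moves and accept worse ones with a probability that shrinks as the temperature cools. The "signal" surface below is an invented stand-in for a fitted GOB response model, with a peak placed at 37 °C and pH 7 purely for illustration.

    ```python
    import math, random

    random.seed(7)

    # Invented amperometric response surface: signal vs. temperature (degC) and pH,
    # peaking near 37 degC and pH 7 (illustrative shape, not a fitted GOB model)
    def signal(temp, ph):
        return math.exp(-((temp - 37) / 10) ** 2 - ((ph - 7.0) / 1.5) ** 2)

    def anneal(f, x, bounds, steps=5000, t0=1.0):
        best = cur = x
        for k in range(steps):
            t = t0 * (1 - k / steps) + 1e-3     # linear cooling schedule
            # Gaussian move, clamped to the search bounds
            cand = tuple(min(hi, max(lo, xi + random.gauss(0, 0.5)))
                         for xi, (lo, hi) in zip(cur, bounds))
            # Always accept improvements; accept worse moves with prob exp(dE/t)
            if f(*cand) > f(*cur) or random.random() < math.exp((f(*cand) - f(*cur)) / t):
                cur = cand
            if f(*cur) > f(*best):
                best = cur
        return best

    temp, ph = anneal(signal, (20.0, 5.0), [(20, 60), (4, 9)])
    print(temp, ph, signal(temp, ph))
    ```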

  8. Variability of adjacency effects in sky reflectance measurements.

    PubMed

    Groetsch, Philipp M M; Gege, Peter; Simis, Stefan G H; Eleveld, Marieke A; Peters, Steef W M

    2017-09-01

    Sky reflectance R_sky(λ) is used to correct in situ reflectance measurements in the remote detection of water color. We analyzed the directional and spectral variability in R_sky(λ) due to adjacency effects against an atmospheric radiance model. The analysis is based on one year of semi-continuous R_sky(λ) observations that were recorded in two azimuth directions. Adjacency effects contributed to R_sky(λ) dependence on season and viewing angle, predominantly in the near-infrared (NIR). For our test area, adjacency effects spectrally resembled a generic vegetation spectrum. The adjacency effect was weakly dependent on the magnitude of Rayleigh- and aerosol-scattered radiance. The reflectance differed between viewing directions by 5.4±6.3% for adjacency effects and 21.0±19.8% for Rayleigh- and aerosol-scattered R_sky(λ) in the NIR. The conditions under which in situ water reflectance observations require dedicated correction for adjacency effects are discussed. We provide an open source implementation of our method to aid identification of such conditions.

  9. The health of U.S. agricultural worker families: A descriptive study of over 790,000 migratory and seasonal agricultural workers and dependents.

    PubMed

    Boggess, Bethany; Bogue, Hilda Ochoa

    2016-01-01

    Migratory and seasonal agricultural workers (MSAWs) are a historically under-served population that experiences poor access to health care. The aim of this study was to describe the demographic, socioeconomic, and health status of U.S. agricultural workers and their dependents who were patients of a Migrant Health Center in 2012. The authors used the Uniform Data System to examine demographic, socioeconomic, and health variables for 793,188 patients of 164 Migrant Health Centers during 2012. Means, proportions, and period prevalence were calculated for all variables. Results showed that 80% of MSAWs earned family incomes below 100% of the federal poverty level. Among the reported diagnoses, the most common were hypertension, diabetes mellitus, and mental health conditions. Fifty-three percent of all MSAWs and 71% of adult MSAWs were uninsured, indicating that Migrant Health Centers continue to play a vital role in providing access to primary health care for MSAWs and their families.

  10. Shiftwork in the Norwegian petroleum industry: overcoming difficulties with family and social life – a cross sectional study

    PubMed Central

    Ljoså, Cathrine Haugene; Lau, Bjørn

    2009-01-01

    Background Continuous shift schedules are required in the petroleum industry because of its dependency on uninterrupted production. Although shiftwork affects health, less is known about its effects on social and domestic life. Methods Consequently, we studied these relationships in a sample of 1697 (response rate 55.9%) petroleum workers who worked onshore and offshore for a Norwegian oil and gas company. We also examined the roles of coping strategies and locus of control for handling self-reported problems with social and domestic life. A questionnaire containing scales from the Standard Shiftwork Index and Shiftwork Locus of Control was answered electronically. Results In general, only a few participants reported that their shift schedule affected their social and domestic/family life, and several participants had enough time to spend by themselves and with their partner, close family, friends, and children. Despite this general positive trend, differences were found for shift type and individual factors such as locus of control and coping strategies. Internal locus of control was associated positively with all the dependent variables. However, engaging problem-focused coping strategies were associated only slightly with the dependent variables, while disengaging emotion-focused coping strategies were negatively associated with the dependent variables. Conclusion Since most participants reported few problems with social and domestic/family life, the availability of more leisure time may be a positive feature of shiftwork in the Norwegian petroleum industry. Locus of control and the use of coping strategies were important for shiftworkers' social and domestic/family life. PMID:19650903

  11. Evaluating shrub-associated spatial patterns of soil properties in a shrub-steppe ecosystem using multiple-variable geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halvorson, J.J.; Smith, J.L.; Bolton, H. Jr.

    1995-09-01

    Geostatistics are often calculated for a single variable at a time, even though many natural phenomena are functions of several variables. The objective of this work was to demonstrate a nonparametric approach for assessing the spatial characteristics of multiple-variable phenomena. Specifically, we analyzed the spatial characteristics of resource islands in the soil under big sagebrush (Artemisia tridentata Nutt.), a dominant shrub in the intermountain western USA. For our example, we defined resource islands as a function of six soil variables representing concentrations of soil resources, populations of microorganisms, and soil microbial physiological variables. By collectively evaluating the indicator transformations of these individual variables, we created a new data set, termed a multiple-variable indicator transform or MVIT. Alternate MVITs were obtained by varying the selection criteria. Each MVIT was analyzed with variography to characterize spatial continuity, and with indicator kriging to predict the combined probability of their occurrence at unsampled locations in the landscape. Simple graphical analysis and variography demonstrated spatial dependence for all individual soil variables. Maps derived from ordinary kriging of MVITs suggested that the combined probabilities for encountering zones of above-median resources were greatest near big sagebrush. 51 refs., 5 figs., 1 tab.
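    The indicator-transform idea can be sketched numerically: each variable is coded 1 where it exceeds a threshold (here its own median), and the multiple-variable indicator is the logical AND of the individual indicators. The synthetic "soil" variables below are illustrative assumptions, and the spatial kriging step is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical soil samples with three correlated resource variables
    n = 200
    carbon = rng.lognormal(0.0, 0.5, n)
    nitrogen = 0.5 * carbon + rng.lognormal(-1.0, 0.4, n)
    microbes = 2.0 * carbon + rng.normal(0.0, 0.5, n)

    # Indicator transform: 1 where a variable exceeds its own median, else 0
    def indicator(v):
        return (v > np.median(v)).astype(int)

    # Multiple-variable indicator transform (MVIT): 1 only where ALL variables
    # jointly exceed their medians -- one possible selection criterion
    mvit = indicator(carbon) & indicator(nitrogen) & indicator(microbes)

    # The mean of the MVIT estimates the joint probability of above-median
    # resources; kriging this indicator would map that probability spatially.
    print(mvit.mean())
    ```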

  12. The intermediate endpoint effect in logistic and probit regression

    PubMed Central

    MacKinnon, DP; Lockwood, CM; Brown, CH; Wang, W; Hoffman, JM

    2010-01-01

    Background An intermediate endpoint is hypothesized to be in the middle of the causal sequence relating an independent variable to a dependent variable. The intermediate variable is also called a surrogate or mediating variable and the corresponding effect is called the mediated, surrogate endpoint, or intermediate endpoint effect. Clinical studies are often designed to change an intermediate or surrogate endpoint and through this intermediate change influence the ultimate endpoint. In many intermediate endpoint clinical studies the dependent variable is binary, and logistic or probit regression is used. Purpose The purpose of this study is to describe a limitation of a widely used approach to assessing intermediate endpoint effects and to propose an alternative method, based on products of coefficients, that yields more accurate results. Methods The intermediate endpoint model for a binary outcome is described for a true binary outcome and for a dichotomization of a latent continuous outcome. Plots of true values and a simulation study are used to evaluate the different methods. Results Distorted estimates of the intermediate endpoint effect and incorrect conclusions can result from the application of widely used methods to assess the intermediate endpoint effect. The same problem occurs for the proportion of an effect explained by an intermediate endpoint, which has been suggested as a useful measure for identifying intermediate endpoints. A solution to this problem is given based on the relationship between latent variable modeling and logistic or probit regression. Limitations More complicated intermediate variable models are not addressed in the study, although the methods described in the article can be extended to these more complicated models. Conclusions Researchers are encouraged to use an intermediate endpoint method based on the product of regression coefficients. A common method based on the difference in coefficients can lead to distorted conclusions regarding the intermediate endpoint effect. PMID:17942466
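    The distinction can be checked numerically in the linear case, where the two estimators coincide exactly: with mediator model M on X (slope a) and outcome model Y on M and X (slopes b and c′), the total effect c from regressing Y on X alone satisfies c = c′ + ab. In logistic or probit regression this identity breaks, because the scale of the latent outcome changes between models, which is the distortion the abstract describes. The simulation below is a minimal linear-case sketch with invented coefficients.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 5000

    # Simulated causal chain: X -> M -> Y plus a direct X -> Y path
    x = rng.normal(size=n)
    m = 0.5 * x + rng.normal(size=n)             # a  = 0.5
    y = 0.7 * m + 0.3 * x + rng.normal(size=n)   # b  = 0.7, direct effect c' = 0.3

    def ols(cols, y):
        # OLS with intercept; returns slope coefficients only
        X = np.column_stack([np.ones(len(y))] + list(cols))
        return np.linalg.lstsq(X, y, rcond=None)[0][1:]

    (a,) = ols([x], m)             # X -> M
    b, c_prime = ols([m, x], y)    # M -> Y and direct X -> Y
    (c,) = ols([x], y)             # total effect of X on Y

    # In linear regression the decomposition is exact, so the product-of-
    # coefficients (a*b) and difference (c - c') estimates coincide.
    print(a * b, c - c_prime)
    ```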

  13. Mechanics of deformations in terms of scalar variables

    NASA Astrophysics Data System (ADS)

    Ryabov, Valeriy A.

    2017-05-01

    A theory of particle and continuum mechanics is developed that allows a treatment of pure deformation in terms of the set of variables "coordinate-momentum-force" instead of the standard treatment in terms of the tensor-valued variables "strain-stress." This approach is quite natural for a microscopic description of an atomic system, according to which only pointwise forces caused by the stress act on atoms, making a body deform. The new concept starts from an affine transformation of spatial to material coordinates in terms of the stretch tensor or its analogs. Thus, three principal stretches and three angles related to their orientation form a set of six scalar variables describing deformation. Instead of the volume-dependent potential used in the standard theory, which requires conditions of equilibrium for surface and body forces acting on a volume element, a potential dependent on scalar variables is introduced. A consistent introduction of the generalized force associated with this potential becomes possible if a deformed body is considered to be confined on the surface of a torus having six genuine dimensions. Strain, constitutive equations, and other fundamental laws of continuum and particle mechanics may be neatly rewritten in terms of scalar variables. In giving a new presentation of finite deformation, the new approach provides a full treatment of hyperelasticity, including the anisotropic case. The derived equations of motion generate a new kind of thermodynamical ensemble in terms of constant tension forces. In this ensemble, six internal deformation forces proportional to the components of the Irving-Kirkwood stress are controlled by applied external forces. In the thermodynamical limit, instead of pressure and volume as state variables, this ensemble employs a deformation force measured in kelvin units and a stretch ratio.

  14. Predicted stand volume for Eucalyptus plantations by spatial analysis

    NASA Astrophysics Data System (ADS)

    Latifah, Siti; Teodoro, RV; Myrna, GC; Nathaniel, CB; Leonardo, M. F.

    2018-03-01

    The main objective of the present study was to assess nonlinear models generated by integrating the stand volume growth rate to estimate the growth and yield of Eucalyptus. Primary data were collected for points of interest (POI) of permanent sample plots (PSPs) and inventory sample plots in the Aek Nauli sector, Simalungun regency, North Sumatera Province, Indonesia, from December 2008 to March 2009. The demand for forestry information has continued to grow over recent years; because many forest managers and decision makers face complex decisions, reliable information has become a necessity, and geospatial technology has been widely used in the assessment of natural resources, including plantation forests. The yield of Eucalyptus plantations is represented by merchantable volume as the dependent variable, while the factors affecting yield, namely stand variables and geographic variables, serve as independent variables. The majority of the study site has a stand volume class of 0-50 m3/ha, covering 16.59 ha or 65.85% of the total study site.

  15. Forced Climate Changes in West Antarctica and the Indo-Pacific by Northern Hemisphere Ice Sheet Topography

    NASA Astrophysics Data System (ADS)

    Jones, T. R.; Roberts, W. H. G.; Steig, E. J.; Cuffey, K. M.; Markle, B. R.; White, J. W. C.

    2017-12-01

    The behavior of the Indo-Pacific climate system across the last deglaciation is widely debated. Resolving these debates requires long-term and continuous climate proxy records. Here, we use an ultra-high-resolution, continuous water isotope record from an ice core in the Pacific sector of West Antarctica. In conjunction with the HadCM3 coupled ocean-atmosphere GCM, we demonstrate that the climates of both West Antarctica and the Indo-Pacific were substantially altered during the last deglaciation by the same forcing mechanism. Critically, these changes are not dependent on ENSO strength, but rather on the location of deep tropical convection, which shifts at 16 ka in response to climate perturbations induced by the Laurentide Ice Sheet. The changed rainfall patterns in the tropics explain the deglacial shift from expanded grasslands to rainforest-dominated ecosystems in Indonesia. High-frequency climate variability in the Southern Hemisphere is also changed, through a tropical Pacific teleconnection link dependent on the propagation of Rossby waves.

  16. Recovery of rhythmic activity in a central pattern generator: analysis of the role of neuromodulator and activity-dependent mechanisms.

    PubMed

    Zhang, Yili; Golowasch, Jorge

    2011-11-01

    The pyloric network of decapod crustaceans can undergo dramatic rhythmic activity changes. Under normal conditions the network generates low frequency rhythmic activity that depends obligatorily on the presence of neuromodulatory input from the central nervous system. When this input is removed (decentralization) the rhythmic activity ceases. In the continued absence of this input, periodic activity resumes after a few hours in the form of episodic bursting across the entire network that later turns into stable rhythmic activity that is nearly indistinguishable from control (recovery). It has been proposed that an activity-dependent modification of ionic conductance levels in the pyloric pacemaker neuron drives the process of recovery of activity. Previous modeling attempts have captured some aspects of the temporal changes observed experimentally, but key features could not be reproduced. Here we examined a model in which slow activity-dependent regulation of ionic conductances and slower neuromodulator-dependent regulation of intracellular Ca(2+) concentration reproduce all the temporal features of this recovery. Key aspects of these two regulatory mechanisms are their independence and their different kinetics. We also examined the role of variability (noise) in the activity-dependent regulation pathway and observe that it can help to reduce unrealistic constraints that were otherwise required on the neuromodulator-dependent pathway. We conclude that small variations in intracellular Ca(2+) concentration, a Ca(2+) uptake regulation mechanism that is directly targeted by neuromodulator-activated signaling pathways, and variability in the Ca(2+) concentration sensing signaling pathway can account for the observed changes in neuronal activity. Our conclusions are all amenable to experimental analysis.

  17. Universal analytical scattering form factor for shell-, core-shell, or homogeneous particles with continuously variable density profile shape.

    PubMed

    Foster, Tobias

    2011-09-01

    A novel analytical and continuous density distribution function with a widely variable shape is reported and used to derive an analytical scattering form factor that allows us to universally describe the scattering from particles with the radial density profile of homogeneous spheres, shells, or core-shell particles. Composed by the sum of two Fermi-Dirac distribution functions, the shape of the density profile can be altered continuously from step-like via Gaussian-like or parabolic to asymptotically hyperbolic by varying a single "shape parameter", d. Using this density profile, the scattering form factor can be calculated numerically. An analytical form factor can be derived using an approximate expression for the original Fermi-Dirac distribution function. This approximation is accurate for sufficiently small rescaled shape parameters, d/R (R being the particle radius), up to values of d/R ≈ 0.1, and thus captures step-like, Gaussian-like, and parabolic as well as asymptotically hyperbolic profile shapes. It is expected that this form factor is particularly useful in a model-dependent analysis of small-angle scattering data since the applied continuous and analytical function for the particle density profile can be compared directly with the density profile extracted from the data by model-free approaches like the generalized inverse Fourier transform method. © 2011 American Chemical Society
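    A single Fermi-Dirac term already shows how the shape parameter d works: ρ(r) = 1/(1 + exp((r − R)/d)) is step-like for d → 0 and increasingly diffuse as d grows, always passing through ρ(R) = 1/2. This is a minimal one-term sketch; the paper's profile is the sum of two such functions, and its analytical form factor is stated to be accurate for d/R up to about 0.1.

    ```python
    import math

    # One Fermi-Dirac term of the density profile: step-like for small d,
    # progressively smoother (Gaussian-like, then diffuse) as d increases
    def profile(r, R=1.0, d=0.05):
        return 1.0 / (1.0 + math.exp((r - R) / d))

    for d in (0.01, 0.05, 0.2):
        samples = [round(profile(r, d=d), 3) for r in (0.0, 0.9, 1.0, 1.1, 2.0)]
        print(f"d={d}: {samples}")
    ```

    Whatever the value of d, the profile equals exactly 1/2 at r = R, which is how the particle radius keeps its meaning as the shape varies.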

  18. Uncertainty of streamwater solute fluxes in five contrasting headwater catchments including model uncertainty and natural variability (Invited)

    NASA Astrophysics Data System (ADS)

    Aulenbach, B. T.; Burns, D. A.; Shanley, J. B.; Yanai, R. D.; Bae, K.; Wild, A.; Yang, Y.; Dong, Y.

    2013-12-01

    There are many sources of uncertainty in estimates of streamwater solute flux. Flux is the product of discharge and concentration (summed over time), each of which has measurement uncertainty of its own. Discharge can be measured almost continuously, but concentrations are usually determined from discrete samples, which increases uncertainty, depending on sampling frequency and how concentrations are assigned for the periods between samples. Gaps between samples can be estimated by linear interpolation or by models that use the relations between concentration and continuously measured or known variables such as discharge, season, temperature, and time. For this project, developed in cooperation with QUEST (Quantifying Uncertainty in Ecosystem Studies), we evaluated uncertainty for three flux estimation methods and three different sampling frequencies (monthly, weekly, and weekly plus event). The constituents investigated were dissolved NO3, Si, SO4, and dissolved organic carbon (DOC), solutes whose concentration dynamics exhibit strongly contrasting behavior. The evaluation was completed for a 10-year period at five small, forested watersheds in Georgia, New Hampshire, New York, Puerto Rico, and Vermont. Concentration regression models were developed for each solute at each of the three sampling frequencies for all five watersheds. Fluxes were then calculated using (1) a linear interpolation approach, (2) a regression-model method, and (3) the composite method - which combines the regression-model method for estimating concentrations and the linear interpolation method for correcting model residuals to the observed sample concentrations. We considered the best estimates of flux to be derived using the composite method at the highest sampling frequencies. 
    We also evaluated the influence of sampling frequency and estimation method on flux estimate uncertainty; flux uncertainty depended on the variability characteristics of each solute and varied across reporting periods (e.g., the 10-year study period vs. annual vs. monthly). The usefulness of the two regression-model-based flux estimation approaches depended on how much of the variance in concentrations the regression models could explain. Our results can guide the development of optimal sampling strategies by weighing sampling frequency against improvements in the uncertainty of stream flux estimates for solutes with particular variability characteristics. The appropriate flux estimation method depends on a combination of sampling frequency and the strength of the concentration regression models. Sites: Biscuit Brook (Frost Valley, NY), Hubbard Brook Experimental Forest and LTER (West Thornton, NH), Luquillo Experimental Forest and LTER (Luquillo, Puerto Rico), Panola Mountain (Stockbridge, GA), Sleepers River Research Watershed (Danville, VT)
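The three estimation approaches named in this record can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function name and array layout are assumptions, and a real application would use time-weighted integration over irregular time steps rather than a plain sum.

```python
import numpy as np

def flux_estimates(t_samp, c_samp, t_all, q_all, c_model):
    """Illustrative sketch of three solute-flux estimators (not the
    authors' code). Flux = discharge x concentration, summed over the
    continuous time steps t_all."""
    # (1) linear interpolation between the discrete samples
    c_interp = np.interp(t_all, t_samp, c_samp)
    # (2) regression-model concentrations used directly: c_model
    # (3) composite: model prediction plus interpolated model residuals,
    #     so the estimate passes through the observed sample values
    resid = c_samp - np.interp(t_samp, t_all, c_model)
    c_comp = c_model + np.interp(t_all, t_samp, resid)
    return (np.sum(q_all * c_interp),
            np.sum(q_all * c_model),
            np.sum(q_all * c_comp))
```

When the regression model is biased but the samples are accurate, the composite estimate inherits the model's temporal shape while matching the samples exactly, which is the behavior the abstract describes.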

  19. Statistical Methods for Quality Control of Steel Coils Manufacturing Process using Generalized Linear Models

    NASA Astrophysics Data System (ADS)

    García-Díaz, J. Carlos

    2009-11-01

    Fault detection and diagnosis is an important problem in process engineering, as process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is particularly important in continuous hot-dip galvanizing, and the increasingly stringent quality requirements of the automotive industry have also demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationships among the observed variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures, and the bath level. The entire data set, consisting of 48 galvanized steel coils, was divided into two sets: a training set of 25 conforming coils and a second set of 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical; in most applications, the dependent variable is binary. The results show that logistic generalized linear models provide good estimates of coil quality and can be useful for quality control in the manufacturing process.
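A minimal sketch of the core technique named here, logistic regression fit by maximum likelihood, on synthetic stand-in data. The predictor, coefficients, and sample size are invented for illustration; the actual study used strip velocity, bath temperatures, and bath level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the coil data: one process variable and a
# binary conforming/nonconforming label (names and coefficients invented).
n = 200
x = rng.normal(size=n)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x)))

X = np.column_stack([np.ones(n), x])       # intercept + predictor
w = np.zeros(2)
for _ in range(500):                       # gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / n

accuracy = np.mean(((X @ w) > 0) == y)     # classify at p = 0.5
```

In practice one would use a fitted GLM routine (e.g. a statistics package's logistic regression) rather than hand-rolled gradient ascent; the loop is shown only to make the likelihood maximization explicit.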

  20. Mutilating Data and Discarding Variance: The Dangers of Dichotomizing Continuous Variables.

    ERIC Educational Resources Information Center

    Kroff, Michael W.

    This paper reviews issues involved in converting continuous variables to nominal variables for use in OVA techniques. The literature dealing with the dangers of dichotomizing continuous variables is reviewed. First, the assumptions invoked by OVA analyses are examined, in addition to concerns regarding the loss of variance and a reduction in…

  1. Use of generalised additive models to categorise continuous variables in clinical prediction

    PubMed Central

    2013-01-01

    Background In medical practice many, essentially continuous, clinical parameters tend to be categorised by physicians for ease of decision-making. Indeed, categorisation is a common practice both in medical research and in the development of clinical prediction rules, particularly where the ensuing models are to be applied in daily clinical practice to support clinicians in the decision-making process. Since the number of categories into which a continuous predictor must be categorised depends partly on the relationship between the predictor and the outcome, the need for more than two categories must be borne in mind. Methods We propose a categorisation methodology for clinical-prediction models, using Generalised Additive Models (GAMs) with P-spline smoothers to determine the relationship between the continuous predictor and the outcome. The proposed method consists of creating at least one average-risk category along with high- and low-risk categories based on the GAM smooth function. We applied this methodology to a prospective cohort of patients with exacerbated chronic obstructive pulmonary disease. The predictors selected were respiratory rate and partial pressure of carbon dioxide in the blood (PCO2), and the response variable was poor evolution. An additive logistic regression model was used to show the relationship between the covariates and the dichotomous response variable. The proposed categorisation was compared with the continuous predictor, taken as the best option, using AIC and AUC as evaluation measures. The sample was divided into derivation (60%) and validation (40%) samples. The first was used to obtain the cut points while the second was used to validate the proposed methodology. Results The three-category proposal for the respiratory rate was ≤ 20;(20,24];> 24, for which the following values were obtained: AIC=314.5 and AUC=0.638.
The respective values for the continuous predictor were AIC=317.1 and AUC=0.634, with no statistically significant differences being found between the two AUCs (p =0.079). The four-category proposal for PCO2 was ≤ 43;(43,52];(52,65];> 65, for which the following values were obtained: AIC=258.1 and AUC=0.81. No statistically significant differences were found between the AUC of the four-category option and that of the continuous predictor, which yielded an AIC of 250.3 and an AUC of 0.825 (p =0.115). Conclusions Our proposed method provides clinicians with the number and location of cut points for categorising variables, and performs as successfully as the original continuous predictor when it comes to developing clinical prediction rules. PMID:23802742
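Once cut points have been obtained, applying them is a one-line binning step. The sketch below uses the paper's three-category proposal for respiratory rate (≤ 20, (20,24], > 24); the GAM-based search that produces the cut points themselves is not reproduced here.

```python
import numpy as np

# The paper's three-category proposal for respiratory rate:
# <=20, (20,24], >24. With right-closed bins, np.digitize
# reproduces the categorisation step.
cuts = [20, 24]
rates = np.array([14, 20, 21, 24, 25, 40])
category = np.digitize(rates, cuts, right=True)   # 0 = low, 1 = mid, 2 = high
```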

  2. Filling the gaps in meteorological continuous data measured at FLUXNET sites with ERA-Interim reanalysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vuichard, N.; Papale, D.

    In this study, exchanges of carbon, water and energy between the land surface and the atmosphere are monitored by eddy covariance technique at the ecosystem level. Currently, the FLUXNET database contains more than 500 registered sites, and up to 250 of them share data (free fair-use data set). Many modelling groups use the FLUXNET data set for evaluating ecosystem models' performance, but this requires uninterrupted time series for the meteorological variables used as input. Because original in situ data often contain gaps, from very short (few hours) up to relatively long (some months) ones, we develop a new and robust method for filling the gaps in meteorological data measured at site level. Our approach has the benefit of making use of continuous data available globally (ERA-Interim) at a high temporal resolution, spanning from 1989 to today. These data are, however, not measured at site level, and for this reason a method to downscale and correct the ERA-Interim data is needed. We apply this method to the level 4 data (L4) from the La Thuile collection, freely available after registration under a fair-use policy. The performance of the developed method varies across sites and is also a function of the meteorological variable. On average over all sites, applying the bias correction method to the ERA-Interim data reduced the mismatch with the in situ data by 10 to 36 %, depending on the meteorological variable considered. In comparison to the internal variability of the in situ data, the root mean square error (RMSE) between the in situ data and the unbiased ERA-I (ERA-Interim) data remains relatively large (on average over all sites, from 27 to 76 % of the standard deviation of in situ data, depending on the meteorological variable considered). The performance of the method remains poor for the wind speed field, in particular regarding its capacity to conserve a standard deviation similar to the one measured at FLUXNET stations.

  3. Filling the gaps in meteorological continuous data measured at FLUXNET sites with ERA-Interim reanalysis

    DOE PAGES

    Vuichard, N.; Papale, D.

    2015-07-13

    In this study, exchanges of carbon, water and energy between the land surface and the atmosphere are monitored by eddy covariance technique at the ecosystem level. Currently, the FLUXNET database contains more than 500 registered sites, and up to 250 of them share data (free fair-use data set). Many modelling groups use the FLUXNET data set for evaluating ecosystem models' performance, but this requires uninterrupted time series for the meteorological variables used as input. Because original in situ data often contain gaps, from very short (few hours) up to relatively long (some months) ones, we develop a new and robust method for filling the gaps in meteorological data measured at site level. Our approach has the benefit of making use of continuous data available globally (ERA-Interim) at a high temporal resolution, spanning from 1989 to today. These data are, however, not measured at site level, and for this reason a method to downscale and correct the ERA-Interim data is needed. We apply this method to the level 4 data (L4) from the La Thuile collection, freely available after registration under a fair-use policy. The performance of the developed method varies across sites and is also a function of the meteorological variable. On average over all sites, applying the bias correction method to the ERA-Interim data reduced the mismatch with the in situ data by 10 to 36 %, depending on the meteorological variable considered. In comparison to the internal variability of the in situ data, the root mean square error (RMSE) between the in situ data and the unbiased ERA-I (ERA-Interim) data remains relatively large (on average over all sites, from 27 to 76 % of the standard deviation of in situ data, depending on the meteorological variable considered). The performance of the method remains poor for the wind speed field, in particular regarding its capacity to conserve a standard deviation similar to the one measured at FLUXNET stations.
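A minimal sketch of the kind of bias correction described in these two records: rescale the reanalysis series to match the mean and standard deviation of the in situ data over their overlap, then use the corrected series to fill the gaps. This is an assumption-laden illustration of one simple debiasing scheme, not the authors' downscaling method.

```python
import numpy as np

def fill_gaps(site, reanalysis):
    """Sketch (not the authors' method): rescale a reanalysis series to
    match the mean and standard deviation of overlapping in situ data,
    then use it to fill the gaps (NaNs) in the in situ series."""
    ok = ~np.isnan(site)
    scale = np.std(site[ok]) / np.std(reanalysis[ok])
    corrected = (reanalysis - np.mean(reanalysis[ok])) * scale + np.mean(site[ok])
    return np.where(ok, site, corrected)
```

Matching only the first two moments is exactly why a method like this can struggle with wind speed, whose distribution is skewed and whose variability is poorly captured at coarse reanalysis resolution.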

  4. A short note on the maximal point-biserial correlation under non-normality.

    PubMed

    Cheng, Ying; Liu, Haiyan

    2016-11-01

    The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Therefore researchers should exercise caution when they interpret their sample point-biserial correlation coefficients based on popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is always further restricted as p deviates from .5. © 2016 The British Psychological Society.
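The statistic under discussion is straightforward to compute; a small sketch makes the definition concrete. The population-variance form below is standard and can be checked against the ordinary Pearson correlation of the 0/1 variable with the continuous one.

```python
import numpy as np

def point_biserial(binary, continuous):
    """Point-biserial correlation, i.e. Pearson's r between a 0/1
    variable and a continuous one (population-variance form):
    r = (M1 - M0) * sqrt(p * (1 - p)) / s_x."""
    b = np.asarray(binary, float)
    x = np.asarray(continuous, float)
    p = b.mean()
    return (x[b == 1].mean() - x[b == 0].mean()) * np.sqrt(p * (1 - p)) / x.std()
```

The paper's point is about the *maximum* of this quantity when the latent continuous variable is non-normal; the formula itself is unchanged, only the attainable range differs by distribution.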

  5. Introduction to the use of regression models in epidemiology.

    PubMed

    Bender, Ralf

    2009-01-01

    Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models, the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs, so they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed depending on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrative examples from cancer research.

  6. Randomized trial of intermittent or continuous amnioinfusion for variable decelerations.

    PubMed

    Rinehart, B K; Terrone, D A; Barrow, J H; Isler, C M; Barrilleaux, P S; Roberts, W E

    2000-10-01

    To determine whether continuous or intermittent bolus amnioinfusion is more effective in relieving variable decelerations. Patients with repetitive variable decelerations were randomized to an intermittent bolus or continuous amnioinfusion. The intermittent bolus infusion group received boluses of 500 mL of normal saline, each over 30 minutes, with boluses repeated if variable decelerations recurred. The continuous infusion group received a bolus infusion of 500 mL of normal saline over 30 minutes and then 3 mL per minute until delivery occurred. The ability of the amnioinfusion to abolish variable decelerations was analyzed, as were maternal demographic and pregnancy outcome variables. Power analysis indicated that 64 patients would be required. Thirty-five patients were randomized to intermittent infusion and 30 to continuous infusion. There were no differences between groups in terms of maternal demographics, gestational age, delivery mode, neonatal outcome, median time to resolution of variable decelerations, or the number of times variable decelerations recurred. The median volume infused in the intermittent infusion group (500 mL) was significantly less than that in the continuous infusion group (905 mL, P =.003). Intermittent bolus amnioinfusion is as effective as continuous infusion in relieving variable decelerations in labor. Further investigation is necessary to determine whether either of these techniques is associated with increased occurrence of rare complications such as cord prolapse or uterine rupture.

  7. The Relation of Finite Element and Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1976-01-01

    Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of the independent variables, while finite element methods emphasize the discretization of the dependent variables (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.
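The "local functional approximation" contrast can be made concrete with the simplest finite-difference example: a second derivative at each interior node built from neighbouring values only. This is a generic illustration (test function and grid are invented, not from the paper); on a uniform 1D mesh, a linear finite element assembly of the same operator yields the identical tridiagonal stencil, which is one simple setting in which the two methods coincide.

```python
import numpy as np

# Central-difference stencil: u''(x_i) ~ (u[i-1] - 2 u[i] + u[i+1]) / h^2,
# a purely local approximation. For u = sin(x), exact u'' = -sin(x),
# so the stencil error should be O(h^2).
n = 101
x = np.linspace(0.0, np.pi, n)
h = x[1] - x[0]
u = np.sin(x)
u_xx = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
err = np.max(np.abs(u_xx + np.sin(x[1:-1])))   # compare to exact -sin(x)
```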

  8. Method for curing polymers using variable-frequency microwave heating

    DOEpatents

    Lauf, R.J.; Bible, D.W.; Paulauskas, F.L.

    1998-02-24

    A method for curing polymers incorporating a variable frequency microwave furnace system designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity is disclosed. By varying the frequency of the microwave signal, non-uniformities within the cavity are minimized, thereby achieving a more uniform cure throughout the workpiece. A directional coupler is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter is provided for measuring the power delivered to the microwave furnace. A second power meter detects the magnitude of reflected power. The furnace cavity may be adapted to be used to cure materials defining a continuous sheet or which require compressive forces during curing. 15 figs.

  9. Nonparametric methods in actigraphy: An update

    PubMed Central

    Gonçalves, Bruno S.B.; Cavalcanti, Paula R.A.; Tavares, Gracilene R.; Campos, Tania F.; Araujo, John F.

    2014-01-01

    Circadian rhythmicity in humans has been well studied using actigraphy, a method of measuring gross motor movement. As actigraphic technology continues to evolve, it is important for data analysis to keep pace with new variables and features. Our objective is to study the behavior of two variables, interdaily stability (IS) and intradaily variability (IV), used to describe the rest-activity rhythm. Simulated data and actigraphy data of humans, rats, and marmosets were used in this study. We modified the method of calculation for IV and IS by varying the time intervals of analysis, and for each variable we calculated the average value across intervals (IVm and ISm). Simulated data showed that (1) synchronization analysis depends on sample size, and (2) fragmentation is independent of the amplitude of the generated noise. We were able to obtain a significant difference in the fragmentation patterns of stroke patients using the IVm variable, whereas the variable IV60 did not identify it. Rhythmic synchronization of activity and rest was significantly higher in young adults than in adults with Parkinson's disease when using the ISm variable; however, this difference was not seen using IS60. We propose an updated format to calculate rhythmic fragmentation, including two additional optional variables. These alternative methods of nonparametric analysis aim to more precisely detect sleep-wake cycle fragmentation and synchronization. PMID:26483921
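The two variables discussed here have standard nonparametric definitions, sketched below for a single bin width. The paper's ISm/IVm quantities average these over several bin widths; that averaging, and the input data, are not reproduced here.

```python
import numpy as np

def is_iv(activity, bins_per_day=24):
    """Interdaily stability (IS) and intradaily variability (IV) using
    the standard nonparametric formulas. IS compares the average daily
    profile to the overall variance; IV is based on squared successive
    differences (high for fragmented rest-activity rhythms)."""
    x = np.asarray(activity, float)
    n = x.size
    dev = np.sum((x - x.mean()) ** 2)
    profile = x.reshape(-1, bins_per_day).mean(axis=0)   # average day
    IS = n * np.sum((profile - x.mean()) ** 2) / (bins_per_day * dev)
    IV = n * np.sum(np.diff(x) ** 2) / ((n - 1) * dev)
    return IS, IV
```

A perfectly repeating daily pattern gives IS = 1 (maximal synchronization), while a series alternating every bin gives IV = 4 (maximal fragmentation).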

  10. Continuously-Variable Positive-Mesh Power Transmission

    NASA Technical Reports Server (NTRS)

    Johnson, J. L.

    1982-01-01

    Proposed transmission with continuously-variable speed ratio couples two mechanical trigonometric-function generators. Transmission is expected to handle higher loads than conventional variable-pulley drives; unlike a variable pulley, it maintains positive traction through the entire drive train, with no reliance on friction to transmit power. Able to vary speed continuously through zero and into reverse. Possible applications in instrumentation where drive-train slippage cannot be tolerated.

  11. Simulation of an enzyme-based glucose sensor

    NASA Astrophysics Data System (ADS)

    Sha, Xianzheng; Jablecki, Michael; Gough, David A.

    2001-09-01

    An important biosensor application is the continuous monitoring of blood or tissue fluid glucose concentration in people with diabetes. Our research focuses on the development of a glucose sensor based on potentiostatic oxygen electrodes and immobilized glucose oxidase for long-term application as an implant in tissues. As the sensor signal depends on many design variables, a trial-and-error approach to sensor optimization can be time-consuming. Here, the properties of an implantable glucose sensor are optimized by a systematic computational simulation approach.

  12. Monte Carlo Study of Cosmic-Ray Propagation in the Galaxy and Diffuse Gamma-Ray Production

    NASA Astrophysics Data System (ADS)

    Huang, C.-Y.; Pohl, M.

    This talk presents preliminary results for time-dependent cosmic-ray propagation in the Galaxy from a fully 3-dimensional Monte Carlo simulation. The distribution of cosmic rays (both protons and helium nuclei) in the Galaxy is studied on various spatial scales for both constant and variable cosmic-ray sources. The continuous diffuse gamma-ray emission produced by cosmic rays during propagation is evaluated. The results will be compared with calculations made with other propagation models.

  13. Numerical solution of problems concerning the thermal convection of a variable-viscosity liquid

    NASA Astrophysics Data System (ADS)

    Zherebiatev, I. F.; Lukianov, A. T.; Podkopaev, Iu. L.

    A stabilizing-correction scheme is constructed for integrating the fourth-order equation describing the dynamics of a viscous incompressible liquid. As an example, a solution is obtained to the problem of the solidification of a liquid in a rectangular region with allowance for convective energy transfer in the liquid phase as well as temperature-dependent changes of viscosity. It is noted that the proposed method can be used to study steady-state problems of thermal convection in ingots obtained through continuous casting.

  14. Identifying and Analyzing Uncertainty Structures in the TRMM Microwave Imager Precipitation Product over Tropical Ocean Basins

    NASA Technical Reports Server (NTRS)

    Liu, Jianbo; Kummerow, Christian D.; Elsaesser, Gregory S.

    2016-01-01

    Despite continuous improvements in microwave sensors and retrieval algorithms, our understanding of precipitation uncertainty is quite limited, due primarily to inconsistent findings in studies that compare satellite estimates to in situ observations over different parts of the world. This study seeks to characterize the temporal and spatial properties of uncertainty in the Tropical Rainfall Measuring Mission Microwave Imager surface rainfall product over tropical ocean basins. Two uncertainty analysis frameworks are introduced to qualitatively evaluate the properties of uncertainty under a hierarchy of spatiotemporal data resolutions. The first framework (i.e. 'climate method') demonstrates that, apart from random errors and regionally dependent biases, a large component of the overall precipitation uncertainty is manifested in cyclical patterns that are closely related to large-scale atmospheric modes of variability. By estimating the magnitudes of major uncertainty sources independently, the climate method is able to explain 45-88% of the monthly uncertainty variability. The percentage is largely resolution dependent (with the lowest percentage explained associated with a 1° × 1° spatial/1-month temporal resolution, and the highest associated with a 3° × 3° spatial/3-month temporal resolution). The second framework (i.e. 'weather method') explains regional mean precipitation uncertainty as a summation of uncertainties associated with individual precipitation systems. By further assuming that self-similar recurring precipitation systems yield qualitatively comparable precipitation uncertainties, the weather method can consistently resolve about 50% of the daily uncertainty variability, with only limited dependence on the regions of interest.

  15. Continuous-variable controlled-Z gate using an atomic ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Mingfeng; Jiang Nianquan; Jin Qingli

    2011-06-15

    The continuous-variable controlled-Z gate is a canonical two-mode gate for universal continuous-variable quantum computation and is considered one of the most fundamental continuous-variable quantum gates. Here we present a scheme for realizing a continuous-variable controlled-Z gate between two optical beams using an atomic ensemble. The gate is performed by simply sending the two beams, propagating in two orthogonal directions, twice through a spin-squeezed atomic medium. Its fidelity can approach one if the input atomic state is infinitely squeezed. Considering the noise effects due to atomic decoherence and light losses, we show that the fidelities of the scheme remain quite high within presently available techniques.

  16. A meta-analysis of the effects of texting on driving.

    PubMed

    Caird, Jeff K; Johnston, Kate A; Willness, Chelsea R; Asbridge, Mark; Steel, Piers

    2014-10-01

    Text messaging while driving is considered dangerous and known to produce injuries and fatalities. However, the effects of text messaging on driving performance have not been synthesized or summarily estimated. All available experimental studies that measured the effects of text messaging on driving were identified through database searches using variants of "driving" and "texting" without restriction on year of publication through March 2014. Of the 1476 abstracts reviewed, 82 met general inclusion criteria. Of these, 28 studies were found to sufficiently compare reading or typing text messages while driving with a control or baseline condition. Independent variables (text-messaging tasks) were coded as typing, reading, or a combination of both. Dependent variables included eye movements, stimulus detection, reaction time, collisions, lane positioning, speed and headway. Statistics were extracted from studies to compute effect sizes (rc). A total sample of 977 participants from 28 experimental studies yielded 234 effect size estimates of the relationships among independent and dependent variables. Typing and reading text messages while driving adversely affected eye movements, stimulus detection, reaction time, collisions, lane positioning, speed and headway. Typing text messages alone produced similar decrements as typing and reading, whereas reading alone had smaller decrements over fewer dependent variables. Typing and reading text messages affects drivers' capability to adequately direct attention to the roadway, respond to important traffic events, control a vehicle within a lane and maintain speed and headway. This meta-analysis provides convergent evidence that texting compromises the safety of the driver, passengers and other road users. 
    Combined efforts, including legislation, enforcement, blocking technologies, parent modeling, social media, social norms and education, will be required to prevent continued deaths and injuries from texting and driving. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
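Pooling effect sizes across studies, as this meta-analysis does for its 234 estimates, is commonly done with a Fisher-z average weighted by sample size. The sketch below shows the fixed-effect version on invented numbers; these r and n values are illustrative stand-ins, not the meta-analysis data, and the paper's corrected effect sizes (rc) involve additional adjustments not shown here.

```python
import numpy as np

# Fixed-effect pooling of correlation effect sizes via Fisher's z,
# weighting each study by n - 3 (the inverse of the z variance).
# The r and n values below are invented for illustration.
r = np.array([0.30, 0.45, 0.25])
n = np.array([40, 60, 30])
z = np.arctanh(r)                              # Fisher z-transform
z_bar = np.sum((n - 3) * z) / np.sum(n - 3)    # inverse-variance weighted mean
r_pooled = np.tanh(z_bar)                      # back-transform to r
```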

  17. A statistical model for interpreting computerized dynamic posturography data

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100 as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score, zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
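The mixed discrete-continuous structure described here is easy to simulate, which clarifies why standard ANOVA is inappropriate. The simulation below is an invented illustration, not the authors' model specification: all distributions and parameters are assumptions chosen only to reproduce the qualitative feature (a continuous latent score censored to an exact zero on falls).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative simulation: a latent equilibrium score always exists,
# but a fall, whose probability rises as the latent score drops,
# censors the observation to zero. The observed ES therefore has a
# point mass at zero plus a continuous part.
n = 10000
latent = np.clip(rng.normal(70.0, 15.0, n), 1.0, 100.0)
p_fall = 1.0 / (1.0 + np.exp((latent - 30.0) / 8.0))   # likelier at low ES
observed = np.where(rng.random(n) < p_fall, 0.0, latent)
share_zero = np.mean(observed == 0.0)
```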

  18. Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre

    2014-07-01

    We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
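The general idea of absorbing a linear equality constraint into an unconstrained quadratic objective can be shown on a toy binary problem. This is the generic penalty method, not the paper's constructive QCMDO-to-QUBO mapping (which additionally eliminates the continuous variables); matrix values and the penalty weight are invented.

```python
import numpy as np
from itertools import product

# Fold the linear equality constraint A x = b into a QUBO penalty term
# lam * ||A x - b||^2, then solve the resulting 3-bit problem by brute force.
Q = np.array([[1.0, -3.0, 0.0],
              [0.0,  1.0, 0.0],
              [0.0,  0.0, 1.0]])
A = np.array([[1.0, 1.0, 1.0]])     # constraint: exactly two bits set
b = np.array([2.0])
lam = 10.0                          # penalty weight exceeding the objective spread

def qubo_value(bits):
    x = np.array(bits, float)
    r = A @ x - b
    return x @ Q @ x + lam * (r @ r)

best = min(product([0, 1], repeat=3), key=qubo_value)
```

With the penalty weight large enough, every infeasible assignment costs more than any feasible one, so the unconstrained minimizer is the constrained optimum; on hardware, the price is the need for higher coupling precision as lam grows.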

  19. Assessing the utility of frequency dependent nudging for reducing biases in biogeochemical models

    NASA Astrophysics Data System (ADS)

    Lagman, Karl B.; Fennel, Katja; Thompson, Keith R.; Bianucci, Laura

    2014-09-01

    Bias errors, resulting from inaccurate boundary and forcing conditions, incorrect model parameterization, etc. are a common problem in environmental models including biogeochemical ocean models. While it is important to correct bias errors wherever possible, it is unlikely that any environmental model will ever be entirely free of such errors. Hence, methods for bias reduction are necessary. A widely used technique for online bias reduction is nudging, where simulated fields are continuously forced toward observations or a climatology. Nudging is robust and easy to implement, but suppresses high-frequency variability and introduces artificial phase shifts. As a solution to this problem Thompson et al. (2006) introduced frequency dependent nudging where nudging occurs only in prescribed frequency bands, typically centered on the mean and the annual cycle. They showed this method to be effective for eddy resolving ocean circulation models. Here we add a stability term to the previous form of frequency dependent nudging which makes the method more robust for non-linear biological models. Then we assess the utility of frequency dependent nudging for biological models by first applying the method to a simple predator-prey model and then to a 1D ocean biogeochemical model. In both cases we only nudge in two frequency bands centered on the mean and the annual cycle, and then assess how well the variability in higher frequency bands is recovered. We evaluate the effectiveness of frequency dependent nudging in comparison to conventional nudging and find significant improvements with the former.
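An offline caricature of the frequency-dependent idea can be written with an FFT: remove the model-minus-observation misfit only in selected frequency bins (the mean and the annual cycle), leaving all other frequencies untouched. The actual method of Thompson et al. applies the band-limited nudging online inside the running model; the filter below is only a sketch of the frequency selectivity, with invented signal components.

```python
import numpy as np

def frequency_dependent_nudge(model, obs, keep_bins):
    """Offline sketch: subtract the model-obs misfit only in the given
    rfft frequency bins, so biases in the mean/annual cycle are removed
    while higher-frequency variability is preserved."""
    misfit = np.fft.rfft(model - obs)
    mask = np.zeros_like(misfit)
    mask[list(keep_bins)] = 1.0
    return model - np.fft.irfft(misfit * mask, n=len(model))
```

Conventional nudging would damp the high-frequency signal along with the bias; restricting the correction to the mean and annual bins is what avoids the artificial phase shifts mentioned above.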

  20. Stronger steerability criterion for more uncertain continuous-variable systems

    NASA Astrophysics Data System (ADS)

    Chowdhury, Priyanka; Pramanik, Tanumoy; Majumdar, A. S.

    2015-10-01

    We derive a fine-grained uncertainty relation for the measurement of two incompatible observables on a single quantum system of continuous variables, and show that continuous-variable systems are more uncertain than discrete-variable systems. Using the derived fine-grained uncertainty relation, we formulate a stronger steering criterion that is able to reveal the steerability of NOON states that has hitherto not been possible using other criteria. We further obtain a monogamy relation for our steering inequality which leads to an, in principle, improved lower bound on the secret key rate of a one-sided device independent quantum key distribution protocol for continuous variables.

  1. Robustness of quantum key distribution with discrete and continuous variables to channel noise

    NASA Astrophysics Data System (ADS)

    Lasota, Mikołaj; Filip, Radim; Usenko, Vladyslav C.

    2017-06-01

    We study the robustness of quantum key distribution protocols using discrete or continuous variables to channel noise. We introduce a model of such noise, based on coupling of the signal to a thermal reservoir and typical for continuous-variable quantum key distribution, to the discrete-variable case. We then compare the bounds on the tolerable channel noise between the two kinds of protocols using the same noise parametrization, assuming an implementation that is otherwise perfect. The results show that continuous-variable protocols can exhibit similar robustness to channel noise when the transmittance of the channel is relatively high. However, for strong loss, discrete-variable protocols are superior and can overcome even the infinite-squeezing continuous-variable protocol while using limited nonclassical resources. The single-photon production probability that a practical photon source would have to achieve in order to demonstrate such superiority is feasible, thanks to the recent rapid development in this field.

  2. Investigation of Coastal Hydrogeology Utilizing Geophysical and Geochemical Tools along the Broward County Coast, Florida

    USGS Publications Warehouse

    Reich, Christopher D.; Swarzenski, Peter W.; Greenwood, W. Jason; Wiese, Dana S.

    2008-01-01

    Geophysical (CHIRP, boomer, and continuous direct-current resistivity) and geochemical tracer studies (continuous and time-series 222Radon) were conducted along the Broward County coast from Port Everglades to Hillsboro Inlet, Florida. Simultaneous seismic, direct-current resistivity, and radon surveys in the coastal waters provided information to characterize the geologic framework and identify potential groundwater-discharge sites. Time-series radon at the Nova Southeastern University National Coral Reef Institute (NSU/NCRI) seawall indicated a very strong tidally modulated discharge of groundwater, with 222Rn activities ranging from 4 to 10 disintegrations per minute per liter depending on tidal stage. CHIRP seismic data provided very detailed bottom profiles (i.e., bathymetry); however, acoustic penetration was poor and resulted in no observed subsurface geologic structure. Boomer data, on the other hand, showed features that are indicative of karst, antecedent topography (buried reefs), and sand-filled troughs. Continuous resistivity profiling (CRP) data showed slight variability in the subsurface along the coast. Subtle changes in subsurface resistivity between nearshore (higher values) and offshore (lower values) profiles may indicate either a freshening of subsurface water nearshore or a change in sediment porosity or lithology. Further lithologic and hydrologic controls from sediment or rock cores or well data are needed to constrain the variability in CRP data.

  3. Influences of operational parameters on phosphorus removal in batch and continuous electrocoagulation process performance.

    PubMed

    Nguyen, Dinh Duc; Yoon, Yong Soo; Bui, Xuan Thanh; Kim, Sung Su; Chang, Soon Woong; Guo, Wenshan; Ngo, Huu Hao

    2017-11-01

    Performance of an electrocoagulation (EC) process in batch and continuous operating modes was thoroughly investigated and evaluated for enhancing wastewater phosphorus removal under various operating conditions, varied individually or in combination: initial phosphorus concentration, wastewater conductivity, current density, and electrolysis time. The results revealed excellent phosphorus removal (72.7-100%) for both processes within 3-6 min of electrolysis, with relatively low energy requirements, i.e., less than 0.5 kWh/m3 of treated wastewater. However, within the scope of the study, phosphorus removal efficiency was better in the continuous EC operating mode than in batch mode. Additionally, the rate and efficiency of phosphorus removal strongly depended on the operational parameters, including wastewater conductivity, initial phosphorus concentration, current density, and electrolysis time. Based on the experimental data, a statistically verified response surface methodology (RSM) model (multiple-factor optimization) was also established to provide further insight and accurately describe the interactive relationships between the process variables, thus optimizing EC process performance. The EC process using iron electrodes is promising for improving wastewater phosphorus removal efficiency, and RSM can be a sustainable tool for predicting the performance of the EC process and explaining the influence of the process variables.
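    Response surface methodology of the kind described above typically fits a second-order polynomial in the operating variables. A hedged sketch using synthetic data (the design levels and coefficients below are invented for illustration; the paper's actual data and fitted model are not reproduced here):

```python
import numpy as np

def design_matrix(X):
    """Second-order response-surface terms: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# Hypothetical 3x3 design: current density (A/m^2) and electrolysis
# time (min).  Levels are illustrative placeholders.
levels_cd = [20.0, 30.0, 40.0]
levels_t = [3.0, 4.5, 6.0]
X = np.array([[cd, t] for cd in levels_cd for t in levels_t])

# Synthetic removal efficiencies generated from a known quadratic
# surface, so the least-squares fit can be checked against the truth.
true_beta = np.array([40.0, 1.0, 6.0, 0.05, -0.01, -0.4])
y = design_matrix(X) @ true_beta

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
```

On a full factorial grid the six quadratic terms are linearly independent, so the fit recovers the generating coefficients exactly; with real, noisy data the same solve yields the RSM coefficient estimates.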

  4. Method and apparatus for executing an asynchronous clutch-to-clutch shift in a hybrid transmission

    DOEpatents

    Demirovic, Besim; Gupta, Pinaki; Kaminsky, Lawrence A.; Naqvi, Ali K.; Heap, Anthony H.; Sah, Jy-Jen F.

    2014-08-12

    A hybrid transmission includes first and second electric machines. A method for operating the hybrid transmission in response to a command to execute a shift from an initial continuously variable mode to a target continuously variable mode includes increasing the torque of an oncoming clutch associated with operating in the target continuously variable mode and correspondingly decreasing the torque of an off-going clutch associated with operating in the initial continuously variable mode. Upon deactivation of the off-going clutch, torque outputs of the first and second electric machines and the torque of the oncoming clutch are controlled to synchronize the oncoming clutch. Upon synchronization of the oncoming clutch, the torque for the oncoming clutch is increased and the transmission is operated in the target continuously variable mode.

  5. Mock juror sampling issues in jury simulation research: A meta-analysis.

    PubMed

    Bornstein, Brian H; Golding, Jonathan M; Neuschatz, Jeffrey; Kimbrough, Christopher; Reed, Krystia; Magyarics, Casey; Luecht, Katherine

    2017-02-01

    The advantages and disadvantages of jury simulation research have often been debated in the literature. Critics chiefly argue that jury simulations lack verisimilitude, particularly through their use of student mock jurors, and that this limits the generalizability of the findings. In the present article, the question of sample differences (student vs. nonstudent) in jury research was meta-analyzed for 6 dependent variables: 3 criminal (guilty verdicts, culpability, and sentencing) and 3 civil (liability verdicts, continuous liability, and damages). In total, 53 studies (N = 17,716) were included in the analysis (40 criminal and 13 civil). The results revealed that guilty verdicts, culpability ratings, and damage awards did not vary with sample. Furthermore, the variables that revealed significant or marginally significant differences, sentencing and liability judgments, had small or contradictory effect sizes (e.g., effects on dichotomous and continuous liability judgments were in opposite directions). In addition, with the exception of trial presentation medium, moderator effects were small and inconsistent. These results may help to alleviate concerns regarding the use of student samples in jury simulation research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Extraversion and cardiovascular responses to recurrent social stress: Effect of stress intensity.

    PubMed

    Lü, Wei; Xing, Wanying; Hughes, Brian M; Wang, Zhenhong

    2017-10-28

    The present study sought to establish whether the effects of extraversion on cardiovascular responses to recurrent social stress are contingent on stress intensity. A 2×5×1 mixed-factorial experiment was conducted, with social stress intensity as a between-subject variable, study phase as a within-subject variable, extraversion as a continuous independent variable, and cardiovascular parameter (HR, SBP, DBP, or RSA) as a dependent variable. Extraversion (NEO-FFI), subjective stress, and physiological stress were measured in 166 undergraduate students randomly assigned to undergo moderate (n=82) or high-intensity (n=84) social stress (a public speaking task with different levels of social evaluation). All participants underwent continuous physiological monitoring while facing two consecutive stress exposures distributed across five laboratory phases: baseline, stress exposure 1, post-stress 1, stress exposure 2, post-stress 2. Results indicated that under moderate-intensity social stress, participants higher on extraversion exhibited lower HR reactivity to stress than participants lower on extraversion, while under high-intensity social stress they exhibited greater HR, SBP, DBP, and RSA reactivity. Under both moderate- and high-intensity social stress, participants higher on extraversion exhibited pronounced SBP and DBP response adaptation to repeated stress and showed either a greater degree of HR recovery or a greater amount of SBP and DBP recovery after stress. These findings suggest that individuals higher on extraversion exhibit physiological flexibility to cope with social challenges and benefit from adaptive cardiovascular responses. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. An Entropy-Based Measure of Dependence between Two Groups of Random Variables. Research Report. ETS RR-07-20

    ERIC Educational Resources Information Center

    Kong, Nan

    2007-01-01

    In multivariate statistics, the linear relationship among random variables has been fully explored in the past. This paper looks into the dependence of one group of random variables on another group of random variables using (conditional) entropy. A new measure, called the K-dependence coefficient or dependence coefficient, is defined using…
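    As a toy illustration of measuring dependence with (conditional) entropy, the sketch below computes the share of one discrete variable's entropy that is explained by another. This is a generic entropy-based measure for illustration only, not Kong's K-dependence coefficient, whose definition is truncated in the abstract above:

```python
import math
from collections import Counter

def entropy(sample):
    """Shannon entropy (bits) of a discrete sample."""
    n = len(sample)
    return -sum((c / n) * math.log2(c / n) for c in Counter(sample).values())

def dependence(x, y):
    """Share of H(Y) explained by X: (H(Y) - H(Y|X)) / H(Y).

    Uses the identity H(Y|X) = H(X, Y) - H(X).  Ranges from
    0 (y independent of x) to 1 (y is a function of x).
    """
    h_y = entropy(y)
    h_y_given_x = entropy(list(zip(x, y))) - entropy(x)
    return (h_y - h_y_given_x) / h_y

x     = [0, 0, 1, 1, 2, 2]
y_dep = [0, 0, 1, 1, 2, 2]   # fully determined by x
y_ind = [0, 1, 0, 1, 0, 1]   # carries no information about x
```

Unlike a correlation coefficient, such an entropy-based measure captures any functional dependence, not only linear relationships.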

  8. Continuous variables logic via coupled automata using a DNAzyme cascade with feedback

    PubMed Central

    Lilienthal, S.; Klein, M.; Orbach, R.; Willner, I.; Remacle, F.

    2017-01-01

    The concentration of molecules can be changed by chemical reactions, thereby offering a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. We discuss how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process. The logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with a feedback we endow the logic gates with a built-in memory, because their output then depends on the input and also on the present state of the system. Technically, such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series. PMID:28507669

  9. Toward a Model-Based Approach to the Clinical Assessment of Personality Psychopathology

    PubMed Central

    Eaton, Nicholas R.; Krueger, Robert F.; Docherty, Anna R.; Sponheim, Scott R.

    2015-01-01

    Recent years have witnessed tremendous growth in the scope and sophistication of statistical methods available to explore the latent structure of psychopathology, involving continuous, discrete, and hybrid latent variables. The availability of such methods has fostered optimism that they can facilitate movement from classification primarily crafted through expert consensus to classification derived from empirically-based models of psychopathological variation. The explication of diagnostic constructs with empirically supported structures can then facilitate the development of assessment tools that appropriately characterize these constructs. Our goal in this paper is to illustrate how new statistical methods can inform conceptualization of personality psychopathology and therefore its assessment. We use magical thinking as an example, because both theory and earlier empirical work suggested the possibility of discrete aspects to the latent structure of personality psychopathology, particularly forms of psychopathology involving distortions of reality testing, yet other data suggest that personality psychopathology is generally continuous in nature. For explanatory purposes, we directly compared the fit of a variety of latent variable models to magical thinking data from a sample enriched with clinically significant variation in psychotic symptomatology. Findings generally suggested that a continuous latent variable model best represented magical thinking, but results varied somewhat depending on different indices of model fit. We discuss the implications of the findings for classification and applied personality assessment. We also highlight some limitations of this type of approach that are illustrated by these data, including the importance of substantive interpretation, in addition to use of model fit indices, when evaluating competing structural models. PMID:24007309

  10. Micromechanisms of fatigue crack growth in polycarbonate polyurethane: Time dependent and hydration effects.

    PubMed

    Ford, Audrey C; Gramling, Hannah; Li, Samuel C; Sov, Jessica V; Srinivasan, Amrita; Pruitt, Lisa A

    2018-03-01

    Polycarbonate polyurethane has cartilage-like, hygroscopic, and elastomeric properties that make it an attractive material for orthopedic joint replacement applications. However, little data exists on the cyclic loading and fracture behavior of polycarbonate polyurethane. This study investigates the mechanisms of fatigue crack growth in polycarbonate polyurethane with respect to time-dependent effects and conditioning. We studied two commercially available polycarbonate polyurethanes, Bionate® 75D and 80A. Tension testing was performed on specimens at variable time points after removal from hydration and at variable strain rates. Fatigue crack propagation was characterized for three aspects of loading. Study 1 investigated the impact of continuous loading (24 h/day) versus intermittent loading (8-10 h/day) allowing for relaxation overnight. Study 2 evaluated the effect of frequency, and study 3 examined the impact of hydration on fatigue crack propagation in polycarbonate polyurethane. Samples loaded intermittently failed instantaneously and prematurely upon reloading, while samples loaded continuously sustained longer stable cracks. Crack growth for samples tested at 2 and 5 Hz was largely planar with little crack deflection. However, samples tested at 10 Hz showed high degrees of crack tip deflection and multiple crack fronts. Crack growth in hydrated samples proceeded with much greater ductile crack mouth opening displacement than in dry samples. An understanding of the failure mechanisms of this polymer is important to assess the long-term structural integrity of this material for use in load-bearing orthopedic implant applications. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Long-distance continuous-variable quantum key distribution by controlling excess noise

    NASA Astrophysics Data System (ADS)

    Huang, Duan; Huang, Peng; Lin, Dakai; Zeng, Guihua

    2016-01-01

    Quantum cryptography founded on the laws of physics could revolutionize the way in which communication information is protected. Significant progress in long-distance quantum key distribution based on discrete variables has made secure quantum communication available in real-world conditions. However, the alternative approach implemented with continuous variables has not yet reached secure distances beyond 100 km. Here, we overcome the previous range limitation by controlling system excess noise and report such a long-distance continuous-variable quantum key distribution experiment. Our result paves the way to large-scale secure quantum communication with continuous variables and serves as a stepping stone in the quest for a quantum network.

  12. Long-distance continuous-variable quantum key distribution by controlling excess noise.

    PubMed

    Huang, Duan; Huang, Peng; Lin, Dakai; Zeng, Guihua

    2016-01-13

    Quantum cryptography founded on the laws of physics could revolutionize the way in which communication information is protected. Significant progress in long-distance quantum key distribution based on discrete variables has made secure quantum communication available in real-world conditions. However, the alternative approach implemented with continuous variables has not yet reached secure distances beyond 100 km. Here, we overcome the previous range limitation by controlling system excess noise and report such a long-distance continuous-variable quantum key distribution experiment. Our result paves the way to large-scale secure quantum communication with continuous variables and serves as a stepping stone in the quest for a quantum network.

  13. Long-distance continuous-variable quantum key distribution by controlling excess noise

    PubMed Central

    Huang, Duan; Huang, Peng; Lin, Dakai; Zeng, Guihua

    2016-01-01

    Quantum cryptography founded on the laws of physics could revolutionize the way in which communication information is protected. Significant progress in long-distance quantum key distribution based on discrete variables has made secure quantum communication available in real-world conditions. However, the alternative approach implemented with continuous variables has not yet reached secure distances beyond 100 km. Here, we overcome the previous range limitation by controlling system excess noise and report such a long-distance continuous-variable quantum key distribution experiment. Our result paves the way to large-scale secure quantum communication with continuous variables and serves as a stepping stone in the quest for a quantum network. PMID:26758727

  14. The Integration of Continuous and Discrete Latent Variable Models: Potential Problems and Promising Opportunities

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Curran, Patrick J.

    2004-01-01

    Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model,…

  15. Grouped comparisons of sleep quality for new and personal bedding systems.

    PubMed

    Jacobson, Bert H; Wallace, Tia J; Smith, Doug B; Kolb, Tanner

    2008-03-01

    The purpose of this study was to compare sleep comfort and quality between personal and new bedding systems. A convenience sample (women, n=33; men, n=29) with no clinical history of disturbed sleep participated in the study. Subjects recorded back and shoulder pain, sleep quality, comfort, and efficiency for 28 days each in their personal beds (pre) and in new medium-firm bedding systems (post). Repeated measures ANOVAs revealed significant improvement between pre- and post-test means for all dependent variables. Furthermore, reduction of pain and stiffness and improvement of sleep comfort and quality became more prominent over time. No significant differences were found for the groupings of age, weight, height, or body mass index. It was found that for the cheapest category of beds, lower back pain was significantly (p<0.01) more prominent than for the medium and higher priced beds. Average bed age was 9.5 years. It was concluded that new bedding systems can significantly improve selected sleep variables and that continued sleep quality may be dependent on timely replacement of bedding systems.

  16. Violation of Bell's Inequality Using Continuous Variable Measurements

    NASA Astrophysics Data System (ADS)

    Thearle, Oliver; Janousek, Jiri; Armstrong, Seiji; Hosseini, Sara; Schünemann Mraz, Melanie; Assad, Syed; Symul, Thomas; James, Matthew R.; Huntington, Elanor; Ralph, Timothy C.; Lam, Ping Koy

    2018-01-01

    A Bell inequality is a fundamental test to rule out local hidden variable model descriptions of correlations between two physically separated systems. There have been a number of experiments in which a Bell inequality has been violated using discrete-variable systems. We demonstrate a violation of Bell's inequality using continuous-variable quadrature measurements. By creating a four-mode entangled state with homodyne detection, we recorded a clear violation with a Bell value of B = 2.31 ± 0.02. This opens new possibilities for using continuous-variable states for device-independent quantum protocols.

  17. Temperature and field-dependent transport measurements in continuously tunable tantalum oxide memristors expose the dominant state variable

    NASA Astrophysics Data System (ADS)

    Graves, Catherine E.; Dávila, Noraica; Merced-Grafals, Emmanuelle J.; Lam, Si-Ty; Strachan, John Paul; Williams, R. Stanley

    2017-03-01

    Applications of memristor devices are quickly moving beyond computer memory to areas of analog and neuromorphic computation. These applications require the design of devices with different characteristics from binary memory, such as a large tunable range of conductance. A complete understanding of the conduction mechanisms and their corresponding state variable(s) is crucial for optimizing performance and designs in these applications. Here we present measurements of low-bias I-V characteristics of 6 states in a Ta/tantalum-oxide (TaOx)/Pt memristor spanning over 2 orders of magnitude in conductance and temperatures from 100 K to 500 K. Our measurements show that the 300 K device conduction is dominated by a temperature-insensitive current that varies with non-volatile memristor state, with an additional leakage contribution from a thermally-activated current channel that is nearly independent of the memristor state. We interpret these results with a parallel conduction model of Mott hopping and Schottky emission channels, fitting the voltage- and temperature-dependent experimental data for all memristor states with only two free parameters. The memristor conductance is linearly correlated with N, the density of electrons near EF participating in the Mott hopping conduction, revealing N to be the dominant state variable for low-bias conduction in this system. Finally, we show that the Mott hopping sites can be ascribed to oxygen vacancies, where the local oxygen vacancy density responsible for critical hopping pathways controls the memristor conductance.

  18. Discrete-continuous variable structural synthesis using dual methods

    NASA Technical Reports Server (NTRS)

    Schmit, L. A.; Fleury, C.

    1980-01-01

    Approximation concepts and dual methods are extended to solve structural synthesis problems involving a mix of discrete and continuous sizing type of design variables. Pure discrete and pure continuous variable problems can be handled as special cases. The basic mathematical programming statement of the structural synthesis problem is converted into a sequence of explicit approximate primal problems of separable form. These problems are solved by constructing continuous explicit dual functions, which are maximized subject to simple nonnegativity constraints on the dual variables. A newly devised gradient projection type of algorithm called DUAL 1, which includes special features for handling dual function gradient discontinuities that arise from the discrete primal variables, is used to find the solution of each dual problem. Computational implementation is accomplished by incorporating the DUAL 1 algorithm into the ACCESS 3 program as a new optimizer option. The power of the method set forth is demonstrated by presenting numerical results for several example problems, including a pure discrete variable treatment of a metallic swept wing and a mixed discrete-continuous variable solution for a thin delta wing with fiber composite skins.

  19. Validity of a Residualized Dependent Variable after Pretest Covariance Adjustments: Still the Same Variable?

    ERIC Educational Resources Information Center

    Nimon, Kim; Henson, Robin K.

    2015-01-01

    The authors empirically examined whether the validity of a residualized dependent variable after covariance adjustment is comparable to that of the original variable of interest. When variance of a dependent variable is removed as a result of one or more covariates, the residual variance may not reflect the same meaning. Using the pretest-posttest…

  20. Students’ Covariational Reasoning in Solving Integrals’ Problems

    NASA Astrophysics Data System (ADS)

    Harini, N. V.; Fuad, Y.; Ekawati, R.

    2018-01-01

    Covariational reasoning plays an important role in recognizing how quantities vary in learning calculus. This study investigates students’ covariational reasoning concerning two covarying quantities in integral problems. Six undergraduate students were chosen to solve problems that involved interpreting and representing how quantities change in tandem. Interviews were conducted to reveal the students’ reasoning while solving covariational problems. The results emphasize that the undergraduate students were able to construct the relation between dependent variables and the independent variable with which they change in tandem. However, students faced difficulty in forming images of continuously changing rates and could not accurately apply the concept of integrals. These findings suggest that calculus instruction should place increased emphasis on coordinating images of two quantities changing in tandem, on instantaneous rate of change, and on promoting conceptual knowledge of integration techniques.

  1. Method for curing polymers using variable-frequency microwave heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lauf, R.J.; Bible, D.W.; Paulauskas, F.L.

    1998-02-24

    A method for curing polymers incorporating a variable frequency microwave furnace system designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity is disclosed. By varying the frequency of the microwave signal, non-uniformities within the cavity are minimized, thereby achieving a more uniform cure throughout the workpiece. A directional coupler is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter is provided for measuring the power delivered to the microwave furnace. A second power meter detects the magnitude of reflected power. The furnace cavity may be adapted to cure materials defining a continuous sheet or which require compressive forces during curing. 15 figs.

  2. Variable-Domain Functional Regression for Modeling ICU Data.

    PubMed

    Gellar, Jonathan E; Colantuoni, Elizabeth; Needham, Dale M; Crainiceanu, Ciprian M

    2014-12-01

    We introduce a class of scalar-on-function regression models with subject-specific functional predictor domains. The fundamental idea is to consider a bivariate functional parameter that depends both on the functional argument and on the width of the functional predictor domain. Both parametric and nonparametric models are introduced to fit the functional coefficient. The nonparametric model is theoretically and practically invariant to functional support transformation, or support registration. Methods were motivated by and applied to a study of association between daily measures of the Intensive Care Unit (ICU) Sequential Organ Failure Assessment (SOFA) score and two outcomes: in-hospital mortality, and physical impairment at hospital discharge among survivors. Methods are generally applicable to a large number of new studies that record continuous variables over unequal domains.

  3. Method for curing polymers using variable-frequency microwave heating

    DOEpatents

    Lauf, Robert J.; Bible, Don W.; Paulauskas, Felix L.

    1998-01-01

    A method for curing polymers (11) incorporating a variable frequency microwave furnace system (10) designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity (34). By varying the frequency of the microwave signal, non-uniformities within the cavity (34) are minimized, thereby achieving a more uniform cure throughout the workpiece (36). A directional coupler (24) is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter (30) is provided for measuring the power delivered to the microwave furnace (32). A second power meter (26) detects the magnitude of reflected power. The furnace cavity (34) may be adapted to be used to cure materials defining a continuous sheet or which require compressive forces during curing.

  4. Integrating models that depend on variable data

    NASA Astrophysics Data System (ADS)

    Banks, A. T.; Hill, M. C.

    2016-12-01

    Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent-variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable to observations. Many metrics and optimization methods have been proposed to address dependent-variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent-variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log transformations can be a black box for typical users. Placing the log transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent-variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
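    The error-based weighting discussed above amounts to weighted least squares with inverse-variance weights. A minimal sketch on synthetic heteroscedastic data (the values and error model are assumptions for illustration, not the nitrogen-transport data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heteroscedastic data: each observation's standard
# deviation is proportional to its value (constant coefficient of
# variation), as criticized in the abstract.
n = 1000
x = rng.uniform(1.0, 10.0, n)
mean_y = 2.0 + 5.0 * x
sigma = 0.1 * mean_y                 # known error model in this example
y = mean_y + sigma * rng.normal(size=n)

# Error-based weighting: weights are inverse variances, w = 1/sigma^2.
# With sigma proportional to the dependent variable, w ~ 1/y^2, which
# places much larger weight on the small observations.  (The log
# transformation instead assumes multiplicative, log-normal errors and
# fits log(y) by ordinary least squares.)
A = np.column_stack([np.ones(n), x])
sw = 1.0 / sigma                      # square roots of the weights
coef, *_ = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)
```

Because the standard deviations span the same range as the data, the weights span the squared range, which is why a constant coefficient of variation emphasizes the smallest observations so strongly.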

  5. Predicting continued participation in college chemistry for men and women

    NASA Astrophysics Data System (ADS)

    Deboer, George E.

    The purpose of this study was to test the effectiveness of a cognitive motivational model of course selection patterns to explain the continued participation of men and women in college science courses. A number of cognitive motivational constructs were analyzed in a path model and their effect on students' intention to continue in college chemistry was determined. Variables in the model included self-perceived ability in science, future expectations, level of past success, effort expended, subjective interpretations of both past success and task difficulty, and the intention to continue in college chemistry. The results showed no sex differences in course performance, the plan to continue in chemistry, perceived ability in science, or past achievement in science courses. The path analysis did confirm the usefulness of the cognitive motivational perspective to explain the intention of both men and women to continue in science. Central to that process appears to be a person's belief about their ability. Students who had confidence in their ability in chemistry expected to do well in the future and were more likely to take more chemistry. Ability ratings in turn were dependent on a number of past achievement experiences and the personal interpretation of those experiences.

  6. Simpson's Paradox, Lord's Paradox, and Suppression Effects are the same phenomenon – the reversal paradox

    PubMed Central

    Tu, Yu-Kang; Gunnell, David; Gilthorpe, Mark S

    2008-01-01

    This article discusses three statistical paradoxes that pervade epidemiological research: Simpson's paradox, Lord's paradox, and suppression. These paradoxes have important implications for the interpretation of evidence from observational studies. This article uses hypothetical scenarios to illustrate how the three paradoxes are different manifestations of one phenomenon – the reversal paradox – depending on whether the outcome and explanatory variables are categorical, continuous or a combination of both; this renders the issues and remedies for any one to be similar for all three. Although the three statistical paradoxes occur in different types of variables, they share the same characteristic: the association between two variables can be reversed, diminished, or enhanced when another variable is statistically controlled for. Understanding the concepts and theory behind these paradoxes provides insights into some controversial or contradictory research findings. These paradoxes show that prior knowledge and underlying causal theory play an important role in the statistical modelling of epidemiological data, where incorrect use of statistical models might produce consistent, replicable, yet erroneous results. PMID:18211676

  7. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Such understanding is useful in evaluating the performance of data compression schemes.
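    The straight-line relationship for unconstrained continuous random variables can be made explicit in the p = 2 case, where the maximizer is the zero-mean Gaussian (a standard derivation in our notation, not taken from the report):

```latex
\max_f \; h(f) = -\int f(x)\ln f(x)\,dx
\quad \text{s.t.} \quad \int f(x)\,dx = 1, \qquad \int x^2 f(x)\,dx = \|X\|_2^2 .
% Lagrange multipliers give the Gaussian maximizer f^*, with
h(f^\ast) = \tfrac{1}{2}\ln\!\bigl(2\pi e\,\|X\|_2^2\bigr)
          = \ln\|X\|_2 + \tfrac{1}{2}\ln(2\pi e),
% i.e., linear with unit slope in the logarithm of the L2 norm.
```

    For general finite p the maximizer is a generalized Gaussian and the same unit-slope line holds with a p-dependent intercept.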

  8. World food trends and prospects to 2025

    PubMed Central

    Dyson, Tim

    1999-01-01

    This paper reviews food (especially cereal) production trends and prospects for the world and its main regions. Despite fears to the contrary, in recent years we have seen continued progress toward better methods of feeding humanity. Sub-Saharan Africa is the sole major exception. Looking to the future, this paper argues that the continuation of recent cereal yield trends should be sufficient to cope with most of the demographically driven expansion of cereal demand that will occur until the year 2025. However, because of an increasing degree of mismatch between the expansion of regional demand and the potential for supply, there will be a major expansion of world cereal (and noncereal food) trade. Other consequences for global agriculture arising from demographic growth include the need to use water much more efficiently and an even greater dependence on nitrogen fertilizers (e.g., South Asia). Farming everywhere will depend more on information-intensive agricultural management procedures. Moreover, despite continued general progress, there still will be a significant number of undernourished people in 2025. Signs of heightened harvest variability, especially in North America, are of serious concern. Thus, although future general food trends are likely to be positive, in some respects we also could be entering a more volatile world. PMID:10339520

  9. Effect of Temperature on Heart Rate Variability in Neonatal ICU Patients With Hypoxic-Ischemic Encephalopathy.

    PubMed

    Massaro, An N; Campbell, Heather E; Metzler, Marina; Al-Shargabi, Tareq; Wang, Yunfei; du Plessis, Adre; Govindan, Rathinaswamy B

    2017-04-01

    To determine whether measures of heart rate variability are related to changes in temperature during rewarming after therapeutic hypothermia for hypoxic-ischemic encephalopathy. Prospective observational study. Level 4 neonatal ICU in a free-standing academic children's hospital. Forty-four infants with moderate to severe hypoxic-ischemic encephalopathy treated with therapeutic hypothermia. Continuous electrocardiogram data from 2 hours prior to rewarming through 2 hours after completion of rewarming (up to 10 hr) were analyzed. Median beat-to-beat interval and measures of heart rate variability were quantified including beat-to-beat interval SD, low and high frequency relative spectral power, detrended fluctuation analysis short and long α exponents (αS and αL), and root mean square short and long time scales. The relationships between heart rate variability measures and esophageal/axillary temperatures were evaluated. Heart rate variability measures low frequency, αS, and root mean square short and long time scales were negatively associated, whereas αL was positively associated, with temperature (p < 0.01). These findings signify an overall decrease in heart rate variability as temperature increased toward normothermia. Measures of heart rate variability are temperature dependent in the range of therapeutic hypothermia to normothermia. Core body temperature needs to be considered when evaluating heart rate variability metrics as potential physiologic biomarkers of illness severity in hypoxic-ischemic encephalopathy infants undergoing therapeutic hypothermia.
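    The simpler variability measures named in the abstract can be illustrated for an RR-interval series. The sketch below is illustrative only: the 4 Hz resampling rate and the adult-standard LF/HF band edges are our assumptions (neonatal bands differ), and the study's detrended fluctuation and root-mean-square measures are omitted:

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def hrv_measures(rr_ms, fs=4.0, lf=(0.04, 0.15), hf=(0.15, 0.40)):
    """Median beat-to-beat interval, SD of beat-to-beat intervals, and
    relative LF/HF spectral power from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                     # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)        # evenly spaced resampling
    rr_even = interp1d(t, rr)(grid)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, grid.size))
    total = pxx.sum()
    rel = lambda lo, hi: pxx[(f >= lo) & (f < hi)].sum() / total
    return {"median_rr": float(np.median(rr)),
            "sdnn": float(rr.std(ddof=1)),
            "lf_rel": rel(*lf),
            "hf_rel": rel(*hf)}

# a series with a ~0.3 Hz oscillation should show mostly high-frequency power
beats = np.arange(600)
rr = 500 + 25 * np.sin(2 * np.pi * 0.3 * beats * 0.5)  # beat spacing ~0.5 s
m = hrv_measures(rr)
```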

  10. Temporal Dynamics and Persistence of Spatial Patterns: from Groundwater to Soil Moisture to Transpiration

    NASA Astrophysics Data System (ADS)

    Blume, T.; Hassler, S. K.; Weiler, M.

    2017-12-01

    Hydrological science still struggles with the fact that while we wish for spatially continuous images or movies of state variables and fluxes at the landscape scale, most of our direct measurements are point measurements. To date, regional measurements resolving landscape-scale patterns can only be obtained by remote sensing methods, with the common drawbacks that they are confined to the near surface of the Earth and that temporal resolution is generally low. However, distributed monitoring networks at the landscape scale provide the opportunity for detailed and time-continuous pattern exploration. Even though measurements are spatially discontinuous, the large number of sampling points and experimental setups specifically designed for the purpose of landscape pattern investigation open up new avenues of regional hydrological analyses. The CAOS hydrological observatory in Luxembourg offers a unique setup to investigate questions of temporal stability, pattern evolution and persistence of certain states. The experimental setup consists of 45 sensor clusters. These sensor clusters cover three different geologies, two land use classes, five different landscape positions, and contrasting aspects. At each of these sensor clusters three soil moisture/soil temperature profiles, basic climate variables, sapflow, shallow groundwater, and stream water levels were measured continuously for the past 4 years. We will focus on characteristic landscape patterns of various hydrological state variables and fluxes, studying their temporal stability on the one hand and the dependence of patterns on hydrological states on the other hand (e.g. wet vs dry). This is extended to time-continuous pattern analysis based on time series of spatial rank correlation coefficients. Analyses focus on the absolute values of soil moisture, soil temperature, groundwater levels and sapflow, but also investigate the spatial pattern of the daily changes of these variables.
The analysis aims at identifying hydrologic signatures of the processes or landscape characteristics acting as major controls. While groundwater, soil water and transpiration are closely linked by the water cycle, they are controlled by different processes and we expect this to be reflected in interlinked but not necessarily congruent patterns and responses.
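    The time series of spatial rank correlation coefficients mentioned above can be sketched as Spearman correlations between each sampling date's spatial pattern and a reference pattern (a minimal sketch; the variable names and the choice of the time-mean pattern as reference are ours, not the observatory's processing chain):

```python
import numpy as np
from scipy.stats import spearmanr

def pattern_persistence(values):
    """values: array (n_times, n_sites), e.g. daily soil moisture at the
    sensor clusters. Returns the Spearman rank correlation between each
    time step's spatial pattern and the time-mean spatial pattern."""
    ref = values.mean(axis=0)
    return np.array([spearmanr(ref, v)[0] for v in values])

# a persistent pattern (stable ranking of sites) yields correlations near 1
rng = np.random.default_rng(0)
base = np.linspace(0.1, 0.4, 20)                  # 20 sites, fixed wetness ranking
series = base + rng.normal(0, 0.01, (100, 20))    # 100 days, small noise
rho = pattern_persistence(series)
```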

  11. In operando neutron diffraction study of the temperature and current rate-dependent phase evolution of LiFePO4 in a commercial battery

    NASA Astrophysics Data System (ADS)

    Sharma, N.; Yu, D. H.; Zhu, Y.; Wu, Y.; Peterson, V. K.

    2017-02-01

    In operando NPD data of electrodes in lithium-ion batteries reveal unusual LiFePO4 phase evolution after the application of a thermal step and at high current. At low current under ambient conditions the LiFePO4 to FePO4 two-phase reaction occurs during the charge process, however, following a thermal step and at higher current this reaction appears at the end of charge and continues into the next electrochemical step. The same behavior is observed for the FePO4 to LiFePO4 transition, occurring at the end of discharge and continuing into the following electrochemical step. This suggests that the bulk (or the majority of the) electrode transformation is dependent on the battery's history, current, or temperature. Such information concerning the non-equilibrium evolution of an electrode allows a direct link between the electrode's functional mechanism that underpins lithium-ion battery behavior and the real-life operating conditions of the battery, such as variable temperature and current, to be made.

  12. A Geometrical Framework for Covariance Matrices of Continuous and Categorical Variables

    ERIC Educational Resources Information Center

    Vernizzi, Graziano; Nakai, Miki

    2015-01-01

    It is well known that a categorical random variable can be represented geometrically by a simplex. Accordingly, several measures of association between categorical variables have been proposed and discussed in the literature. Moreover, the standard definitions of covariance and correlation coefficient for continuous random variables have been…

  13. Continuous-Variable Triple-Photon States Quantum Entanglement

    NASA Astrophysics Data System (ADS)

    González, E. A. Rojas; Borne, A.; Boulanger, B.; Levenson, J. A.; Bencheikh, K.

    2018-01-01

    We investigate the quantum entanglement of the three modes associated with the three-photon states obtained by triple-photon generation in a phase-matched third-order nonlinear optical interaction. Although the second-order processes have been extensively dealt with, there is no direct analogy between the second and third-order mechanisms. We show, for example, the absence of quantum entanglement between the quadratures of the three modes in the case of spontaneous parametric triple-photon generation. However, we show robust, seeding-dependent, genuine triple-photon entanglement in the fully seeded case.

  14. Reduced Order Models Via Continued Fractions Applied to Control Systems,

    DTIC Science & Technology

    1980-09-01

    a simple model of a nuclear reactor power generator [20, 21]. The heat-generating process of a nuclear reactor is dependent upon the mechanism called fission (a fragmentation of matter). The power generated by this process is directly related to the population of neutrons, n(t), whose dynamics are coupled to a second state variable c(t) through the model's kinetics equations (150)-(151). The variable δk(t), defined in Eq. (152), is the input to the process and is given the name "reactivity".
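    For context, a reactor power model of this kind is commonly the one-group point-kinetics system. The sketch below integrates the standard textbook form (an assumption on our part; the report's exact equations (150)-(152) and parameter values are not recoverable from this extract), with the reactivity δk(t) as the input:

```python
import numpy as np

# one-group point kinetics: n = relative power, c = delayed-neutron precursors
beta, Lam, lam = 0.0065, 1e-4, 0.08  # delayed fraction, generation time (s), decay (1/s)
dt, steps = 1e-4, 20000              # forward-Euler step and count (2 s total)

n, c = 1.0, beta / (Lam * lam)       # start at equilibrium for n = 1
dk = 0.001                           # small step reactivity input delta-k(t)
for _ in range(steps):
    dn = ((dk - beta) / Lam) * n + lam * c   # dn/dt = ((dk - beta)/Lam) n + lam c
    dc = (beta / Lam) * n - lam * c          # dc/dt = (beta/Lam) n - lam c
    n += dt * dn
    c += dt * dc
```

    A positive reactivity step produces the familiar prompt jump in n followed by a slow rise on the stable reactor period, the kind of stiff two-timescale response that motivates reduced-order models.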

  15. A Survey of Power Electronics Applications in Aerospace Technologies

    NASA Technical Reports Server (NTRS)

    Kankam, M. David; Elbuluk, Malik E.

    2001-01-01

    The insertion of power electronics in aerospace technologies is becoming widespread. The application of semiconductor devices and electronic converters, as summarized in this paper, includes the International Space Station, satellite power system, and motor drives in 'more electric' technology applied to aircraft, starter/generators and reusable launch vehicles. Flywheels, servo systems embodying electromechanical actuation, and spacecraft on-board electric propulsion are discussed. Continued inroads by power electronics depend on resolving the incompatibility of variable-frequency power with 400 Hz-operated aircraft equipment. Dual-use electronic modules should reduce system development cost.

  16. Continuous-Variable Triple-Photon States Quantum Entanglement.

    PubMed

    González, E A Rojas; Borne, A; Boulanger, B; Levenson, J A; Bencheikh, K

    2018-01-26

    We investigate the quantum entanglement of the three modes associated with the three-photon states obtained by triple-photon generation in a phase-matched third-order nonlinear optical interaction. Although the second-order processes have been extensively dealt with, there is no direct analogy between the second and third-order mechanisms. We show, for example, the absence of quantum entanglement between the quadratures of the three modes in the case of spontaneous parametric triple-photon generation. However, we show robust, seeding-dependent, genuine triple-photon entanglement in the fully seeded case.

  17. Mixture Factor Analysis for Approximating a Nonnormally Distributed Continuous Latent Factor with Continuous and Dichotomous Observed Variables

    ERIC Educational Resources Information Center

    Wall, Melanie M.; Guo, Jia; Amemiya, Yasuo

    2012-01-01

    Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus…

  18. Variable Coupling Scheme for High Frequency Electron Spin Resonance Resonators Using Asymmetric Meshes

    PubMed Central

    Tipikin, D. S.; Earle, K. A.; Freed, J. H.

    2010-01-01

    The sensitivity of a high frequency electron spin resonance (ESR) spectrometer depends strongly on the structure used to couple the incident millimeter wave to the sample that generates the ESR signal. Subsequent coupling of the ESR signal to the detection arm of the spectrometer is also a crucial consideration for achieving high spectrometer sensitivity. In previous work, we found that a means for continuously varying the coupling was necessary for attaining high sensitivity reliably and reproducibly. We report here on a novel asymmetric mesh structure that achieves continuously variable coupling by rotating the mesh in its own plane about the millimeter wave transmission line optical axis. We quantify the performance of this device with nitroxide spin-label spectra in both a lossy aqueous solution and a low loss solid state system. These two systems have very different coupling requirements and are representative of the range of coupling achievable with this technique. Lossy systems in particular are a demanding test of the achievable sensitivity and allow us to assess the suitability of this approach for applying high frequency ESR to the study of biological systems at physiological conditions, for example. The variable coupling technique reported on here allows us to readily achieve a factor of ca. 7 improvement in signal to noise at 170 GHz and a factor of ca. 5 at 95 GHz over what has previously been reported for lossy samples. PMID:20458356

  19. Optimization of controlled release nanoparticle formulation of verapamil hydrochloride using artificial neural networks with genetic algorithm and response surface methodology.

    PubMed

    Li, Yongqiang; Abbaspour, Mohammadreza R; Grootendorst, Paul V; Rauth, Andrew M; Wu, Xiao Yu

    2015-08-01

    This study was performed to optimize the formulation of polymer-lipid hybrid nanoparticles (PLN) for the delivery of an ionic water-soluble drug, verapamil hydrochloride (VRP) and to investigate the roles of formulation factors. Modeling and optimization were conducted based on a spherical central composite design. Three formulation factors, i.e., weight ratio of drug to lipid (X1), and concentrations of Tween 80 (X2) and Pluronic F68 (X3), were chosen as independent variables. Drug loading efficiency (Y1) and mean particle size (Y2) of PLN were selected as dependent variables. The predictive performance of artificial neural networks (ANN) and the response surface methodology (RSM) were compared. As ANN was found to exhibit better recognition and generalization capability over RSM, multi-objective optimization of PLN was then conducted based upon the validated ANN models and continuous genetic algorithms (GA). The optimal PLN possess a high drug loading efficiency (92.4%, w/w) and a small mean particle size (∼100 nm). The predicted response variables matched well with the observed results. The three formulation factors exhibited different effects on the properties of PLN. ANN in coordination with continuous GA represent an effective and efficient approach to optimize the PLN formulation of VRP with desired properties. Copyright © 2015 Elsevier B.V. All rights reserved.
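    The ANN-plus-continuous-GA loop can be sketched as follows. Everything here is a stand-in: the response surfaces, network size, GA settings, and the scalarized objective are our assumptions, not the study's central-composite data or models:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# synthetic stand-in for the design: three scaled factors X1 (drug/lipid ratio),
# X2 (Tween 80), X3 (Pluronic F68) in [0, 1], with two hypothetical responses
X = rng.uniform(0, 1, (60, 3))
loading = 90 - 30 * (X[:, 0] - 0.3) ** 2 - 10 * X[:, 2] + rng.normal(0, 1, 60)
size = 100 + 80 * X[:, 0] - 40 * X[:, 1] + rng.normal(0, 2, 60)
Y = np.column_stack([loading, size])

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, Y)

def fitness(pop):
    """Scalarized multi-objective: maximize loading, penalize particle size."""
    pred = ann.predict(pop)
    return pred[:, 0] - 0.2 * pred[:, 1]

# continuous GA: elitist selection + blend crossover + Gaussian mutation
pop = rng.uniform(0, 1, (40, 3))
for _ in range(60):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-20:]]             # keep the best half
    i, j = rng.integers(0, 20, (2, 40))
    a = rng.uniform(0, 1, (40, 1))
    pop = a * parents[i] + (1 - a) * parents[j]    # blend crossover
    pop += rng.normal(0, 0.05, pop.shape)          # mutation
    pop = np.clip(pop, 0, 1)

best = pop[np.argmax(fitness(pop))]
```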

  20. The solution of three-variable duct-flow equations

    NASA Technical Reports Server (NTRS)

    Stuart, A. R.; Hetherington, R.

    1974-01-01

    This paper establishes a numerical method for the solution of three-variable problems and is applied here to rotational flows through ducts of various cross sections. An iterative scheme is developed, the main feature of which is the addition of a duplicate variable to the forward component of velocity. Two forward components of velocity result from integrating two sets of first order ordinary differential equations for the streamline curvatures, in intersecting directions across the duct. Two pseudo-continuity equations are introduced with source/sink terms, whose strengths are dependent on the difference between the forward components of velocity. When convergence is obtained, the two forward components of velocity are identical, the source/sink terms are zero, and the original equations are satisfied. A computer program solves the exact equations and boundary conditions numerically. The method is economical and compares successfully with experiments on bent ducts of circular and rectangular cross section where secondary flows are caused by gradients of total pressure upstream.

  1. MONET, HET and SALT and asteroseismological observations and theory in Göttingen

    NASA Astrophysics Data System (ADS)

    Schuh, S.; Hessman, F. V.; Dreizler, S.; Kollatschny, W.; Glatzel, W.

    2007-06-01

    The Göttingen stellar astrophysics group, headed by Stefan Dreizler, conducts research on extrasolar planets and their host stars, on lower-main sequence stars, and on evolved compact objects, in particular hot white dwarfs (including PG 1159 objects, magnetic WDs and cataclysmic variables), and subdwarf B stars. In addition to sophisticated NLTE spectral analyses of these stars, which draw on the extensive stellar atmosphere modelling experience of the group, we actively develop and apply a variety of photometric monitoring and time-resolved spectroscopic techniques to address time-dependent phenomena. With the new instrumentation developments described below, we plan to continue the study of variable white dwarfs (GW Vir, DB and ZZ Ceti variables) and in particular sdB EC 14026 and PG 1617 pulsators, which already constitute a main focus, partly within the Whole Earth Telescope (WET/DARC, http://www.physics.udel.edu/~jlp/darc/) collaboration, on a new level. Additional interest is directed towards strange mode instabilities in Wolf Rayet stars.

  2. Waist circumference, body mass index, and employment outcomes.

    PubMed

    Kinge, Jonas Minet

    2017-07-01

    Body mass index (BMI) is an imperfect measure of body fat. Recent studies provide evidence in favor of replacing BMI with waist circumference (WC). Hence, I investigated whether or not the association between fat mass and employment status varies by anthropometric measure. I used 15 rounds of the Health Survey for England (1998-2013), which has measures of employment status in addition to measured height, weight, and WC. WC and BMI were entered as continuous variables and obesity as binary variables defined using both WC and BMI. I used multivariate models controlling for a set of covariates. The association of WC with employment was of greater magnitude than the association between BMI and employment. I reran the analysis using conventional instrumental variables methods. The IV models showed significant impacts of obesity on employment; however, they were not more pronounced when WC was used to measure obesity, compared to BMI. This means that, in the IV models, the impact of fat mass on employment did not depend on the measure of fat mass.
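    The instrumental-variables logic behind the final step can be sketched with synthetic data. This is a deliberately simplified linear setup: the instrument, coefficients, and the linearized employment outcome are all our assumptions, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                        # instrument (hypothetical, assumed valid)
u = rng.normal(size=n)                        # unobserved confounder (e.g. health)
wc = 0.6 * z + 0.5 * u + rng.normal(size=n)   # waist circumference (standardized)
emp = -0.3 * wc + 0.5 * u + rng.normal(size=n)  # employment propensity (linearized)

# naive OLS slope: biased because u affects both wc and employment
b_ols = np.cov(wc, emp)[0, 1] / np.var(wc)

# IV (Wald) estimator: cov(z, emp) / cov(z, wc), the one-instrument 2SLS
b_iv = np.cov(z, emp)[0, 1] / np.cov(z, wc)[0, 1]
```

    With a positive confounder loading on both variables, OLS is biased toward zero here while the IV estimate recovers the true -0.3 effect.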

  3. Analysis of isothermal and cooling rate dependent immersion freezing by a unifying stochastic ice nucleation model

    NASA Astrophysics Data System (ADS)

    Alpert, P. A.; Knopf, D. A.

    2015-05-01

    Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature (T) and relative humidity (RH) at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling rate dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nuclei (IN) all have the same IN surface area (ISA), however the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses physically observable parameters including the total number of droplets (Ntot) and the heterogeneous ice nucleation rate coefficient, Jhet(T). This model is applied to address if (i) a time and ISA dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture with subsequent consequences for analysis and interpretation of immersion freezing. The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods such as: droplets on a cold-stage exposed to air or surrounded by an oil matrix, wind and acoustically levitated droplets, droplets in a continuous flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. 
    Observed time dependent isothermal frozen fractions exhibiting non-exponential behavior with time can be readily explained by this model considering varying ISA. An apparent cooling rate dependence of Jhet is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling rate dependence of ice nucleation kinetics vanishes as expected from classical nucleation theory. The model simulations allow for a quantitative experimental uncertainty analysis for parameters Ntot, T, RH, and the ISA variability. In an idealized cloud parcel model applying variability in ISAs for each droplet, the model predicts enhanced immersion freezing temperatures and greater ice crystal production compared to a case when ISAs are uniform in each droplet. The implications of our results for experimental analysis and interpretation of the immersion freezing process are discussed.
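    The central point, that identical ISA in every droplet gives a single-exponential isothermal frozen fraction while variable ISA gives non-exponential behavior, follows directly from the stochastic model and can be sketched numerically (parameter values are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
J = 1e3            # heterogeneous nucleation rate coefficient Jhet (cm^-2 s^-1), assumed
A0 = 1e-3          # mean ice-nucleus surface area (ISA) per droplet (cm^2), assumed
t = np.linspace(0.0, 5.0, 200)   # time (s)

# (a) identical ISA in every droplet: exponential decay of the unfrozen fraction
f_uniform = 1.0 - np.exp(-J * A0 * t)

# (b) variable ISA (lognormal across droplets with the same mean A0): the
# ensemble frozen fraction is a mixture of exponentials, i.e. non-exponential
sigma = 1.5
A = rng.lognormal(mean=np.log(A0) - 0.5 * sigma**2, sigma=sigma, size=10000)
f_variable = 1.0 - np.exp(-J * A[:, None] * t).mean(axis=0)
```

    By Jensen's inequality the variable-ISA ensemble always lags the uniform-ISA curve at late times for the same mean ISA, which is exactly the flattening seen in non-exponential isothermal data.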

  4. Analysis of isothermal and cooling-rate-dependent immersion freezing by a unifying stochastic ice nucleation model

    NASA Astrophysics Data System (ADS)

    Alpert, Peter A.; Knopf, Daniel A.

    2016-02-01

    Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature, T, and relative humidity, RH, at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling-rate-dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nucleating particles (INPs) all have the same INP surface area (ISA); however, the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. Descriptions of ice active sites and variability of contact angles have been successfully formulated to describe ice nucleation experimental data in previous research; however, we consider the ability of a stochastic freezing model founded on classical nucleation theory to reproduce previous results and to explain experimental uncertainties and data scatter. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses parameters including the total number of droplets, Ntot, and the heterogeneous ice nucleation rate coefficient, Jhet(T). This model is applied to address if (i) a time and ISA-dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture with subsequent consequences for analysis and interpretation of immersion freezing. 
The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods such as: droplets on a cold-stage exposed to air or surrounded by an oil matrix, wind and acoustically levitated droplets, droplets in a continuous-flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time-dependent isothermal frozen fractions exhibiting non-exponential behavior can be readily explained by this model considering varying ISA. An apparent cooling-rate dependence of Jhet is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling-rate dependence of ice nucleation kinetics vanishes as expected from classical nucleation theory. The model simulations allow for a quantitative experimental uncertainty analysis for parameters Ntot, T, RH, and the ISA variability. The implications of our results for experimental analysis and interpretation of the immersion freezing process are discussed.

  5. Comparison of environmental forcings affecting suspended sediments variability in two macrotidal, highly-turbid estuaries

    NASA Astrophysics Data System (ADS)

    Jalón-Rojas, Isabel; Schmidt, Sabine; Sottolichio, Aldo

    2017-11-01

    The relative contribution of environmental forcing frequencies on turbidity variability is, for the first time, quantified at seasonal and multiannual time scales in tidal estuarine systems. With a decade of high-frequency, multi-site turbidity monitoring, the two nearby, macrotidal and highly-turbid Gironde and Loire estuaries (west France) are excellent natural laboratories for this purpose. Singular Spectrum Analyses, combined with Lomb-Scargle periodograms and Wavelet Transforms, were applied to the continuous multiannual turbidity time series. Frequencies of the main environmental factors affecting turbidity were identified: hydrological regime (high versus low river discharges), river flow variability, tidal range, tidal cycles, and turbulence. Their relative influences show similar patterns in both estuaries and depend on the estuarine region (lower or upper estuary) and the time scale (multiannual or seasonal). On the multiannual time scale, the relative contribution of tidal frequencies (tidal cycles and range) to turbidity variability decreases up-estuary from 68% to 47%, while the influence of river flow frequencies increases from 3% to 42%. On the seasonal time scale, the relative influence of forcing frequencies remains almost constant in the lower estuary, dominated by tidal frequencies (60% and 30% for tidal cycles and tidal range, respectively); in the upper reaches, it is variable depending on hydrological regime, even if tidal frequencies are responsible for up to 50% of turbidity variance. These quantifications show the potential of combined spectral analyses to compare the behavior of suspended sediment in tidal estuaries throughout the world and to evaluate long-term changes in environmental forcings, especially in a context of global change.
The relevance of this approach to compare nearby and overseas systems and to support management strategies is discussed (e.g., selection of effective operation frequencies/regions, prediction of the most affected regions by the implementation of operational management plans).
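    The Lomb-Scargle machinery named above is well suited to gappy, irregularly sampled monitoring records. The sketch below recovers a semidiurnal (M2-like, ~12.42 h) component from a synthetic turbidity series using SciPy's periodogram (all values are illustrative assumptions, not the Gironde/Loire data):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
# irregularly sampled synthetic "turbidity" series with a semidiurnal component
t = np.sort(rng.uniform(0, 60 * 24, 3000))         # time in hours over ~60 days
m2_period = 12.42                                  # h, principal lunar semidiurnal
y = 50 + 20 * np.sin(2 * np.pi * t / m2_period) + rng.normal(0, 5, t.size)

periods = np.linspace(6, 30, 500)                  # candidate periods, hours
omega = 2 * np.pi / periods                        # angular frequencies (rad/h)
power = lombscargle(t, y - y.mean(), omega)
best_period = periods[np.argmax(power)]
```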

  6. Neighbouring populations, opposite dynamics: influence of body size and environmental variation on the demography of stream-resident brown trout (Salmo trutta).

    PubMed

    Fernández-Chacón, Albert; Genovart, Meritxell; Álvarez, David; Cano, José M; Ojanguren, Alfredo F; Rodriguez-Muñoz, Rolando; Nicieza, Alfredo G

    2015-06-01

    In organisms such as fish, where body size is considered an important state variable for the study of their population dynamics, size-specific growth and survival rates can be influenced by local variation in both biotic and abiotic factors, but few studies have evaluated the complex relationships between environmental variability and size-dependent processes. We analysed a 6-year capture-recapture dataset of brown trout (Salmo trutta) collected at 3 neighbouring but heterogeneous mountain streams in northern Spain with the aim of investigating the factors shaping the dynamics of local populations. The influence of body size and water temperature on survival and individual growth was assessed under a multi-state modelling framework, an extension of classical capture-recapture models that considers the state (i.e. body size) of the individual in each capture occasion and allows us to obtain state-specific demographic rates and link them to continuous environmental variables. Individual survival and growth patterns varied over space and time, and evidence of size-dependent survival was found in all but the smallest stream. At this stream, the probability of reaching larger sizes was lower compared to the other wider and deeper streams. Water temperature variables performed better in the modelling of the highest-altitude population, explaining over 99% of the variability in maturation transitions and survival of large fish. The relationships between body size, temperature and fitness components found in this study highlight the utility of multi-state approaches to investigate small-scale demographic processes in heterogeneous environments, and to provide reliable ecological knowledge for management purposes.

  7. Seasonal emanation of radon at Ghuttu, northwest Himalaya: Differentiation of atmospheric temperature and pressure influences.

    PubMed

    Kamra, Leena

    2015-11-01

    Continuous monitoring of radon along with meteorological parameters has been carried out in a seismically active area of the Garhwal region, northwest Himalaya, within the framework of earthquake precursory research. Radon measurements are carried out using a gamma-ray detector installed in the air column at a depth of 10 m in a 68 m deep borehole. The analysis of the long time series for 2006-2012 shows strong seasonal variability masked by diurnal and multi-day variations. Isolating the seasonal cycle by suppressing short-term fluctuations with a 31-day running average reveals a strong seasonal variation with unambiguous dependence on atmospheric temperature and pressure. The seasonal characteristics of radon concentrations are positively correlated with atmospheric temperature (R=0.95) and negatively correlated with atmospheric pressure (R=-0.82). The temperature and pressure variations in their annual progressions are negatively correlated. The calculation of partial correlation coefficients permits us to conclude that atmospheric temperature plays the dominant role in controlling the variability of radon in the borehole: 71% of the variability in radon arises from the variation in atmospheric temperature and about 6% is contributed by atmospheric pressure. The influence of pressure variations in an annual cycle appears to be a pseudo-effect, resulting from the negative correlation between temperature and pressure variations. Incorporation of these results explains the varying and even contradictory claims regarding the influence of pressure variability on radon changes in the published literature. Temperature dependence, facilitated by the temperature gradient in the borehole, controls the transport of radon from the deep interior to the surface. Copyright © 2015 Elsevier Ltd. All rights reserved.
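    The partial-correlation reasoning above (temperature's association with radon after removing the shared pressure signal) can be sketched in a few lines of Python. The series below are synthetic stand-ins with a similar qualitative structure, not the Ghuttu data:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y with the linear effect of z removed from both."""
    # Residualize x and y on z (plus intercept) via least squares, then
    # correlate the residuals.
    z1 = np.column_stack([z, np.ones_like(z)])
    rx = x - z1 @ np.linalg.lstsq(z1, x, rcond=None)[0]
    ry = y - z1 @ np.linalg.lstsq(z1, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 365)
temp = np.sin(t) + 0.1 * rng.standard_normal(365)    # annual temperature cycle
pres = -0.8 * temp + 0.3 * rng.standard_normal(365)  # anti-correlated pressure
radon = 2.0 * temp + 0.5 * rng.standard_normal(365)  # radon driven by temperature

# The raw radon-pressure correlation is strongly negative, but it is largely
# a pseudo-effect of the temperature-pressure anti-correlation:
print(np.corrcoef(radon, pres)[0, 1])
print(partial_corr(radon, pres, temp))  # near zero once temperature is controlled
```

    This mirrors the abstract's conclusion: once the common temperature driver is partialled out, little direct pressure influence remains.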

  8. International migration beyond gravity: A statistical model for use in population projections

    PubMed Central

    Cohen, Joel E.; Roig, Marta; Reuman, Daniel C.; GoGwilt, Cai

    2008-01-01

    International migration will play an increasing role in the demographic future of most nations if fertility continues to decline globally. We developed an algorithm to project future numbers of international migrants from any country or region to any other. The proposed generalized linear model (GLM) used geographic and demographic independent variables only (the population and area of origins and destinations of migrants, the distance between origin and destination, the calendar year, and indicator variables to quantify nonrandom characteristics of individual countries). The dependent variable, yearly numbers of migrants, was quantified by 43,653 reports from 11 countries of migration from 228 origins to 195 destinations during 1960–2004. The final GLM based on all data was selected by the Bayesian information criterion. The number of migrants per year from origin to destination was proportional to (population of origin)^0.86 × (area of origin)^−0.21 × (population of destination)^0.36 × (distance)^−0.97, multiplied by functions of year and country-specific indicator variables. The number of emigrants from an origin depended on both its population and its population density. For a variable initial year and a fixed terminal year 2004, the parameter estimates appeared stable. Multiple R^2, the fraction of variation in log numbers of migrants accounted for by the starting model, improved gradually with recentness of the data: R^2 = 0.57 for data from 1960 to 2004, R^2 = 0.59 for 1985–2004, R^2 = 0.61 for 1995–2004, and R^2 = 0.64 for 2000–2004. The migration estimates generated by the model may be embedded in deterministic or stochastic population projections. PMID:18824693
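    The fitted power-law form reported above can be evaluated directly. This sketch plugs in the published exponents; the year and country-specific multipliers are omitted, so only relative flows (ratios) are meaningful, and the inputs are hypothetical:

```python
def migrants_scale(pop_o, area_o, pop_d, dist):
    """Relative migrant flow per the reported GLM exponents (scale factor omitted)."""
    return pop_o**0.86 * area_o**-0.21 * pop_d**0.36 * dist**-0.97

# Doubling the origin population raises the flow by 2**0.86, about 1.82x:
base = migrants_scale(1e7, 1e5, 5e7, 2000.0)
doubled = migrants_scale(2e7, 1e5, 5e7, 2000.0)
print(doubled / base)
```

    Because the model is multiplicative, such elasticities are independent of the omitted scale factor, which is why ratios are the safe quantity to compute here.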

  9. Cognitive plasticity as a moderator of functional dependency in elderly patients hospitalized for bone fractures.

    PubMed

    Calero-García, M J; Calero, M D; Navarro, E; Ortega, A R

    2015-01-01

    Bone fractures in older adults involve hospitalization and surgical intervention, aspects that have been related to loss of autonomy and independence. Several variables have been studied as moderators of how these patients recover. However, the implications of cognitive plasticity for functional recovery have not been studied to date. The present study analyzes the relationship between cognitive plasticity--defined as the capacity for learning or improved performance under conditions of training or performance optimization--and functional recovery in older adults hospitalized following a bone fracture. The study comprised 165 older adults who underwent surgery for bone fractures at a hospital in southern Spain. Participants were evaluated at different time points thereafter, with instruments that measure activities of daily life (ADL), namely the Barthel Index (BI) and the Lawton Index, as well as with a learning potential (cognitive plasticity) assessment test (Auditory Verbal Learning Test of Learning Potential, AVLT-LP). Results show that most of the participants have improved their level of independence 3 months after the intervention. However, some patients continue to have medium to high levels of dependency and this dependency is related to cognitive plasticity. The results of this study reveal the importance of the cognitive plasticity variable for evaluating older adults hospitalized for a fracture. They indicate a possible benefit to be obtained by implementing programs that reduce the degree of long-term dependency or decrease the likelihood of it arising.

  10. Smoking and Illicit Drug Use Associations With Early Versus Delayed Reproduction: Findings in a Young Adult Cohort of Australian Twins*

    PubMed Central

    Waldron, Mary; Heath, Andrew C.; Lynskey, Michael T.; Nelson, Elliot C.; Bucholz, Kathleen K.; Madden, Pamela A.F.; Martin, Nicholas G.

    2009-01-01

    Objective: This article examines relationships between reproductive onset and lifetime history of smoking, regular smoking, and nicotine dependence, and cannabis and other illicit drug use. Method: Data were drawn from a young adult cohort of 3,386 female and 2,751 male Australian twins born between 1964 and 1971. Survival analyses were conducted using Cox proportional hazards regression models predicting age at first childbirth from history of substance use or disorder separately by substance class. Other substance use or disorder, including alcohol dependence, as well as sociodemographic characteristics, history of psychopathology, and family and childhood risks, were included as control variables in adjusted models. Results: Regular smoking and nicotine dependence were associated with earlier reproduction, with pronounced effects for women. For women, use of cannabis was associated with early reproduction before age 20, and with delayed reproduction among women who have not reproduced by age 20 or 25. Adjustment for control variables only partially explained these associations. Conclusions: Consistent with research linking adolescent use with sexual risk taking predictive of early childbearing, regular smokers and nicotine-dependent individuals show earlier reproductive onset. In contrast, delays in childbearing associated with use of cannabis are consistent with impairments in reproductive ability and/or opportunities for reproduction. Continued research on risks both upstream and downstream of substance-use initiation and onset of substance-use disorder is needed for causal mechanisms to be fully understood. PMID:19737504

  11. Continuation Power Flow with Variable-Step Variable-Order Nonlinear Predictor

    NASA Astrophysics Data System (ADS)

    Kojima, Takayuki; Mori, Hiroyuki

    This paper proposes a new continuation power flow calculation method for drawing a P-V curve in power systems. The continuation power flow calculation successively evaluates power flow solutions through changing a specified value of the power flow calculation. In recent years, power system operators have become quite concerned with voltage instability due to the appearance of deregulated and competitive power markets. The continuation power flow calculation plays an important role in understanding the load characteristics in the sense of static voltage instability. In this paper, a new continuation power flow with a variable-step variable-order (VSVO) nonlinear predictor is proposed. The proposed method evaluates optimal predicted points conforming to the features of P-V curves. The proposed method is successfully applied to the IEEE 118-bus and IEEE 300-bus systems.

  12. An Algorithm for the Mixed Transportation Network Design Problem

    PubMed Central

    Liu, Xinyu; Chen, Qun

    2016-01-01

    This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical program with equilibrium constraints (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; the problem is thus transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately. PMID:27626803

  13. Continuous direct compression as manufacturing platform for sustained release tablets.

    PubMed

    Van Snick, B; Holman, J; Cunningham, C; Kumar, A; Vercruysse, J; De Beer, T; Remon, J P; Vervaet, C

    2017-03-15

    This study presents a framework for process and product development on a continuous direct compression manufacturing platform. A challenging sustained release formulation with a high content of a poorly flowing, low-density drug was selected. Two HPMC grades were evaluated as matrix former: standard Methocel CR and directly compressible Methocel DC2. The feeding behavior of each formulation component was investigated by deriving feed factor profiles. The maximum feed factor was used to estimate the drive command and depended strongly upon the density of the material. Furthermore, the shape of the feed factor profile allowed definition of a customized refill regime for each material. Inline NIR spectroscopy was used to estimate the residence time distribution (RTD) in the mixer and monitor blend uniformity. Tablet content and weight variability were determined as additional measures of mixing performance. For Methocel CR, the best axial mixing (i.e. feeder fluctuation dampening) was achieved when an impeller with a high number of radial mixing blades operated at low speed. However, the variability in tablet weight and content uniformity deteriorated under this condition. One can therefore conclude that balancing axial mixing with tablet quality is critical for Methocel CR. Reformulating with the directly compressible Methocel DC2 as matrix former, however, vastly improved tablet quality. Furthermore, both process and product were significantly more robust to changes in process and design variables. This observation underpins the importance of flowability during continuous blending and die-filling. At the compaction stage, blends with Methocel CR showed better tabletability, driven by a higher compressibility, as the smaller CR particles have a higher bonding area. However, tablets of similar strength were achieved using Methocel DC2 by targeting equal porosity. Compaction pressure impacted tablet properties and dissolution. Hence, controlling thickness during continuous manufacturing of sustained release tablets was crucial to ensure reproducible dissolution. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Times of Maximum Light: The Passband Dependence

    NASA Astrophysics Data System (ADS)

    Joner, M. D.; Laney, C. D.

    2004-05-01

    We present UBVRIJHK light curves for the dwarf Cepheid variable star AD Canis Minoris. These data are part of a larger project to determine absolute magnitudes for this class of stars. Our figures clearly show changes in the times of maximum light, the amplitude, and the light curve morphology that are dependent on the passband used in the observation. Note that when data from a variety of passbands are used in studies that require a period analysis or that search for small changes in the pulsational period, it is easy to introduce significant systematic errors into the results. We thank the Brigham Young University Department of Physics and Astronomy for continued support of our research. We also acknowledge the South African Astronomical Observatory for time granted to this project.

  15. [Correlation coefficient-based classification method of hydrological dependence variability: With auto-regression model as example].

    PubMed

    Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

    2018-04-01

    Hydrological processes are temporally dependent. Hydrological time series that include dependence components do not meet the data consistency assumption underlying hydrological computation. Both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, this method divided the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficient at each order of the series, we found that the correlation coefficient was mainly determined by the magnitude of the auto-correlation coefficients from the first order to the p-th order, which clarified the theoretical basis of the method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte-Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficient. The method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in hydrological processes.
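    The core quantity above, the correlation between a series and its auto-regressive dependence component, is easy to sketch for the AR(1) case. The classification thresholds below are illustrative placeholders, not the ones derived in the paper:

```python
import numpy as np

def dependence_correlation(series, phi):
    """Correlation between an AR(1) series and its dependence component phi*x[t-1].

    Since the dependence component is linear in the lagged series, for phi > 0
    this equals the lag-1 sample autocorrelation.
    """
    x = np.asarray(series, dtype=float)
    dep = phi * x[:-1]          # dependence component of x[1:]
    return np.corrcoef(x[1:], dep)[0, 1]

def classify(r, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Map |r| to a variability grade; these cut points are illustrative only."""
    labels = ["no", "weak", "mid", "strong", "drastic"]
    return labels[sum(abs(r) >= t for t in thresholds)]

# Simulate an AR(1) series x[t] = phi*x[t-1] + noise and grade its dependence.
rng = np.random.default_rng(1)
phi = 0.7
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = phi * x[t - 1] + rng.standard_normal()

r = dependence_correlation(x, phi)
print(r, classify(r))
```

    For a true AR(1) process the population value of this correlation is simply phi, which is the sense in which the abstract's correlation coefficient is "mainly determined by" the auto-correlation coefficients.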

  16. Are your covariates under control? How normalization can re-introduce covariate effects.

    PubMed

    Pain, Oliver; Dudbridge, Frank; Ronald, Angelica

    2018-04-30

    Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
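    The recommended ordering (INT first, then regress out covariates) can be sketched as follows. The `rank_int` helper below uses the Blom offset, one common choice of rank-based INT, and the data are synthetic:

```python
import numpy as np
from statistics import NormalDist

def rank_int(y, c=3.0 / 8):
    """Rank-based inverse normal transform (Blom offset by default).

    Ties are broken arbitrarily by argsort; a production version would
    average tied ranks.
    """
    ranks = np.argsort(np.argsort(y)) + 1
    p = (ranks - c) / (len(y) - 2 * c + 1)
    inv = NormalDist().inv_cdf
    return np.array([inv(pi) for pi in p])

rng = np.random.default_rng(2)
covar = rng.standard_normal(5000)
y = np.exp(covar + rng.standard_normal(5000))   # skewed, covariate-dependent outcome

# Recommended order: transform the dependent variable first, then regress
# out the covariate from the transformed values.
z = rank_int(y)
X = np.column_stack([covar, np.ones_like(covar)])
resid = z - X @ np.linalg.lstsq(X, z, rcond=None)[0]

print(np.corrcoef(resid, covar)[0, 1])  # residuals uncorrelated with the covariate
```

    The residuals of this ordering are, by construction of least squares, exactly linearly uncorrelated with the covariate; it is applying INT *after* the regression that re-introduces the correlation the study warns about.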

  17. How do doctors choose where they want to work? - motives for choice of current workplace among physicians registered in Finland 1977-2006.

    PubMed

    Heikkilä, Teppo Juhani; Hyppölä, Harri; Aine, Tiina; Halila, Hannu; Vänskä, Jukka; Kujala, Santero; Virjo, Irma; Mattila, Kari

    2014-02-01

    Though there are a number of studies investigating the career choices of physicians, there are only a few concerning doctors' choices of workplace. A random sample (N=7758) of physicians licensed in Finland during the years 1977-2006 was surveyed. Respondents were asked: "To what extent did the following motives affect your choice of your current workplace?" Respondents were grouped based on several background variables. The groups were used as independent variables in univariate analysis of covariance (ANCOVA). The factors Good workplace, Career and professional development, Non-work related issues, Personal contacts and Salary were formed and used as dependent variables. There were significant differences between groups of physicians, especially in terms of gender, working sector and specialties. The association of Good workplace, Career and professional development, and Non-work related issues with the choice of a workplace significantly decreased with age. Female physicians were more concerned with Career and professional development and Non-work related issues. Since more females are entering the medical profession and there is an ongoing change of generations, health care organizations and policy makers need to develop a new philosophy in order to attract physicians. This will need to include more human-centric management and leadership, better possibilities for continuous professional development, and more personalized working arrangements depending on physicians' personal motives. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Curve Number Application in Continuous Runoff Models: An Exercise in Futility?

    NASA Astrophysics Data System (ADS)

    Lamont, S. J.; Eli, R. N.

    2006-12-01

    The suitability of applying the NRCS (Natural Resource Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrologic Simulation Program-FORTRAN (HSPF) was employed using a simple theoretical watershed in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio value on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and the limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to those HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters being adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real time series rainfall data, was used to provide a comparison to those CN values determined using the synthetic storm event time series. It was determined that the calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. It was concluded that the use of the Curve Number as a surrogate for the selected subset of HSPF parameters could not be justified. These results suggest that use of the Curve Number in other complex continuous time series hydrologic models may not be appropriate, given the limitations inherent in the definition of the NRCS CN method.
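    The back-calculation step described above rests on the standard NRCS runoff equation, Q = (P − Ia)² / (P − Ia + S) with S = 1000/CN − 10 and Ia = 0.2S (depths in inches). A minimal sketch of recovering a CN from a (P, Q) pair, using bisection rather than whatever scheme the study used:

```python
def runoff(P, CN, ia_ratio=0.2):
    """NRCS direct runoff Q (inches) for storm depth P (inches) and curve number CN."""
    S = 1000.0 / CN - 10.0    # potential maximum retention
    Ia = ia_ratio * S         # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

def back_calc_cn(P, Q, ia_ratio=0.2, lo=1.0, hi=100.0):
    """Bisect for the CN that reproduces runoff Q from storm depth P.

    Valid because runoff is monotonically increasing in CN for fixed P.
    """
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if runoff(P, mid, ia_ratio) < Q:
            lo = mid              # need a higher CN for more runoff
        else:
            hi = mid
    return 0.5 * (lo + hi)

cn = back_calc_cn(4.0, runoff(4.0, 75.0))
print(cn)  # recovers CN = 75 to within bisection tolerance
```

    The `ia_ratio` parameter corresponds to the initial abstraction ratio the study varied; the default 0.2 is the traditional NRCS value.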

  19. Mathematical modelling of methanogenic reactor start-up: Importance of volatile fatty acids degrading population.

    PubMed

    Jabłoński, Sławomir J; Łukaszewicz, Marcin

    2014-12-01

    Development of a balanced community of microorganisms is one of the prerequisites for stable anaerobic digestion. Application of mathematical models might be helpful in developing reliable procedures for the process start-up period. Yet, the accuracy of a forecast depends on the quality of the inputs and parameters. In this study, specific anaerobic activity (SAA) tests were applied in order to estimate the microbial community structure. The obtained data were applied as input conditions for a mathematical model of anaerobic digestion. The initial values of the variables describing the amounts of acetate- and propionate-utilizing microorganisms could be calculated on the basis of the SAA results. Modelling based on these optimized variables could successfully reproduce the behavior of a real system during continuous fermentation. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. The Causal Effects of Father Absence

    PubMed Central

    McLanahan, Sara; Tach, Laura; Schneider, Daniel

    2014-01-01

    The literature on father absence is frequently criticized for its use of cross-sectional data and methods that fail to take account of possible omitted variable bias and reverse causality. We review studies that have responded to this critique by employing a variety of innovative research designs to identify the causal effect of father absence, including studies using lagged dependent variable models, growth curve models, individual fixed effects models, sibling fixed effects models, natural experiments, and propensity score matching models. Our assessment is that studies using more rigorous designs continue to find negative effects of father absence on offspring well-being, although the magnitude of these effects is smaller than what is found using traditional cross-sectional designs. The evidence is strongest and most consistent for outcomes such as high school graduation, children’s social-emotional adjustment, and adult mental health. PMID:24489431

  1. Optimization of a GO2/GH2 Swirl Coaxial Injector Element

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar

    1999-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) swirl coaxial injector element. The element is optimized in terms of design variables such as fuel pressure drop, ΔP_f, oxidizer pressure drop, ΔP_o, combustor length, L_comb, and full cone swirl angle, θ, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q_w, injector heat flux, Q_inj, relative combustor weight, W_rel, and relative injector cost, C_rel, are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 180 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Two examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust-to-weight ratio.
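    Desirability functions of the kind mentioned above are commonly combined via a weighted geometric mean (the Derringer–Suich approach). The sketch below assumes that construction; the response values and bounds are hypothetical, not taken from the injector study:

```python
import math

def desirability_larger_is_better(y, lo, hi, weight=1.0):
    """Derringer–Suich 'larger is better' desirability on [0, 1]."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** weight

def composite(ds, weights=None):
    """Weighted geometric mean of individual desirabilities.

    Any zero desirability zeroes the composite, so a design that violates
    one constraint cannot be rescued by excelling elsewhere.
    """
    if weights is None:
        weights = [1.0] * len(ds)
    if any(d == 0.0 for d in ds):
        return 0.0
    total = sum(weights)
    return math.exp(sum(w * math.log(d) for w, d in zip(weights, ds)) / total)

# Hypothetical responses: high energy-release efficiency is good; low wall heat
# flux is good (1 - larger_is_better equals the 'smaller is better' form).
d_ere = desirability_larger_is_better(0.97, 0.90, 1.00)
d_qw = 1.0 - desirability_larger_is_better(45.0, 20.0, 60.0)
print(composite([d_ere, d_qw]))
```

    Passing unequal `weights` reproduces the abstract's second example, where certain responses are emphasized relative to others in the trade study.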

  2. On the dynamic rounding-off in analogue and RF optimal circuit sizing

    NASA Astrophysics Data System (ADS)

    Kotti, Mouna; Fakhfakh, Mourad; Fino, Maria Helena

    2014-04-01

    Frequently used approaches to solve discrete multivariable optimisation problems consist of computing solutions using a continuous optimisation technique. Then, using heuristics, the variables are rounded off to their nearest available discrete values to obtain a discrete solution. Indeed, in many engineering problems, and particularly in analogue circuit design, component values, such as the geometric dimensions of the transistors, the number of fingers in an integrated capacitor or the number of turns in an integrated inductor, cannot be chosen arbitrarily since they have to obey some technology sizing constraints. However, rounding off the variable values a posteriori can lead to infeasible solutions (solutions that are located too close to the feasible-solution frontier) or to degradation of the obtained results (expulsion from the neighbourhood of a 'sharp' optimum), depending on how the added perturbation affects the solution. Discrete optimisation techniques, such as the dynamic rounding-off (DRO) technique, are therefore needed to overcome the previously mentioned situation. In this paper, we deal with an improvement of the DRO technique. We propose a particle swarm optimisation (PSO)-based DRO technique, and we show, via some analogue and RF examples, the necessity of implementing such a routine in continuous optimisation algorithms.

  3. Can we "predict" long-term outcome for ambulatory transcutaneous electrical nerve stimulation in patients with chronic pain?

    PubMed

    Köke, Albère J; Smeets, Rob J E M; Perez, Roberto S; Kessels, Alphons; Winkens, Bjorn; van Kleef, Maarten; Patijn, Jacob

    2015-03-01

    Evidence for effectiveness of transcutaneous electrical nerve stimulation (TENS) is still inconclusive. As heterogeneity of chronic pain patients might be an important factor in this lack of efficacy, identifying factors for a successful long-term outcome is of great importance. A prospective study was performed to identify variables with potential predictive value for 2 outcome measures in the long term (6 months): (1) continuation of TENS, and (2) a minimally clinically important pain reduction of ≥ 33%. At baseline, a set of risk factors including pain-related variables, psychological factors, and disability was measured. In a multiple logistic regression analysis, higher patient expectations, neuropathic pain, and no severe pain (< 80 mm visual analogue scale [VAS]) were independently related to long-term continuation of TENS. For the outcome "minimally clinically important pain reduction," the multiple logistic regression analysis indicated that no multisite pain (> 2 pain locations) and intermittent pain were positively and independently associated with a minimally clinically important pain reduction of ≥ 33%. The results showed that the factors associated with a successful outcome in the long term depend on the definition of successful outcome. © 2014 World Institute of Pain.

  4. Analysis of continuous-time switching networks

    NASA Astrophysics Data System (ADS)

    Edwards, R.

    2000-11-01

    Models of a number of biological systems, including gene regulation and neural networks, can be formulated as switching networks, in which the interactions between the variables depend strongly on thresholds. An idealized class of such networks in which the switching takes the form of Heaviside step functions but variables still change continuously in time has been proposed as a useful simplification to gain analytic insight. These networks, called here Glass networks after their originator, are simple enough mathematically to allow significant analysis without restricting the range of dynamics found in analogous smooth systems. A number of results have been obtained before, particularly regarding existence and stability of periodic orbits in such networks, but important cases were not considered. Here we present a coherent method of analysis that summarizes previous work and fills in some of the gaps as well as including some new results. Furthermore, we apply this analysis to a number of examples, including surprising long and complex limit cycles involving sequences of hundreds of threshold transitions. Finally, we show how the above methods can be extended to investigate aperiodic behaviour in specific networks, though a complete analysis will have to await new results in matrix theory and symbolic dynamics.

  5. Identifying demographic variables related to failed dental appointments in a university hospital-based residency program.

    PubMed

    Mathu-Muju, Kavita R; Li, Hsin-Fang; Hicks, James; Nash, David A; Kaplan, Alan; Bush, Heather M

    2014-01-01

    The objective of this study was to identify characteristics of pediatric patients who failed to keep the majority of their scheduled dental appointments in a pediatric dental clinic staffed by pediatric dental residents and faculty members. The electronic records of all patients appointed over a continuous 54-month period were analyzed. Appointment history and demographic variables were collected. The rate of failed appointments was calculated by dividing the number of failed appointments by the total number of appointments scheduled for the patient. There were 7,591 patients in the analyzable dataset, scheduled for a total of 48,932 appointments. Factors associated with an increased rate of failed appointments included self-paying for dental care, having a resident versus a faculty member as the provider, rural residence, and adolescent-aged patients. Multivariable regression models indicated self-paying patients had higher odds and rates of failed appointments than patients with Medicaid and private insurance. Access to care for children may be improved by increasing the availability of private and public insurance. The establishment of a dental home and its relationship to a child receiving continuous care in an institutional setting depends upon establishing a relationship with a specific dentist.

  6. Comparison of non-invasive MRI measurements of cerebral blood flow in a large multisite cohort.

    PubMed

    Dolui, Sudipto; Wang, Ze; Wang, Danny Jj; Mattay, Raghav; Finkel, Mack; Elliott, Mark; Desiderio, Lisa; Inglis, Ben; Mueller, Bryon; Stafford, Randall B; Launer, Lenore J; Jacobs, David R; Bryan, R Nick; Detre, John A

    2016-07-01

    Arterial spin labeling and phase contrast magnetic resonance imaging provide independent non-invasive methods for measuring cerebral blood flow. We compared global cerebral blood flow measurements obtained using pseudo-continuous arterial spin labeling and phase contrast in 436 middle-aged subjects acquired at two sites in the NHLBI CARDIA multisite study. Cerebral blood flow measured by phase contrast (CBFPC: 55.76 ± 12.05 ml/100 g/min) was systematically higher (p < 0.001) and more variable than cerebral blood flow measured by pseudo-continuous arterial spin labeling (CBFPCASL: 47.70 ± 9.75). The correlation between global cerebral blood flow values obtained from the two modalities was 0.59 (p < 0.001), explaining less than half of the observed variance in cerebral blood flow estimates. Well-established correlations of global cerebral blood flow with age and sex were similarly observed in both CBFPCASL and CBFPC. CBFPC also demonstrated statistically significant site differences, whereas no such differences were observed in CBFPCASL. No consistent velocity-dependent effects on pseudo-continuous arterial spin labeling were observed, suggesting that pseudo-continuous labeling efficiency does not vary substantially across typical adult carotid and vertebral velocities, as has previously been suggested. Although CBFPCASL and CBFPC values show substantial similarity across the entire cohort, these data do not support calibration of CBFPCASL using CBFPC in individual subjects. The wide-ranging cerebral blood flow values obtained by both methods suggest that cerebral blood flow values are highly variable in the general population. © The Author(s) 2016.

  7. A unified dislocation density-dependent physical-based constitutive model for cold metal forming

    NASA Astrophysics Data System (ADS)

    Schacht, K.; Motaman, A. H.; Prahl, U.; Bleck, W.

    2017-10-01

    Dislocation-density-dependent, physically based constitutive models of metal plasticity, while computationally efficient and history-dependent, can accurately account for varying process parameters such as strain, strain rate and temperature; different loading modes such as continuous deformation, creep and relaxation; microscopic metallurgical processes; and varying chemical composition within an alloy family. Since these models are founded on the essential phenomena dominating the deformation, they have a larger range of usability and validity. They are also suitable for manufacturing-chain simulations, since they can efficiently compute the cumulative effect of the various manufacturing processes by following the material state through the entire manufacturing chain, including interpass periods, and give a realistic prediction of the material behavior and final product properties. In the physical-based constitutive model of cold metal plasticity introduced in this study, physical processes influencing cold and warm plastic deformation in polycrystalline metals are described using physical/metallurgical internal variables such as dislocation density and effective grain size. The evolution of these internal variables is calculated using equations that describe the physical processes dominating the material behavior during cold plastic deformation. For validation, the model is numerically implemented in a general implicit isotropic elasto-viscoplasticity algorithm as a user-defined material subroutine (UMAT) in ABAQUS/Standard and used for finite element simulation of upsetting tests and a complete cold forging cycle of a case-hardenable MnCr steel family.
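
    The abstract does not state the paper's evolution equations; a Kocks-Mecking-type law is a common concrete instance of a dislocation-density internal-variable equation, sketched here with purely illustrative parameters (storage term k1·√ρ, dynamic-recovery term k2·ρ):

```python
import math

def kocks_mecking(strain_rate=1.0, t_max=0.5, dt=1e-4,
                  rho0=1e12, k1=1e8, k2=10.0):
    """Euler integration of a Kocks-Mecking-type evolution law,
    d(rho)/dt = (k1*sqrt(rho) - k2*rho) * strain_rate,
    which saturates at rho = (k1/k2)**2. Values are illustrative,
    not taken from the paper."""
    rho = rho0
    for _ in range(int(t_max / dt)):
        rho += dt * (k1 * math.sqrt(rho) - k2 * rho) * strain_rate
    return rho

rho_final = kocks_mecking()
```

    With these numbers the density rises from 1e12 toward the saturation value (k1/k2)² = 1e14 over a strain of 0.5, the qualitative hardening-to-saturation behavior such models capture.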

  8. Inhibitory effects of losartan and azelnidipine on augmentation of blood pressure variability induced by angiotensin II in rats.

    PubMed

    Jiang, Danfeng; Kawagoe, Yukiko; Kuwasako, Kenji; Kitamura, Kazuo; Kato, Johji

    2017-07-05

    Increased blood pressure variability has been shown to be associated with cardiovascular morbidity and mortality. Recently we reported that continuous infusion of angiotensin II not only elevated blood pressure level, but also increased blood pressure variability in a manner assumed to be independent of blood pressure elevation in rats. In the present study, the effects of the angiotensin type 1 receptor blocker losartan and the calcium channel blocker azelnidipine on angiotensin II-induced blood pressure variability were examined and compared with that of the vasodilator hydralazine in rats. Nine-week-old male Wistar rats were subcutaneously infused with 240 pmol/kg/min angiotensin II for two weeks without or with oral administration of losartan, azelnidipine, or hydralazine. Blood pressure variability was evaluated using a coefficient of variation of blood pressure recorded every 15 min under an unrestrained condition via an abdominal aortic catheter by a radiotelemetry system. Treatment with losartan suppressed both blood pressure elevation and augmentation of systolic blood pressure variability in rats infused with angiotensin II at 7 and 14 days. Azelnidipine also inhibited angiotensin II-induced blood pressure elevation and augmentation of blood pressure variability; meanwhile, hydralazine attenuated the pressor effect of angiotensin II, but had no effect on blood pressure variability. In conclusion, angiotensin II augmented blood pressure variability in an angiotensin type 1 receptor-dependent manner, and azelnidipine suppressed angiotensin II-induced augmentation of blood pressure variability, an effect mediated by a mechanism independent of its blood pressure-lowering action. Copyright © 2017 Elsevier B.V. All rights reserved.
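
    The variability index used here, the coefficient of variation, is just the sample standard deviation scaled by the mean; a sketch with invented 15-min readings:

```python
from statistics import mean, stdev

def coefficient_of_variation(readings):
    """CV = sample SD / mean; a scale-free index of variability."""
    return stdev(readings) / mean(readings)

# hypothetical systolic readings (mmHg) recorded every 15 min from one rat
sbp = [128, 135, 122, 140, 131, 126, 138, 124]
cv = coefficient_of_variation(sbp)
```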

  9. Optimization of the transmission of observable expectation values and observable statistics in continuous-variable teleportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albano Farias, L.; Stephany, J.

    2010-12-15

    We analyze the statistics of observables in continuous-variable (CV) quantum teleportation in the formalism of the characteristic function. We derive expressions for average values of output-state observables, in particular cumulants, which are additive in terms of the input state and the resource of teleportation. Working with a general class of teleportation resources, the squeezed-Bell-like states, which may be optimized in a free parameter for better teleportation performance, we discuss the relation between resources optimal for fidelity and those optimal for different observable averages. We obtain the values of the free parameter of the squeezed-Bell-like states which optimize the central momenta and cumulants up to fourth order. For the cumulants, the distortion between input and output states due to teleportation depends only on the resource. We obtain optimal parameters Δ(2)^opt and Δ(4)^opt for the second- and fourth-order cumulants, which do not depend on the squeezing of the resource. The second-order central momenta, which are equal to the second-order cumulants, and the photon number average are also optimized by the resource with Δ(2)^opt. We show that the optimal fidelity resource, which has been found previously to depend on the characteristics of the input, approaches for high squeezing the resource that optimizes the second-order momenta. A similar behavior is obtained for the resource that optimizes the photon statistics, which is treated here using the sum of the squared differences in photon probabilities of input versus output states as the distortion measure. This is interpreted naturally to mean that the distortions associated with second-order momenta dominate the behavior of the output state for large squeezing of the resource. Optimal fidelity resources and optimal photon statistics resources are compared, and it is shown that for mixtures of Fock states both resources are equivalent.

  10. Eddy correlation measurements of size-dependent cloud droplet turbulent fluxes to complex terrain

    NASA Astrophysics Data System (ADS)

    Vong, Richard J.; Kowalski, Andrew S.

    1995-07-01

    An eddy correlation technique was used to measure the turbulent flux of cloud droplets to complex, forested terrain near the coast of Washington State during the spring of 1993. Excellent agreement was achieved for cloud liquid water content measured by two instruments. Substantial downward liquid water fluxes of ~1 mm per 24 h were measured at night during "steady and continuous" cloud events, about twice the magnitude of those measured by Beswick et al. in Scotland. Cloud water chemical fluxes were estimated to represent up to 50% of the chemical deposition associated with precipitation at the site. An observed size dependence in the turbulent liquid water fluxes suggested that both droplet impaction, which leads to downward fluxes, and phase-change processes, which can lead to upward fluxes, are consistently important contributors to the eddy correlation results. The diameter below which phase-change processes were important to the observed fluxes was shown to depend upon σLL, the relative standard deviation of the liquid water content (LWC) within a 30-min averaging period. The crossover from upward to downward LW flux occurs at 8 µm for steady and continuous cloud events but at ~13 µm for events with a larger degree of LWC variability. This comparison of the two types of cloud events suggested that evaporation was the most likely cause of upward droplet fluxes for the smaller droplets (diameter < 13 µm) during cloud with variable LWC (σLL > 0.3).

  11. Trait-based Modeling Reveals How Plankton Biodiversity-Ecosystem Function (BEF) Relationships Depend on Environmental Variability

    NASA Astrophysics Data System (ADS)

    Smith, S. L.; Chen, B.; Vallina, S. M.

    2017-12-01

    Biodiversity-Ecosystem Function (BEF) relationships, which are most commonly quantified in terms of productivity or total biomass yield, are known to depend on the timescale of the experiment or field study, both for terrestrial plants and phytoplankton, which have each been widely studied as model ecosystems. Although many BEF relationships are positive (i.e., increasing biodiversity enhances function), in some cases there is an optimal intermediate diversity level (i.e., a uni-modal relationship), and in other cases productivity decreases with certain measures of biodiversity. These differences in BEF relationships cannot be reconciled merely by differences in the timescale of experiments. We will present results from simulation experiments applying recently developed trait-based models of phytoplankton communities and ecosystems, using the 'adaptive dynamics' framework to represent continuous distributions of size and other key functional traits. Controlled simulation experiments were conducted with different levels of phytoplankton size-diversity, which through trait-size correlations implicitly represents functional-diversity. One recent study applied a theoretical box model for idealized simulations at different frequencies of disturbance. This revealed how the shapes of BEF relationships depend systematically on the frequency of disturbance and associated nutrient supply. We will also present more recent results obtained using a trait-based plankton ecosystem model embedded in a three-dimensional ocean model applied to the North Pacific. This reveals essentially the same pattern in a spatially explicit model with more realistic environmental forcing. In the relatively more variable subarctic, productivity tends to increase with the size (and hence functional) diversity of phytoplankton, whereas productivity tends to decrease slightly with increasing size-diversity in the relatively calm subtropics. 
Continuous trait-based models can capture essential features of BEF relationships, while requiring far fewer calculations compared to typical plankton diversity models that explicitly simulate a great many idealized species.

  12. Testing quantum contextuality of continuous-variable states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKeown, Gerard; Paternostro, Mauro; Paris, Matteo G. A.

    2011-06-15

    We investigate the violation of noncontextuality by a class of continuous-variable states, including variations of entangled coherent states and a two-mode continuous superposition of coherent states. We generalize the Kochen-Specker (KS) inequality discussed by Cabello [A. Cabello, Phys. Rev. Lett. 101, 210401 (2008)] by using effective bidimensional observables implemented through physical operations acting on continuous-variable states, in a way similar to an approach to the falsification of Bell-Clauser-Horne-Shimony-Holt inequalities put forward recently. We test for state-independent violation of KS inequalities under variable degrees of state entanglement and mixedness. We then demonstrate theoretically the violation of a KS inequality for any two-mode state by using pseudospin observables and a generalized quasiprobability function.

  13. CAN'T MISS--conquer any number task by making important statistics simple. Part 1. Types of variables, mean, median, variance, and standard deviation.

    PubMed

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies, all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 1, presents basic information about data, including a classification system that describes the four major types of variables: continuous quantitative variable, discrete quantitative variable, ordinal categorical variable (including the binomial variable), and nominal categorical variable. A histogram is a graph that displays the frequency distribution for a continuous variable. The article also demonstrates how to calculate the mean, median, standard deviation, and variance for a continuous variable.
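
    The four summary statistics the article walks through map directly onto Python's standard-library statistics module; the sample values below are invented for illustration:

```python
import statistics as st

# hypothetical continuous variable: fasting glucose (mg/dL) in a small sample
glucose = [92.0, 88.5, 101.2, 95.4, 110.8, 99.1, 87.3]

m  = st.mean(glucose)     # arithmetic mean
md = st.median(glucose)   # middle value of the sorted sample
v  = st.variance(glucose) # sample variance (n - 1 denominator)
sd = st.stdev(glucose)    # sample standard deviation, the square root of v
```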

  14. Infrared Space Observatory (ISO) Key Project: the Birth and Death of Planets

    NASA Technical Reports Server (NTRS)

    Stencel, Robert E.; Creech-Eakman, Michelle; Fajardo-Acosta, Sergio; Backman, Dana

    1999-01-01

    This program was designed to continue to analyze observations of stars thought to be forming protoplanets, using the European Space Agency's Infrared Space Observatory, ISO, as one of NASA's Key Projects with ISO. A particular class of stars discovered by the Infrared Astronomy Satellite (IRAS), known after the prototype Vega, are the principal targets for these observations, aimed at examining the evidence for processes involved in forming, or failing to form, planetary systems around other stars. In addition, this program continued to provide partial support for related science in the WIRE, SOFIA and Space Infrared Telescope Facility (SIRTF) projects, plus approved ISO supplementary time observations under programs MCREE1 29 and VEGADMAP. Their goals include time-dependent changes in SWS spectra of Long Period Variable stars and PHOT P32 mapping experiments of recognized protoplanetary disk candidate stars.

  15. Cardiopulmonary disease in newborns: a study in continuing medical education.

    PubMed

    Weinberg, A D; McNamara, D G; Christiansen, C H; Taylor, F M; Armitage, M

    1979-03-01

    A film emphasizing the importance of tachypnea as an early manifestation of congenital heart disease was shown to physicians and nurses at 27 hospitals as part of their regular continuing medical education activities. To evaluate the effects of the program, investigators developed a pretest-posttest design which included a nonequivalent control group. Pretest and posttest data were obtained through chart audit of referrals from subjects in experimental and control groups. Dependent variables used to test the hypothesis included the age at which infants were referred and the age at which tachypnea was noted. Analysis of the data yielded significant gain scores for the experimental group, while changes in the control group were not significant. The findings indicate that a need-oriented educational program can have a measurable impact on improving the quality of patient care.

  16. Statistical analyses of soil properties on a quaternary terrace sequence in the upper sava river valley, Slovenia, Yugoslavia

    USGS Publications Warehouse

    Vidic, N.; Pavich, M.; Lobnik, F.

    1991-01-01

    Alpine glaciations, climatic changes and tectonic movements have created a Quaternary sequence of gravelly carbonate sediments in the upper Sava River Valley, Slovenia, Yugoslavia. The names for terraces assigned in this model, Günz, Mindel, Riss and Würm in order of decreasing age, are used as morphostratigraphic terms. The soil chronosequence on the terraces was examined to evaluate which soil properties are time dependent and can be used to help constrain the ages of glaciofluvial sedimentation. Soil thickness, thickness of Bt horizons, amount and continuity of clay coatings, and amount of Fe and Mn concretions increase with soil age. The main source of variability consists of solution of carbonate, leaching of basic cations and acidification of soils, which are time dependent and increase with the age of soils. The second source of variability is the content of organic matter, which is less time dependent but varies more within soil profiles. Textural changes are significant, represented by solution of carbonate pebbles and sand and formation of a silt loam matrix, which with age becomes finer, with clay loam or clayey texture. The oldest, Günz, terrace shows slight deviation from the general progressive trends of changes of soil properties with time. The hypothesis of single versus multiple periods of deposition was tested with one-way analysis of variance (ANOVA) on a staggered, nested hierarchical sampling design on the terrace of largest extent and greatest gravel volume, the Würm terrace. The variability of soil properties is generally higher within subareas than between areas of the terrace, except for soil thickness. Observed differences in soil thickness between the areas of the terrace could be due to multiple periods of gravel deposition, or to initial differences in texture of the deposits. © 1991.
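
    The one-way ANOVA used here reduces to comparing between-group and within-group mean squares; a self-contained sketch with invented solum-thickness samples from three terrace subareas:

```python
from statistics import mean

def one_way_anova_F(groups):
    """F statistic = between-group mean square / within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k = len(groups)          # number of groups
    n = len(all_vals)        # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical soil-thickness samples (cm) from three subareas of one terrace
F = one_way_anova_F([[40, 45, 43], [52, 55, 50], [41, 44, 42]])
```

    A large F relative to the F distribution's critical value signals that between-area differences exceed within-area scatter, the pattern the study found only for soil thickness.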

  17. Virtual continuity of measurable functions and its applications

    NASA Astrophysics Data System (ADS)

    Vershik, A. M.; Zatitskii, P. B.; Petrov, F. V.

    2014-12-01

    A classical theorem of Luzin states that a measurable function of one real variable is `almost' continuous. For measurable functions of several variables the analogous statement (continuity on a product of sets having almost full measure) does not hold in general. The search for a correct analogue of Luzin's theorem leads to a notion of virtually continuous functions of several variables. This apparently new notion implicitly appears in the statements of embedding theorems and trace theorems for Sobolev spaces. In fact it reveals the nature of such theorems as statements about virtual continuity. The authors' results imply that under the conditions of Sobolev theorems there is a well-defined integration of a function with respect to a wide class of singular measures, including measures concentrated on submanifolds. The notion of virtual continuity is also used for the classification of measurable functions of several variables and in some questions on dynamical systems, the theory of polymorphisms, and bistochastic measures. In this paper the necessary definitions and properties of admissible metrics are recalled, several definitions of virtual continuity are given, and some applications are discussed. Bibliography: 24 titles.

  18. Recent changes and drivers of the atmospheric evaporative demand in the Canary Islands

    NASA Astrophysics Data System (ADS)

    Vicente-Serrano, Sergio M.; Azorin-Molina, Cesar; Sanchez-Lorenzo, Arturo; El Kenawy, Ahmed; Martín-Hernández, Natalia; Peña-Gallardo, Marina; Beguería, Santiago; Tomas-Burguera, Miquel

    2016-08-01

    We analysed the recent evolution and meteorological drivers of the atmospheric evaporative demand (AED) in the Canary Islands for the period 1961-2013. We employed long and high-quality time series of meteorological variables to analyse current AED changes in this region and found that AED has increased during the investigated period. Overall, the annual ETo, which was estimated by means of the FAO-56 Penman-Monteith equation, increased significantly by 18.2 mm decade⁻¹ on average, with a stronger trend in summer (6.7 mm decade⁻¹). In this study we analysed the contribution of (i) the aerodynamic component (related to the water vapour that a parcel of air can store) and (ii) the radiative component (related to the energy available to evaporate a quantity of water) to the decadal variability and trends of ETo. More than 90 % of the observed ETo variability at the seasonal and annual scales can be associated with the variability in the aerodynamic component. The variable that recorded the most significant changes in the Canary Islands was relative humidity, and among the different meteorological factors used to calculate ETo, relative humidity was the main driver of the observed ETo trends. The observed trend could have negative consequences in a number of water-dependent sectors if it continues in the future.

  19. Patient Continued Use of Online Health Care Communities: Web Mining of Patient-Doctor Communication

    PubMed Central

    2018-01-01

    Background In practice, online health communities have passed the adoption stage and reached the diffusion phase of development. In this phase, patients equipped with knowledge regarding the issues involved in health care are capable of switching between different communities to maximize their online health community activities. Online health communities employ doctors to answer patient questions, and high quality online health communities are more likely to be acknowledged by patients. Therefore, the factors that motivate patients to maintain ongoing relationships with online health communities must be addressed. However, this has received limited scholarly attention. Objective The purpose of this study was to identify the factors that drive patients to continue their use of online health communities where doctor-patient communication occurs. This was achieved by integrating the information system success model with online health community features. Methods A Web spider was used to download and extract data from one of the most authoritative Chinese online health communities in which communication occurs between doctors and patients. The time span analyzed in this study was from January 2017 to March 2017. A sample of 469 valid anonymous patients with 9667 posts was obtained (the equivalent of 469 respondents in survey research). A combination of Web mining and structural equation modeling was then conducted to test the research hypotheses. Results The results show that the research framework for integrating the information system success model and online health community features contributes to our understanding of the factors that drive patients' relationships with online health communities. The primary findings are as follows: (1) perceived usefulness is found to be significantly determined by three exogenous variables (i.e., social support, information quality, and service quality; R2=0.88). 
These variables explain 87.6% of the variance in perceived usefulness of online health communities; (2) similarly, patient satisfaction was found to be significantly determined by the three variables listed above (R2=0.69). These variables explain 69.3% of the variance seen in patient satisfaction; (3) continuance use (dependent variable) is significantly influenced by perceived usefulness and patient satisfaction (R2=0.93). That is, the combined effects of perceived usefulness and patient satisfaction explain 93.4% of the variance seen in continuance use; and (4) unexpectedly, individual literacy had no influence on perceived usefulness and satisfaction of patients using online health communities. Conclusions First, this study contributes to the existing literature on the continuance use of online health communities using an empirical approach. Second, an appropriate metric was developed to assess constructs related to the proposed research model. Additionally, a Web spider enabled us to acquire objective data relatively easily and frequently, thereby overcoming a major limitation of survey techniques. PMID:29661747

  20. A Framework for Dynamic Constraint Reasoning Using Procedural Constraints

    NASA Technical Reports Server (NTRS)

    Jonsson, Ari K.; Frank, Jeremy D.

    1999-01-01

    Many complex real-world decision and control problems contain an underlying constraint reasoning problem. This is particularly evident in a recently developed approach to planning, where almost all planning decisions are represented by constrained variables. This translates a significant part of the planning problem into a constraint network whose consistency determines the validity of the plan candidate. Since higher-level choices about control actions can add or remove variables and constraints, the underlying constraint network is invariably highly dynamic. Arbitrary domain-dependent constraints may be added to the constraint network and the constraint reasoning mechanism must be able to handle such constraints effectively. Additionally, real problems often require handling constraints over continuous variables. These requirements present a number of significant challenges for a constraint reasoning mechanism. In this paper, we introduce a general framework for handling dynamic constraint networks with real-valued variables, by using procedures to represent and effectively reason about general constraints. The framework is based on a sound theoretical foundation, and can be proven to be sound and complete under well-defined conditions. Furthermore, the framework provides hybrid reasoning capabilities, as alternative solution methods like mathematical programming can be incorporated into the framework, in the form of procedures.
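
    One minimal way to realize a "procedural constraint" over real-valued variables (a sketch under our own assumptions, not the paper's actual framework) is interval propagation: each variable carries an interval domain, and a procedure narrows the domains of the variables it touches, raising an error if a domain empties:

```python
def prop_sum(domains, x, y, z):
    """Procedure enforcing x + y = z by interval arithmetic.
    Narrows the [lo, hi] domains of all three variables in place."""
    xl, xh = domains[x]
    yl, yh = domains[y]
    zl, zh = domains[z]
    # narrow z from x + y, then x from z - y, then y from z - x
    zl, zh = max(zl, xl + yl), min(zh, xh + yh)
    xl, xh = max(xl, zl - yh), min(xh, zh - yl)
    yl, yh = max(yl, zl - xh), min(yh, zh - xl)
    if xl > xh or yl > yh or zl > zh:
        raise ValueError("inconsistent constraint network")
    domains[x], domains[y], domains[z] = (xl, xh), (yl, yh), (zl, zh)

# hypothetical network: x in [0,10], y in [2,3], z in [4,5], with x + y = z
dom = {"x": (0.0, 10.0), "y": (2.0, 3.0), "z": (4.0, 5.0)}
prop_sum(dom, "x", "y", "z")
```

    Because constraints are plain procedures, new variables and constraints can be added to such a network at runtime, which is the dynamic quality the paper's framework targets.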

  1. Collaborative Research: Robust Climate Projections and Stochastic Stability of Dynamical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilya Zaliapin

    This project focused on conceptual exploration of El Nino/Southern Oscillation (ENSO) variability and sensitivity using a Delay Differential Equation developed in the project. We have (i) established the existence and continuous dependence of solutions of the model; (ii) explored multiple model solutions and the distribution of solution extrema; and (iii) established and explored the phase-locking phenomenon and the existence of multiple solutions for the same values of model parameters. In addition, we have applied to our model the concept of pullback attractor, which greatly facilitated predictive understanding of the nonlinear model's behavior.
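
    Delay differential equations can be integrated with an Euler scheme plus a history buffer; the scalar delayed oscillator below is a standard conceptual stand-in (not the project's actual ENSO model), with parameters chosen so that κτ lies in the decaying-oscillation regime between 1/e and π/2:

```python
def simulate_dde(kappa=1.2, tau=1.0, dt=0.01, t_max=60.0):
    """Euler scheme for the scalar delay equation h'(t) = -kappa * h(t - tau).
    With kappa*tau between 1/e and pi/2 solutions are decaying oscillations.
    Parameters are illustrative only."""
    n_delay = int(tau / dt)
    history = [0.1] * (n_delay + 1)  # constant initial history on [-tau, 0]
    out = []
    for _ in range(int(t_max / dt)):
        h_delayed = history[-(n_delay + 1)]  # h(t - tau)
        h_new = history[-1] + dt * (-kappa * h_delayed)
        history.append(h_new)
        out.append(h_new)
    return out

out = simulate_dde()
```

    The history buffer is what distinguishes this from an ODE solver: each step reads a state one full delay in the past, which is also the source of the oscillations.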

  2. Robust control of a parallel hybrid drivetrain with a CVT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, T.; Schroeder, D.

    1996-09-01

    In this paper the design of a robust control system for a parallel hybrid drivetrain is presented. The drivetrain is based on a continuously variable transmission (CVT) and is therefore a highly nonlinear multiple-input multiple-output (MIMO) system. Input-output linearization offers the possibility of linearizing and decoupling the system. Since, for example, the vehicle mass varies with the load and the efficiency of the gearbox depends strongly on the actual working point, an exact linearization of the plant will mostly fail. Therefore a robust control algorithm based on sliding mode is used to control the drivetrain.

  3. Density waves in granular flow

    NASA Astrophysics Data System (ADS)

    Herrmann, H. J.; Flekkøy, E.; Nagel, K.; Peng, G.; Ristow, G.

    Ample experimental evidence has shown the existence of spontaneous density waves in granular material flowing through pipes or hoppers. Using Molecular Dynamics Simulations we show that several types of waves exist and find that these density fluctuations follow a 1/f spectrum. We compare this behaviour to deterministic one-dimensional traffic models. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car. We also present Lattice Gas and Boltzmann Lattice Models which reproduce the experimentally observed effects. Density waves are spontaneously generated when the viscosity has a nonlinear dependence on density which characterizes granular flow.

  4. A Bayesian Semiparametric Latent Variable Model for Mixed Responses

    ERIC Educational Resources Information Center

    Fahrmeir, Ludwig; Raach, Alexander

    2007-01-01

    In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…

  5. Optimization of a GO2/GH2 Impinging Injector Element

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar

    2001-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) impinging injector element. The unlike impinging element, a fuel-oxidizer-fuel (F-O-F) triplet, is optimized in terms of design variables such as fuel pressure drop, (Delta)P(sub f), oxidizer pressure drop, (Delta)P(sub o), combustor length, L(sub comb), and impingement half-angle, alpha, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 163 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface which includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio. 
Finally, specific variable weights are further increased to illustrate the high marginal cost of realizing the last increment of injector performance and thruster weight.

  6. Regression Methods for Categorical Dependent Variables: Effects on a Model of Student College Choice

    ERIC Educational Resources Information Center

    Rapp, Kelly E.

    2012-01-01

    The use of categorical dependent variables with the classical linear regression model (CLRM) violates many of the model's assumptions and may result in biased estimates (Long, 1997; O'Connell, Goldstein, Rogers, & Peng, 2008). Many dependent variables of interest to educational researchers (e.g., professorial rank, educational attainment) are…

  7. A stochastic hybrid systems based framework for modeling dependent failure processes

    PubMed Central

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system's discrete states are triggered by random shocks. The modeling is then based on Stochastic Hybrid Systems (SHS), whose state space comprises a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations is derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and the Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from the literature, and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework yields an accurate estimation of reliability at lower computational cost than traditional Monte Carlo-based methods. PMID:28231313
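
    As a rough illustration of the reliability-estimation step described above, the sketch below computes a FOSM reliability estimate and a Markov-inequality lower bound from an assumed mean and variance of the degradation state; the threshold and moment values are hypothetical, not taken from the paper.

```python
import math

def fosm_reliability(mu, var, threshold):
    """First Order Second Moment estimate: reliability index
    beta = (threshold - mu) / sigma, reliability ~ Phi(beta)
    under a normal approximation of the degradation state."""
    beta = (threshold - mu) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

def markov_lower_bound(mu, threshold):
    """Markov inequality for a nonnegative degradation state X:
    P(X >= threshold) <= E[X] / threshold,
    hence reliability >= 1 - mu / threshold."""
    return 1.0 - mu / threshold

# Hypothetical conditional moments: mean 2.0, variance 0.25, failure at 4.0
r_fosm = fosm_reliability(2.0, 0.25, 4.0)
r_bound = markov_lower_bound(2.0, 4.0)
```

    As the abstract notes, the Markov bound is loose but assumption-free, while the FOSM value uses both conditional moments.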

  8. A stochastic hybrid systems based framework for modeling dependent failure processes.

    PubMed

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system's discrete states are triggered by random shocks. The modeling is then based on Stochastic Hybrid Systems (SHS), whose state space comprises a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations is derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and the Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from the literature, and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework yields an accurate estimation of reliability at lower computational cost than traditional Monte Carlo-based methods.

  9. A model for the progressive failure of laminated composite structural components

    NASA Technical Reports Server (NTRS)

    Allen, D. H.; Lo, D. C.

    1991-01-01

    Laminated continuous fiber polymeric composites are capable of sustaining substantial load induced microstructural damage prior to component failure. Because this damage eventually leads to catastrophic failure, it is essential to capture the mechanics of progressive damage in any cogent life prediction model. For the past several years the authors have been developing one solution approach to this problem. In this approach the mechanics of matrix cracking and delamination are accounted for via locally averaged internal variables which account for the kinematics of microcracking. Damage progression is predicted by using phenomenologically based damage evolution laws which depend on the load history. The result is a nonlinear and path dependent constitutive model which has previously been implemented in a finite element computer code for analysis of structural components. Using an appropriate failure model, this algorithm can be used to predict component life. In this paper the model will be utilized to demonstrate the ability to predict the load path dependence of the damage and stresses in plates subjected to fatigue loading.

  10. Integrating Ecosystem Carbon Dynamics into State-and-Transition Simulation Models of Land Use/Land Cover Change

    NASA Astrophysics Data System (ADS)

    Sleeter, B. M.; Daniel, C.; Frid, L.; Fortin, M. J.

    2016-12-01

    State-and-transition simulation models (STSMs) provide a general approach for incorporating uncertainty into forecasts of landscape change. Using a Monte Carlo approach, STSMs generate spatially-explicit projections of the state of a landscape based upon probabilistic transitions defined between states. While STSMs are based on the basic principles of Markov chains, they have additional properties that make them applicable to a wide range of questions and types of landscapes. A current limitation of STSMs is that they are only able to track the fate of discrete state variables, such as land use/land cover (LULC) classes. There are some landscape modelling questions, however, for which continuous state variables - for example carbon biomass - are also required. Here we present a new approach for integrating continuous state variables into spatially-explicit STSMs. Specifically we allow any number of continuous state variables to be defined for each spatial cell in our simulations; the value of each continuous variable is then simulated forward in discrete time as a stochastic process based upon defined rates of change between variables. These rates can be defined as a function of the realized states and transitions of each cell in the STSM, thus providing a connection between the continuous variables and the dynamics of the landscape. We demonstrate this new approach by (1) developing a simple IPCC Tier 3 compliant model of ecosystem carbon biomass, where the continuous state variables are defined as terrestrial carbon biomass pools and the rates of change as carbon fluxes between pools, and (2) integrating this carbon model with an existing LULC change model for the state of Hawaii, USA.
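
    A minimal sketch of the idea of attaching continuous state variables (here, two carbon pools) to a state-and-transition cell: deterministic fluxes depend on the cell's realized discrete state, and a stochastic transition both flips the state and moves carbon between pools. The states, pools, rates, and disturbance probability are hypothetical placeholders, not the authors' Hawaii model.

```python
import random

# Illustrative per-state flux rates (fractions per time step)
FLUX = {
    "forest":    {"growth": 0.05, "mortality": 0.02, "decay": 0.03},
    "grassland": {"growth": 0.02, "mortality": 0.04, "decay": 0.05},
}

def step_cell(state, living, dead, p_disturb=0.01, rng=random):
    """Advance one cell one time step: deterministic fluxes set by the
    cell's discrete state, plus a stochastic disturbance transition that
    moves living carbon into the dead pool and changes the state."""
    r = FLUX[state]
    growth = r["growth"] * living
    mortality = r["mortality"] * living
    decay = r["decay"] * dead
    living = living + growth - mortality
    dead = dead + mortality - decay
    if rng.random() < p_disturb:      # e.g. fire converts forest to grassland
        dead += living * 0.5          # half of living biomass transferred
        living *= 0.5
        state = "grassland"
    return state, living, dead

rng = random.Random(42)
state, living, dead = "forest", 100.0, 40.0
for _ in range(50):
    state, living, dead = step_cell(state, living, dead, rng=rng)
```

    The connection to the STSM is the `state` argument: the continuous pools evolve at rates conditioned on each cell's realized state and transitions.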

  11. Development of a statistical model for the determination of the probability of riverbank erosion in a Mediterranean river basin

    NASA Astrophysics Data System (ADS)

    Varouchakis, Emmanouil; Kourgialas, Nektarios; Karatzas, George; Giannakis, Georgios; Lilli, Maria; Nikolaidis, Nikolaos

    2014-05-01

    Riverbank erosion affects the river morphology and the local habitat and results in riparian land loss, damage to property and infrastructures, ultimately weakening flood defences. An important issue concerning riverbank erosion is the identification of the areas vulnerable to erosion, as it allows for predicting changes and assists with stream management and restoration. One way to predict the areas vulnerable to erosion is to determine the erosion probability by identifying the underlying relations between riverbank erosion and the geomorphological and/or hydrological variables that prevent or stimulate erosion. A statistical model for evaluating the probability of erosion based on a series of independent local variables and by using logistic regression is developed in this work. The main variables affecting erosion are vegetation index (stability), the presence or absence of meanders, bank material (classification), stream power, bank height, river bank slope, riverbed slope, cross section width and water velocities (Luppi et al. 2009). In statistics, logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable, e.g. a binary response, based on one or more predictor variables (continuous or categorical). The probabilities of the possible outcomes are modelled as a function of independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. Then, a logistic regression is formed, which predicts success or failure of a given binary variable (e.g. 1 = "presence of erosion" and 0 = "no erosion") for any value of the independent variables. The regression coefficients are estimated by using maximum likelihood estimation.
The erosion occurrence probability can be calculated in conjunction with the model deviance regarding the independent variables tested (Atkinson et al. 2003). The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. The aim is to determine the probability of erosion along the Koiliaris' riverbanks considering a series of independent geomorphological and/or hydrological variables. Data for the river bank slope and for the river cross section width are available at ten locations along the river. The riverbank has indications of erosion at six of the ten locations while four have remained stable. Based on a recent work, measurements for the two independent variables and data regarding bank stability are available at eight different locations along the river. These locations were used as validation points for the proposed statistical model. The results show a very close agreement between the observed erosion indications and the statistical model, as the probability of erosion was accurately predicted at seven out of the eight locations. The next step is to apply the model at more locations along the riverbanks. In November 2013, stakes were inserted at selected locations in order to be able to identify the presence or absence of erosion after the winter period. In April 2014 the presence or absence of erosion will be identified and the model results will be compared to the field data. Our intent is to extend the model by increasing the number of independent variables in order to identify the key factors favouring erosion along the Koiliaris River. We aim at developing an easy-to-use statistical tool that will provide a quantified measure of the erosion probability along the riverbanks, which could consequently be used to prevent erosion and flooding events. Atkinson, P. M., German, S. E., Sear, D. A. and Clark, M. J. 2003.
Exploring the relations between riverbank erosion and geomorphological controls using geographically weighted logistic regression. Geographical Analysis, 35 (1), 58-82. Luppi, L., Rinaldi, M., Teruggi, L. B., Darby, S. E. and Nardi, L. 2009. Monitoring and numerical modelling of riverbank erosion processes: A case study along the Cecina River (central Italy). Earth Surface Processes and Landforms, 34 (4), 530-546. Acknowledgements This work is part of an on-going THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers). The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
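
    A minimal sketch of the logistic-regression step described in the record above, fit by gradient ascent on the log-likelihood as a stand-in for maximum likelihood estimation. The two predictors (bank slope, cross-section width) and the observations are hypothetical, not the Koiliaris data.

```python
import math

def fit_logistic(X, y, lr=0.1, iters=5000):
    """Fit P(erosion = 1) = 1 / (1 + exp(-(b0 + b.x))) by gradient
    ascent on the log-likelihood; w[0] is the intercept."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p                      # residual drives the update
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / len(y) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    """Probability score for one observation."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical (bank slope, cross-section width) observations; 1 = erosion
X = [(0.9, 3.0), (0.8, 2.5), (0.7, 4.0), (0.2, 8.0), (0.3, 7.5), (0.1, 9.0)]
y = [1, 1, 1, 0, 0, 0]
w = fit_logistic(X, y)
```

    Thresholding the probability score at 0.5 yields the binary "presence of erosion" / "no erosion" prediction described in the abstract.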

  12. Frequency-dependent ultrasound-induced transformation in E. coli.

    PubMed

    Deeks, Jeremy; Windmill, James; Agbeze-Onuma, Maduka; Kalin, Robert M; Argondizza, Peter; Knapp, Charles W

    2014-12-01

    Ultrasound-enhanced gene transfer (UEGT) is continuing to gain interest across many disciplines; however, very few studies investigate UEGT efficiency across a range of frequencies. Using a variable frequency generator, UEGT was tested in E. coli at six ultrasonic frequencies. Results indicate that frequency can significantly influence UEGT efficiency, both positively and negatively. A frequency of 61 kHz improved UEGT efficiency by ~70 %, but 99 kHz impeded UEGT to an extent worse than no ultrasound exposure. The other four frequencies (26, 133, 174, and 190 kHz) enhanced transformation compared to no ultrasound, but efficiencies did not vary. The influence of frequency on UEGT efficiency was observed across a range of operating frequencies. It is plausible that frequency-dependent dynamics of mechanical and chemical energies released during cavitational-bubble collapse (CBC) are responsible for the observed UEGT efficiencies.

  13. Evaluation of trends in wheat yield models

    NASA Technical Reports Server (NTRS)

    Ferguson, M. C.

    1982-01-01

    Trend terms in models for wheat yield in the U.S. Great Plains for the years 1932 to 1976 are evaluated. The subset of meteorological variables yielding the largest adjusted R(2) is selected using the method of leaps and bounds. Latent root regression is used to eliminate multicollinearities, and generalized ridge regression is used to introduce bias to provide stability in the data matrix. The regression model used provides for two trends in each of two models: a dependent model in which the trend line is piece-wise continuous, and an independent model in which the trend line is discontinuous at the year of the slope change. It was found that the trend lines best describing the wheat yields consisted of combinations of increasing, decreasing, and constant trend: four combinations for the dependent model and seven for the independent model.
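
    The two trend parameterizations described above can be sketched as follows; the parameter names are illustrative, not those of the report. The dependent model uses a hinge term so the slope changes at year c with no jump in level, while the independent model fits a separate line on each side of c.

```python
def dependent_trend(t, a, b1, b2, c):
    """Piece-wise continuous trend: slope changes from b1 to b1 + b2
    at year c, with no discontinuity in level."""
    return a + b1 * t + b2 * max(0.0, t - c)

def independent_trend(t, a1, b1, a2, b2, c):
    """Discontinuous trend: independent line on each side of year c,
    so the fitted level may jump at the slope-change year."""
    return a1 + b1 * t if t < c else a2 + b2 * t
```

    Fitting either form reduces to ordinary least squares once the slope-change year c is fixed (e.g. by scanning candidate years).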

  14. Posterior composite restoration update: focus on factors influencing form and function

    PubMed Central

    Bohaty, Brenda S; Ye, Qiang; Misra, Anil; Sene, Fabio; Spencer, Paulette

    2013-01-01

    Restoring posterior teeth with resin-based composite materials continues to gain popularity among clinicians, and the demand for such aesthetic restorations is increasing. Indeed, the most common aesthetic alternative to dental amalgam is resin composite. Moderate to large posterior composite restorations, however, have higher failure rates, more recurrent caries, and increased frequency of replacement. Investigators across the globe are researching new materials and techniques that will improve the clinical performance, handling characteristics, and mechanical and physical properties of composite resin restorative materials. Despite such attention, large to moderate posterior composite restorations continue to have a clinical lifetime that is approximately one-half that of the dental amalgam. While there are numerous recommendations regarding preparation design, restoration placement, and polymerization technique, current research indicates that restoration longevity depends on several variables that may be difficult for the dentist to control. These variables include the patient’s caries risk, tooth position, patient habits, number of restored surfaces, the quality of the tooth–restoration bond, and the ability of the restorative material to produce a sealed tooth–restoration interface. Although clinicians tend to focus on tooth form when evaluating the success and failure of posterior composite restorations, the emphasis must remain on advancing our understanding of the clinical variables that impact the formation of a durable seal at the restoration–tooth interface. This paper presents an update of existing technology and underscores the mechanisms that negatively impact the durability of posterior composite restorations in permanent teeth. PMID:23750102

  15. Quantization of high dimensional Gaussian vector using permutation modulation with application to information reconciliation in continuous variable QKD

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar

    This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bit per sample accuracy, which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over a public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.
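
    A minimal sketch of the quantization step for a permutation-modulation codebook: since all codewords are permutations of a single template vector, the nearest codeword (in Euclidean distance) simply matches the template entries to the rank order of the sample magnitudes, so no codebook search is needed. The template values are hypothetical; sign information is handled separately, as in the abstract.

```python
def pm_quantize(x, template):
    """Quantize vector x against a PM codebook whose codewords are all
    permutations of `template`: place the largest template entry at the
    index of the largest |x| entry, and so on by rank."""
    assert len(x) == len(template)
    tmpl = sorted(template, reverse=True)
    order = sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)
    code = [0.0] * len(x)
    for rank, idx in enumerate(order):
        code[idx] = tmpl[rank]
    return code

# Hypothetical Gaussian samples and a 4-entry template
x = [0.1, -2.3, 0.7, -0.4]
code = pm_quantize(x, [2.0, 1.0, 1.0, 0.0])
```

    The PM label transmitted between the parties is essentially the sorting permutation `order`, which is why the codebook scales gracefully to vectors with thousands of samples.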

  16. Long-Term Variability of AGN at Hard X-Rays

    NASA Technical Reports Server (NTRS)

    Soldi, S.; Beckmann, V.; Baumgartner W. H.; Ponti, G.; Shrader, C. R.; Lubinski, P.; Krimm, H. A.; Mattana, F.; Tueller, J.

    2013-01-01

    Variability at all observed wavelengths is a distinctive property of active galactic nuclei (AGN). Hard X-rays provide us with a view of the innermost regions of AGN, mostly unbiased by absorption along the line of sight. Characterizing the intrinsic hard X-ray variability of a large AGN sample and comparing it to the results obtained at lower X-ray energies can significantly contribute to our understanding of the mechanisms underlying the high-energy radiation. Swift/BAT provides us with the unique opportunity to follow, on time scales of days to years and with a regular sampling, the 14-195 keV emission of the largest AGN sample available to date for this kind of investigation. As a continuation of an early work on the first 9 months of BAT data, we study the amplitude of the variations, and their dependence on sub-class and on energy, for a sample of 110 radio quiet and radio loud AGN selected from the BAT 58-month survey. About 80 of the AGN in the sample are found to exhibit significant variability on months to years time scales, radio loud sources being the most variable. The amplitude of the variations and their energy dependence are incompatible with variability being driven at hard X-rays by changes of the absorption column density. In general, the variations in the 14-24 and 35-100 keV bands are well correlated, suggesting a common origin of the variability across the BAT energy band. However, radio quiet AGN display on average 10% larger variations at 14-24 keV than at 35-100 keV, and a softer-when-brighter behavior for most of the Seyfert galaxies with detectable spectral variability on month time scales. In addition, sources with harder spectra are found to be more variable than softer ones. These properties are generally consistent with a variable power law continuum, in flux and shape, pivoting at energies ~50 keV, to which a constant reflection component is superposed.
When the same time scales are considered, the timing properties of AGN at hard X-rays are comparable to those at lower energies, with at least some of the differences possibly ascribable to components contributing differently in the two energy domains (e.g., reflection, absorption).

  17. Interval estimation and optimal design for the within-subject coefficient of variation for continuous and binary variables

    PubMed Central

    Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D

    2006-01-01

    Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables and based on its maximum likelihood estimation we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of the confidence interval on the within-subject coefficient of variation. The maximum likelihood estimation and sample size estimation based on a pre-specified confidence interval width are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
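
    A minimal sketch of the point estimate underlying the abstract above, assuming a balanced one-way random-effects layout: the within-subject coefficient of variation is the square root of the pooled within-subject variance divided by the grand mean. The data values are hypothetical, not the paper's MRI example.

```python
import statistics

def within_subject_cv(data):
    """Within-subject coefficient of variation from replicated
    measurements (list of per-subject replicate lists):
    sqrt(pooled within-subject variance) / grand mean."""
    within_vars = [statistics.variance(subject) for subject in data]
    pooled = sum(within_vars) / len(within_vars)   # equal replicates assumed
    grand_mean = statistics.mean(v for subject in data for v in subject)
    return pooled ** 0.5 / grand_mean

# Hypothetical data: 3 subjects, 3 repeated measurements each
data = [[10.0, 10.4, 9.8], [12.1, 11.9, 12.3], [8.9, 9.1, 9.0]]
cv = within_subject_cv(data)
```

    The variance-stabilizing transformation and confidence interval discussed in the paper are built on top of this point estimate.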

  18. Considerations of multiple imputation approaches for handling missing data in clinical trials.

    PubMed

    Quan, Hui; Qi, Li; Luo, Xiaodong; Darchy, Loic

    2018-07-01

    Missing data exist in all clinical trials, and missing data seriously affect the interpretability of trial results. There is no universally applicable solution for all missing data problems. The methods used for handling missing data depend on the circumstances, particularly the assumptions on the missing data mechanisms. In recent years, if the missing at random mechanism cannot be assumed, conservative approaches such as the control-based and return-to-baseline multiple imputation approaches are applied for dealing with the missing data issues. In this paper, we focus on the variability in data analysis of these approaches. As demonstrated by examples, the choice of the variability can impact the conclusion of the analysis. Besides the methods for continuous endpoints, we also discuss methods for binary and time-to-event endpoints as well as considerations for non-inferiority assessment. Copyright © 2018. Published by Elsevier Inc.

  19. Comparison of neurofuzzy logic and decision trees in discovering knowledge from experimental data of an immediate release tablet formulation.

    PubMed

    Shao, Q; Rowe, R C; York, P

    2007-06-01

    Understanding of the cause-effect relationships between formulation ingredients, process conditions and product properties is essential for developing a quality product. However, the formulation knowledge is often hidden in experimental data and not easily interpretable. This study compares neurofuzzy logic and decision tree approaches in discovering hidden knowledge from an immediate release tablet formulation database relating formulation ingredients (silica aerogel, magnesium stearate, microcrystalline cellulose and sodium carboxymethylcellulose) and process variables (dwell time and compression force) to tablet properties (tensile strength, disintegration time, friability, capping and drug dissolution at various time intervals). Both approaches successfully generated useful knowledge in the form of either "if then" rules or decision trees. Although different strategies are employed by the two approaches in generating rules/trees, similar knowledge was discovered in most cases. However, as decision trees are not able to deal with continuous dependent variables, data discretisation procedures are generally required.
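
    The discretisation step mentioned in the last sentence above can be sketched as simple equal-width binning of the continuous dependent variable, so that a classification tree can be trained on it; the bin count and the tensile-strength values are illustrative only.

```python
def discretise(values, n_bins=3):
    """Equal-width binning of a continuous dependent variable;
    returns a bin index (0 .. n_bins - 1) per value."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0      # guard the all-equal case
    # min(...) keeps the maximum value inside the top bin
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

# e.g. hypothetical tablet tensile strengths
bins = discretise([0.8, 1.1, 1.9, 2.4, 3.0, 3.3], n_bins=3)
```

    The cost of this workaround, as the abstract implies, is that information within each bin is lost, which is one reason neurofuzzy logic can be preferable for continuous responses.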

  20. Spatial variability of concentrations of chlorophyll a, dissolved organic matter and suspended particles in the surface layer of the Kara Sea in September 2011 from lidar data

    NASA Astrophysics Data System (ADS)

    Pelevin, V. V.; Zavjalov, P. O.; Belyaev, N. A.; Konovalov, B. V.; Kravchishina, M. D.; Mosharov, S. A.

    2017-01-01

    The article presents results of underway remote laser sensing of the surface water layer in continuous automatic mode using the UFL-9 fluorescent lidar onboard the R/V Akademik Mstislav Keldysh during cruise 59 in the Kara Sea in 2011. The description of the lidar, the approach to interpreting seawater fluorescence data, and certain methodical aspects of instrument calibration and measurement are presented. Calibration of the lidar is based on laboratory analysis of water samples taken from the sea surface during the cruise. Spatial distribution of chlorophyll a, total organic carbon and suspended matter concentrations in the upper quasi-homogeneous layer are mapped and the characteristic scales of the variability are estimated. Some dependencies between the patchiness of the upper water layer and the atmospheric forcing and freshwater runoff are shown.

  1. An opinion-driven behavioral dynamics model for addictive behaviors

    NASA Astrophysics Data System (ADS)

    Moore, Thomas W.; Finley, Patrick D.; Apelberg, Benjamin J.; Ambrose, Bridget K.; Brodsky, Nancy S.; Brown, Theresa J.; Husten, Corinne; Glass, Robert J.

    2015-04-01

    We present a model of behavioral dynamics that combines a social network-based opinion dynamics model with behavioral mapping. The behavioral component is discrete and history-dependent to represent situations in which an individual's behavior is initially driven by opinion and later constrained by physiological or psychological conditions that serve to maintain the behavior. Individuals are modeled as nodes in a social network connected by directed edges. Parameter sweeps illustrate model behavior and the effects of individual parameters and parameter interactions on model results. Mapping a continuous opinion variable into a discrete behavioral space induces clustering on directed networks. Clusters provide targets of opportunity for influencing the network state; however, the smaller the network the greater the stochasticity and potential variability in outcomes. This has implications both for behaviors that are influenced by close relationships versus those influenced by societal norms and for the effectiveness of strategies for influencing those behaviors.
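
    A minimal sketch of a history-dependent mapping from a continuous opinion to a discrete behavior of the kind described above, using hysteresis thresholds: starting the behavior requires a high opinion, but once established it persists until opinion falls much lower (standing in for the physiological maintenance of the behavior). The threshold values are hypothetical, not the authors' parameterization.

```python
def update_behavior(opinion, behavior, start=0.7, stop=0.3):
    """Map a continuous opinion in [0, 1] to a discrete behavior
    (0 = absent, 1 = established) with hysteresis: the transition
    depends on the current behavior, making the mapping
    history-dependent."""
    if behavior == 0 and opinion > start:
        return 1
    if behavior == 1 and opinion < stop:
        return 0
    return behavior
```

    The gap between `start` and `stop` is what makes intermediate opinions compatible with either behavior, which is one source of the clustering the abstract describes.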

  2. General implementation of arbitrary nonlinear quadrature phase gates

    NASA Astrophysics Data System (ADS)

    Marek, Petr; Filip, Radim; Ogawa, Hisashi; Sakaguchi, Atsushi; Takeda, Shuntaro; Yoshikawa, Jun-ichi; Furusawa, Akira

    2018-02-01

    We propose general methodology of deterministic single-mode quantum interaction nonlinearly modifying single quadrature variable of a continuous-variable system. The methodology is based on linear coupling of the system to ancillary systems subsequently measured by quadrature detectors. The nonlinear interaction is obtained by using the data from the quadrature detection for dynamical manipulation of the coupling parameters. This measurement-induced methodology enables direct realization of arbitrary nonlinear quadrature interactions without the need to construct them from the lowest-order gates. Such nonlinear interactions are crucial for more practical and efficient manipulation of continuous quadrature variables as well as qubits encoded in continuous-variable systems.

  3. Optimality of Gaussian attacks in continuous-variable quantum cryptography.

    PubMed

    Navascués, Miguel; Grosshans, Frédéric; Acín, Antonio

    2006-11-10

    We analyze the asymptotic security of the family of Gaussian modulated quantum key distribution protocols for continuous-variable systems. We prove that the Gaussian unitary attack is optimal for all the considered bounds on the key rate when the first and second momenta of the canonical variables involved are known by the honest parties.

  4. Biomechanical symmetry in elite rugby union players during dynamic tasks: an investigation using discrete and continuous data analysis techniques.

    PubMed

    Marshall, Brendan; Franklyn-Miller, Andrew; Moran, Kieran; King, Enda; Richter, Chris; Gore, Shane; Strike, Siobhán; Falvey, Éanna

    2015-01-01

    While measures of asymmetry may provide a means of identifying individuals predisposed to injury, normative asymmetry values for challenging sport specific movements in elite athletes are currently lacking in the literature. In addition, previous studies have typically investigated symmetry using discrete point analyses alone. This study examined biomechanical symmetry in elite rugby union players using both discrete point and continuous data analysis techniques. Twenty elite injury free international rugby union players (mean ± SD: age 20.4 ± 1.0 years; height 1.86 ± 0.08 m; mass 98.4 ± 9.9 kg) underwent biomechanical assessment. A single leg drop landing, a single leg hurdle hop, and a running cut were analysed. Peak joint angles and moments were examined in the discrete point analysis while analysis of characterising phases (ACP) techniques were used to examine the continuous data. Dominant side was compared to non-dominant side using dependent t-tests for normally distributed data or Wilcoxon signed-rank test for non-normally distributed data. The significance level was set at α = 0.05. The majority of variables were found to be symmetrical with a total of 57/60 variables displaying symmetry in the discrete point analysis and 55/60 in the ACP. The five variables that were found to be asymmetrical were hip abductor moment in the drop landing (p = 0.02), pelvis lift/drop in the drop landing (p = 0.04) and hurdle hop (p = 0.02), ankle internal rotation moment in the cut (p = 0.04) and ankle dorsiflexion angle also in the cut (p = 0.01). The ACP identified two additional asymmetries not identified in the discrete point analysis. Elite injury free rugby union players tended to exhibit bi-lateral symmetry across a range of biomechanical variables in a drop landing, hurdle hop and cut. This study provides useful normative values for inter-limb symmetry in these movement tests. 
When examining symmetry it is recommended to incorporate continuous data analysis techniques rather than a discrete point analysis alone; a discrete point analysis was unable to detect two of the five asymmetries identified.

  5. Continuous-variable quantum network coding for coherent states

    NASA Astrophysics Data System (ADS)

    Shang, Tao; Li, Ke; Liu, Jian-wei

    2017-04-01

    As far as the spectral characteristic of quantum information is concerned, the existing quantum network coding schemes can be looked on as discrete-variable quantum network coding schemes. Considering the practical advantage of continuous variables, in this paper, we explore two feasible continuous-variable quantum network coding (CVQNC) schemes. Basic operations and CVQNC schemes are both provided. The first scheme is based on Gaussian cloning and ADD/SUB operators and can transmit two coherent states across the network with a fidelity of 1/2, while the second scheme utilizes continuous-variable quantum teleportation and can transmit two coherent states perfectly. By encoding classical information on quantum states, quantum network coding schemes can be utilized to transmit classical information. Scheme analysis shows that compared with the discrete-variable paradigms, the proposed CVQNC schemes provide better network throughput from the viewpoint of classical information transmission. By modulating the amplitude and phase quadratures of coherent states with classical characters, the first scheme and the second scheme can transmit 4 log2 N and 2 log2 N bits of information by a single network use, respectively.

  6. Analysis of isothermal and cooling-rate-dependent immersion freezing by a unifying stochastic ice nucleation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alpert, Peter A.; Knopf, Daniel A.

    Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature, T, and relative humidity, RH, at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling-rate-dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nucleating particles (INPs) all have the same INP surface area (ISA); however, the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. Descriptions of ice active sites and variability of contact angles have been successfully formulated to describe ice nucleation experimental data in previous research; however, we consider the ability of a stochastic freezing model founded on classical nucleation theory to reproduce previous results and to explain experimental uncertainties and data scatter. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses parameters including the total number of droplets, N_tot, and the heterogeneous ice nucleation rate coefficient, J_het(T). This model is applied to address whether (i) a time and ISA-dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture with subsequent consequences for analysis and interpretation of immersion freezing. 
The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods such as: droplets on a cold-stage exposed to air or surrounded by an oil matrix, wind and acoustically levitated droplets, droplets in a continuous-flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time-dependent isothermal frozen fractions exhibiting non-exponential behavior can be readily explained by this model considering varying ISA. An apparent cooling-rate dependence of J_het is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling-rate dependence of ice nucleation kinetics vanishes as expected from classical nucleation theory. Finally, the model simulations allow for a quantitative experimental uncertainty analysis for parameters N_tot, T, RH, and the ISA variability. We discuss the implications of our results for experimental analysis and interpretation of the immersion freezing process.
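The model's central point can be illustrated numerically: under classical nucleation theory a droplet with INP surface area A remains unfrozen with probability exp(-J_het·A·t), so identical ISA yields an exponential frozen fraction, while a spread of ISA across droplets yields the non-exponential behavior the study reports. All parameter values below are illustrative, not fitted to any dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

J_het = 1e4                       # nucleation rate coefficient (cm^-2 s^-1), illustrative
N_tot = 10_000                    # number of droplets
t = np.linspace(0.0, 60.0, 121)   # time (s)

# Identical ISA in every droplet: frozen fraction is exponential in time
A_fixed = 1e-5                    # cm^2 per droplet
f_identical = 1 - np.exp(-J_het * A_fixed * t)

# Variable ISA: lognormal spread of surface areas with the same mean
sigma = 1.5
A_var = rng.lognormal(mean=np.log(A_fixed) - 0.5 * sigma**2, sigma=sigma, size=N_tot)
f_variable = 1 - np.exp(-J_het * A_var[:, None] * t).mean(axis=0)

# Large-ISA droplets freeze early and small-ISA droplets linger, so the
# unfrozen fraction decays more slowly than an exponential (Jensen's inequality).
print(f"frozen fraction at 60 s, identical ISA: {f_identical[-1]:.3f}")
print(f"frozen fraction at 60 s, variable ISA:  {f_variable[-1]:.3f}")
```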

  7. Analysis of isothermal and cooling-rate-dependent immersion freezing by a unifying stochastic ice nucleation model

    DOE PAGES

    Alpert, Peter A.; Knopf, Daniel A.

    2016-02-24

    Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature, T, and relative humidity, RH, at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling-rate-dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nucleating particles (INPs) all have the same INP surface area (ISA); however, the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. Descriptions of ice active sites and variability of contact angles have been successfully formulated to describe ice nucleation experimental data in previous research; however, we consider the ability of a stochastic freezing model founded on classical nucleation theory to reproduce previous results and to explain experimental uncertainties and data scatter. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses parameters including the total number of droplets, N_tot, and the heterogeneous ice nucleation rate coefficient, J_het(T). This model is applied to address whether (i) a time and ISA-dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture with subsequent consequences for analysis and interpretation of immersion freezing. 
The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods such as: droplets on a cold-stage exposed to air or surrounded by an oil matrix, wind and acoustically levitated droplets, droplets in a continuous-flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time-dependent isothermal frozen fractions exhibiting non-exponential behavior can be readily explained by this model considering varying ISA. An apparent cooling-rate dependence of J_het is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling-rate dependence of ice nucleation kinetics vanishes as expected from classical nucleation theory. Finally, the model simulations allow for a quantitative experimental uncertainty analysis for parameters N_tot, T, RH, and the ISA variability. We discuss the implications of our results for experimental analysis and interpretation of the immersion freezing process.

  8. Analysis of the labor productivity of enterprises via quantile regression

    NASA Astrophysics Data System (ADS)

    Türkan, Semra

    2017-07-01

    In this study, we have analyzed the factors that affect the performance of Turkey's Top 500 Industrial Enterprises using quantile regression. The variable about labor productivity of enterprises is considered as the dependent variable, and the variable about assets is considered as the independent variable. The distribution of labor productivity of enterprises is right-skewed. If the dependent distribution is skewed, linear regression cannot capture important aspects of the relationships between the dependent variable and its predictors because it models only the conditional mean. Hence quantile regression, which allows modeling any quantile of the dependent distribution, including the median, is useful. It examines whether relationships between dependent and independent variables differ for low, medium, and high percentiles. The analysis shows that the effect of total assets is relatively constant over the entire distribution, except the upper tail, where it has a moderately stronger effect.
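Quantile regression of this kind can be sketched without assuming any particular statistics package by minimizing the pinball (check) loss directly. The data below are a synthetic right-skewed "productivity vs. assets" stand-in, not the Top 500 data, and the coefficients are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Synthetic right-skewed data: additive lognormal noise skews the outcome
assets = rng.uniform(1.0, 10.0, size=400)
productivity = 2.0 + 0.8 * assets + rng.lognormal(0.0, 0.6, size=400)

def quantile_fit(x, y, q):
    """Fit y ~ a + b*x for quantile q by minimizing the pinball loss
    mean(max(q*r, (q-1)*r)) over residuals r."""
    def pinball(beta):
        r = y - (beta[0] + beta[1] * x)
        return np.mean(np.maximum(q * r, (q - 1) * r))
    return minimize(pinball, x0=[0.0, 0.0], method="Nelder-Mead",
                    options={"maxiter": 2000}).x

for q in (0.10, 0.50, 0.90):
    intercept, slope = quantile_fit(assets, productivity, q)
    print(f"q = {q:.2f}: slope = {slope:.2f}")
```

Comparing the fitted slopes across quantiles is exactly the check described in the abstract: a slope that changes in the upper tail signals that the predictor acts differently on high-productivity enterprises.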

  9. Temporal variability in stage-discharge relationships

    NASA Astrophysics Data System (ADS)

    Guerrero, José-Luis; Westerberg, Ida K.; Halldin, Sven; Xu, Chong-Yu; Lundin, Lars-Christer

    2012-06-01

    Summary: Although discharge estimations are central for water management and hydropower, there are few studies on the variability and uncertainty of their basis: the derivation of discharge from stage height through a rating curve that depends on riverbed geometry. A large fraction of the world's river-discharge stations are presumably located in alluvial channels where riverbed characteristics may change over time because of erosion and sedimentation. This study was conducted to analyse and quantify the dynamic relationship between stage and discharge and to determine to what degree currently used methods are able to account for such variability. The study was carried out for six hydrometric stations in the upper Choluteca River basin, Honduras, where a set of unusually frequent stage-discharge data are available. The temporal variability and the uncertainty of the rating curve and its parameters were analysed through a Monte Carlo (MC) analysis on a moving window of data using the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. Acceptable ranges for the values of the rating-curve parameters were determined from riverbed surveys at the six stations, and the sampling space was constrained according to those ranges, using three-dimensional alpha shapes. Temporal variability was analysed in three ways: (i) with annually updated rating curves (simulating Honduran practices), (ii) a rating curve for each time window, and (iii) a smoothed, continuous dynamic rating curve derived from the MC analysis. The temporal variability of the rating parameters translated into a high rating-curve variability. The variability manifested as increasing or decreasing trends and/or cyclic behaviour. All stations tended toward seasonal variability. The discharge at a given stage could vary by a factor of two or more. The ratio of discharge volumes estimated from dynamic and static rating curves varied between 0.5 and 1.5. 
The difference between discharge volumes derived from static and dynamic curves was largest for sub-daily ratings but stayed large also for monthly and yearly totals. The relative uncertainty was largest for low flows but it was considerable also for intermediate and large flows. The standard procedure of adjusting rating curves when calculated and observed discharge differ by more than 5% would have required continuously updated rating curves at the studied locations. We believe that these findings can be applicable to many other discharge stations around the globe.
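The GLUE procedure described above can be sketched as a Monte Carlo sweep over the standard power-law rating curve Q = a·(h − h0)^b: sample parameter sets from survey-informed ranges, score each against the gaugings, and retain the "behavioural" sets above a likelihood threshold. Everything below (true parameters, ranges, noise level, threshold) is synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stage-discharge gaugings around a known power-law rating curve
a_true, h0_true, b_true = 12.0, 0.20, 1.8
stage = rng.uniform(0.4, 2.5, size=40)                 # stage h (m)
discharge = a_true * (stage - h0_true) ** b_true       # discharge Q (m^3/s)
discharge *= rng.lognormal(0.0, 0.08, size=40)         # gauging noise

# GLUE-style Monte Carlo: sample rating parameters from plausible ranges
n = 50_000
a = rng.uniform(5.0, 20.0, n)
h0 = rng.uniform(0.0, 0.35, n)
b = rng.uniform(1.0, 3.0, n)

# Simulated ratings for every parameter set; clip keeps (h - h0) positive
sim = a[:, None] * np.clip(stage - h0[:, None], 1e-9, None) ** b[:, None]
nse = 1 - ((sim - discharge) ** 2).sum(axis=1) / ((discharge - discharge.mean()) ** 2).sum()

behavioural = nse > 0.9   # acceptability (likelihood) threshold
print(f"behavioural parameter sets: {behavioural.sum()} of {n}")
print(f"best efficiency: {nse.max():.3f}")
```

Repeating this over a moving window of gaugings, as the study does, yields a time-varying cloud of behavioural rating curves rather than one static curve.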

  10. Use of Continuous Monitors and Autosamplers to Predict Unmeasured Water-Quality Constituents in Tributaries of the Tualatin River, Oregon

    USGS Publications Warehouse

    Anderson, Chauncey W.; Rounds, Stewart A.

    2010-01-01

    Management of water quality in streams of the United States is becoming increasingly complex as regulators seek to control aquatic pollution and ecological problems through Total Maximum Daily Load programs that target reductions in the concentrations of certain constituents. Sediment, nutrients, and bacteria, for example, are constituents that regulators target for reduction nationally and in the Tualatin River basin, Oregon. These constituents require laboratory analysis of discrete samples for definitive determinations of concentrations in streams. Recent technological advances in the nearly continuous, in situ monitoring of related water-quality parameters have fostered the use of these parameters as surrogates for the labor intensive, laboratory-analyzed constituents. Although these correlative techniques have been successful in large rivers, it was unclear whether they could be applied successfully in tributaries of the Tualatin River, primarily because these streams tend to be small, have rapid hydrologic response to rainfall and high streamflow variability, and may contain unique sources of sediment, nutrients, and bacteria. This report evaluates the feasibility of developing correlative regression models for predicting dependent variables (concentrations of total suspended solids, total phosphorus, and Escherichia coli bacteria) in two Tualatin River basin streams: one draining highly urbanized land (Fanno Creek near Durham, Oregon) and one draining rural agricultural land (Dairy Creek at Highway 8 near Hillsboro, Oregon), during 2002-04. An important difference between these two streams is their response to storm runoff; Fanno Creek has a relatively rapid response due to extensive upstream impervious areas and Dairy Creek has a relatively slow response because of the large amount of undeveloped upstream land. Four other stream sites also were evaluated, but in less detail. 
Potential explanatory variables included continuously monitored streamflow (discharge), stream stage, specific conductance, turbidity, and time (to account for seasonal processes). Preliminary multiple-regression models were identified using stepwise regression and Mallows' Cp, which maximizes regression correlation coefficients and accounts for the loss of additional degrees of freedom when extra explanatory variables are used. Several data scenarios were created and evaluated for each site to assess the representativeness of existing monitoring data and autosampler-derived data, and to assess the utility of the available data to develop robust predictive models. The goodness-of-fit of candidate predictive models was assessed with diagnostic statistics from validation exercises that compared predictions against a subset of the available data. The regression modeling met with mixed success. Functional model forms that have a high likelihood of success were identified for most (but not all) dependent variables at each site, but there were limitations in the available datasets, notably the lack of samples from high flows. These limitations increase the uncertainty in the predictions of the models and suggest that the models are not yet ready for use in assessing these streams, particularly under high-flow conditions, without additional data collection and recalibration of model coefficients. Nonetheless, the results reveal opportunities to use existing resources more efficiently. Baseline conditions are well represented in the available data, and, for the most part, the models reproduced these conditions well. Future sampling might therefore focus on high flow conditions, without much loss of ability to characterize the baseline. 
Seasonal cycles, as represented by trigonometric functions of time, were not significant in the evaluated models, perhaps because the baseline conditions are well characterized in the datasets or because the other explanatory variables indirectly incorporate seasonal aspects. Multicollinearity among independent variables
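The subset-screening criterion named in this record, Mallows' Cp, is conventionally computed as SSE_p/σ̂² − n + 2p, with σ̂² the mean squared error of the full model and p the number of parameters in the candidate subset; subsets with Cp close to p are preferred. A sketch on synthetic data (the variable names are placeholders for the monitored parameters, not the report's actual models):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n = 200

# Synthetic continuous-monitor predictors and a surrogate target (e.g., TSS)
X_full = rng.normal(size=(n, 4))
names = ["discharge", "turbidity", "spec_cond", "stage"]
y = 1.0 + 2.0 * X_full[:, 0] + 1.5 * X_full[:, 1] + rng.normal(size=n)

def sse(X, y):
    """Residual sum of squares of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return ((y - X1 @ beta) ** 2).sum()

# MSE of the full model, the usual estimate of the error variance
sigma2 = sse(X_full, y) / (n - X_full.shape[1] - 1)

best = None
for k in range(1, 5):
    for cols in combinations(range(4), k):
        p = k + 1  # parameters including the intercept
        cp = sse(X_full[:, cols], y) / sigma2 - n + 2 * p
        if best is None or cp < best[0]:
            best = (cp, cols)

cp, cols = best
print("best subset by Mallows' Cp:", [names[c] for c in cols], f"(Cp = {cp:.2f})")
```

The 2p term is what "accounts for the loss of additional degrees of freedom": adding an uninformative regressor lowers SSE slightly but raises Cp on average.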

  11. Improving the use of environmental diversity as a surrogate for species representation.

    PubMed

    Albuquerque, Fabio; Beier, Paul

    2018-01-01

    The continuous p-median approach to environmental diversity (ED) is a reliable way to identify sites that efficiently represent species. A recently developed maximum dispersion (maxdisp) approach to ED is computationally simpler, does not require the user to reduce environmental space to two dimensions, and performed better than continuous p-median for datasets of South African animals. We tested whether maxdisp performs as well as continuous p-median for 12 datasets that included plants and spanned additional continents, and whether particular types of environmental variables produced consistently better models of ED. We selected 12 species inventories and atlases to span a broad range of taxa (plants, birds, mammals, reptiles, and amphibians), spatial extents, and resolutions. For each dataset, we used continuous p-median ED and maxdisp ED in combination with five sets of environmental variables (five combinations of temperature, precipitation, insolation, NDVI, and topographic variables) to select environmentally diverse sites. We used the species accumulation index (SAI) to evaluate the efficiency of ED in representing species for each approach and set of environmental variables. Maxdisp ED represented species better than continuous p-median ED in five of 12 biodiversity datasets, and performed about the same for the other seven biodiversity datasets. Efficiency of ED also varied with the type of variables used to define environmental space, but no particular combination of variables consistently performed best. We conclude that maxdisp ED performs at least as well as continuous p-median ED, and has the advantage of faster and simpler computation. Surprisingly, using all 38 environmental variables was not consistently better than using subsets of variables, nor did any subset emerge as consistently best or worst; further work is needed to identify the best variables to define environmental space. 
Results can help ecologists and conservationists select sites for species representation and assist in conservation planning.
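The abstract does not spell out the maxdisp algorithm. One common way to realize a maximum-dispersion site selection is a greedy farthest-point rule in standardized environmental space, sketched here on synthetic data; the greedy rule and all data are our assumptions, not necessarily the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic environmental data: 500 candidate sites x 5 variables
# (stand-ins for temperature, precipitation, insolation, NDVI, topography)
env = rng.normal(size=(500, 5))
env = (env - env.mean(axis=0)) / env.std(axis=0)   # standardise each variable

def maxdisp_select(points, k):
    """Greedy maximum dispersion: start from the point farthest from the
    centroid, then repeatedly add the point whose minimum distance to the
    already-selected set is largest."""
    chosen = [int(np.argmax(np.linalg.norm(points - points.mean(axis=0), axis=1)))]
    while len(chosen) < k:
        d = np.linalg.norm(points[:, None, :] - points[chosen][None, :, :], axis=2)
        d_min = d.min(axis=1)          # distance of each site to its nearest chosen site
        d_min[chosen] = -np.inf        # never re-select a chosen site
        chosen.append(int(np.argmax(d_min)))
    return chosen

sites = maxdisp_select(env, 20)
print("selected sites:", sites[:5], "...")
```

Selecting sites that are mutually far apart in environmental space is what makes the chosen set environmentally diverse, the property the SAI then scores against species records.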

  12. Genetic moderators and psychiatric mediators of the link between sexual abuse and alcohol dependence.

    PubMed

    Copeland, William E; Magnusson, Asa; Göransson, Mona; Heilig, Markus A

    2011-06-01

    This study used a case-control female sample to test psychiatric mediators and genetic moderators of the effect of sexual abuse on later alcohol dependence. The study also tested differences between alcohol dependent women with or without a history of sexual abuse on variables that might affect treatment planning. A case-control design compared 192 treatment-seeking alcohol dependent women with 177 healthy population controls. All participants were assessed for alcohol-related behaviors, sexual abuse history, psychiatric problems, and personality functioning. Markers were genotyped in the CRHR1, MAO-A and OPRM1 genes. The association of sexual abuse with alcohol dependence was limited to the most severe category of sexual abuse involving anal or vaginal penetration. Of the five psychiatric disorders tested, anxiety, anorexia nervosa, and bulimia met criteria as potential mediators of the abuse-alcohol dependence association. Severe sexual abuse continued to have an independent effect on alcohol dependence status even after accounting for these potential mediators. None of the candidate genetic markers moderated the association between sexual abuse and alcohol dependence. Of alcohol dependent participants, those with a history of severe abuse rated higher on alcoholism severity, and psychiatric comorbidities. Sexual abuse is associated with later alcohol problems directly as well as through its effect on psychiatric problems. Treatment-seeking alcohol dependent women with a history of abuse have distinct features as compared to other alcohol dependent women. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  13. Genetic Moderators and Psychiatric Mediators of the link between Sexual Abuse and Alcohol Dependence

    PubMed Central

    Copeland, William E.; Magnusson, Åsa; Göransson, Mona; Heilig, Markus A.

    2011-01-01

    Background/Objective: This study used a case-control female sample to test psychiatric mediators and genetic moderators of the effect of sexual abuse on later alcohol dependence. The study also tested differences between alcohol dependent women with or without a history of sexual abuse on variables that might affect treatment planning. Methods: A case-control design compared 192 treatment-seeking alcohol dependent women with 177 healthy population controls. All participants were assessed for alcohol-related behaviors, sexual abuse history, psychiatric problems, and personality functioning. Markers were genotyped in the CRHR1, MAO-A and OPRM1 genes. Results: The association of sexual abuse with alcohol dependence was limited to the most severe category of sexual abuse involving anal or vaginal penetration. Of the five psychiatric disorders tested, anxiety, anorexia nervosa, and bulimia met criteria as potential mediators of the abuse-alcohol dependence association. Severe sexual abuse continued to have an independent effect on alcohol dependence status even after accounting for these potential mediators. None of the candidate genetic markers moderated the association between sexual abuse and alcohol dependence. Of alcohol dependent participants, those with a history of severe abuse rated higher on alcoholism severity, and psychiatric comorbidities. Conclusion: Sexual abuse is associated with later alcohol problems directly as well as through its effect on psychiatric problems. Treatment-seeking alcohol dependent women with a history of abuse have distinct features as compared to other alcohol dependent women. PMID:21193270

  14. Number versus Continuous Quantity in Numerosity Judgments by Fish

    ERIC Educational Resources Information Center

    Agrillo, Christian; Piffer, Laura; Bisazza, Angelo

    2011-01-01

    In quantity discrimination tasks, adults, infants and animals have been sometimes observed to process number only after all continuous variables, such as area or density, have been controlled for. This has been taken as evidence that processing number may be more cognitively demanding than processing continuous variables. We tested this hypothesis…

  15. Systems effects on family planning innovativeness.

    PubMed

    Lee, S B

    1983-12-01

    Data from Korea were used to explore the importance of community level variables in explaining family planning adoption at the individual level. An open system concept was applied, assuming that individual family planning behavior is influenced by both environmental and individual factors. The environmental factors were measured at the village level and designated as community characteristics. The dimension of communication network variables was introduced. Each individual was characterized in terms of the degree of her involvement in family planning communication with others in her village. It was assumed that the nature of the communication network linking individuals with each other affects family planning adoption at the individual level. Specific objectives were to determine 1) the relative importance of the specific independent variables in explaining family planning adoption and 2) the relative importance of the community level variables in comparison with the individual level variables in explaining family planning adoption at the individual level. The data were originally gathered in a 1973 research project on Korea's mothers' clubs. 1047 respondents were interviewed, comprising all married women in 25 sample villages having mothers' clubs. The dependent variable was family planning adoption behavior, defined as current use of any of the modern methods of family planning. The independent variables were defined at 3 levels: individual, community, and at a level intermediate between them involving communication links between individuals. More of the individual level independent variables were significantly correlated with the dependent variables than the community level variables. Among those variables with statistically significant correlations, the correlation coefficients were consistently higher for the individual level than for the community level variables. 
More of the variance in the dependent variable was explained by individual level than by community level variables. Community level variables accounted for only about 2.5% of the total variance in the dependent variable, in marked contrast to the result showing individual level variables accounting for as much as 19% of the total variance. When both individual and community level variables were entered into a multiple correlation analysis, a multiple correlation coefficient of .4714 was obtained; together they explained about 20% of the total variance. The 2 communication network variables--connectedness and integrativeness--were correlated with the dependent variable at much higher levels than most of the individual or community level variables. Connectedness accounted for the greatest amount of the total variance. The communication network variables as a group explained as much of the total variance in the dependent variable as the individual level variables and far more than the community level variables.
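The variance-partition logic in this record (individual-level predictors alone, community-level predictors alone, then both combined) can be sketched with ordinary least squares. The data and coefficients below are synthetic, chosen only to mimic the reported pattern of individual-level dominance; the R² values printed are from the synthetic model, not the Korean survey:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1047  # respondents, matching the survey's sample size (data are synthetic)

individual = rng.normal(size=(n, 3))   # hypothetical individual-level predictors
community = rng.normal(size=(n, 2))    # hypothetical village-level predictors

# Outcome driven mostly by individual-level factors, as the record reports
y = (individual @ np.array([0.35, 0.20, 0.10])
     + community @ np.array([0.10, 0.05])
     + rng.normal(size=n))

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1 - ((y - X1 @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()

r2_ind = r_squared(individual, y)
r2_com = r_squared(community, y)
r2_all = r_squared(np.column_stack([individual, community]), y)
print(f"individual-level only: R^2 = {r2_ind:.3f}")
print(f"community-level only:  R^2 = {r2_com:.3f}")
print(f"both levels combined:  R^2 = {r2_all:.3f}")
```

As in the study, the combined model adds little over the individual-level model when the community-level contribution is small (in the record, roughly 19% rising to about 20%, i.e., the square of the multiple correlation .4714).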

  16. Estimating Selected Streamflow Statistics Representative of 1930-2002 in West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.

    2008-01-01

    Regional equations and procedures were developed for estimating 1-, 3-, 7-, 14-, and 30-day 2-year; 1-, 3-, 7-, 14-, and 30-day 5-year; and 1-, 3-, 7-, 14-, and 30-day 10-year hydrologically based low-flow frequency values for unregulated streams in West Virginia. Regional equations and procedures also were developed for estimating the 1-day, 3-year and 4-day, 3-year biologically based low-flow frequency values; the U.S. Environmental Protection Agency harmonic-mean flows; and the 10-, 25-, 50-, 75-, and 90-percent flow-duration values. Regional equations were developed by ordinary least-squares regression, with statistics from 117 U.S. Geological Survey continuous streamflow-gaging stations as dependent variables and basin characteristics as independent variables. Equations for three regions in West Virginia - North, South-Central, and Eastern Panhandle - were determined. Drainage area, precipitation, and longitude of the basin centroid are significant independent variables in one or more of the equations. Estimating procedures are presented for determining statistics at a gaging station, a partial-record station, and an ungaged location. Examples of some estimating procedures are presented.
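Regional equations of this kind are typically fit by ordinary least squares in log space, so that the fitted equation becomes a power law in basin characteristics. A sketch on synthetic basin data; the coefficients, noise level, and variables below are illustrative, not the report's published equations:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 117  # gaged basins, matching the study's station count (data are synthetic)

drainage_area = rng.lognormal(4.0, 1.0, n)      # mi^2
precip = rng.normal(45.0, 8.0, n)               # in/yr

# Synthetic 7-day, 10-year low flow following a power-law regional model
q7_10 = 0.01 * drainage_area ** 1.1 * np.exp(0.02 * (precip - 45.0))
q7_10 *= rng.lognormal(0.0, 0.3, n)             # basin-to-basin scatter

# Ordinary least squares in log space, the usual form for regional equations
X = np.column_stack([np.ones(n), np.log(drainage_area), precip])
beta, *_ = np.linalg.lstsq(X, np.log(q7_10), rcond=None)
print(f"log(Q7,10) = {beta[0]:.2f} + {beta[1]:.2f}*log(DA) + {beta[2]:.3f}*P")
```

Exponentiating the fitted relation gives the familiar form Q = c · DA^b1 · e^(b2·P), which is how such equations are applied at ungaged locations.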

  17. Quantum simulation of quantum field theory using continuous variables

    DOE PAGES

    Marshall, Kevin; Pooser, Raphael C.; Siopsis, George; ...

    2015-12-14

    Much progress has been made in the field of quantum computing using continuous variables over the last couple of years. This includes the generation of extremely large entangled cluster states (10,000 modes, in fact) as well as a fault-tolerant architecture. This has led to the point that continuous-variable quantum computing can indeed be thought of as a viable alternative for universal quantum computing. With that in mind, we present a new algorithm for continuous-variable quantum computers which gives an exponential speedup over the best known classical methods. Specifically, this relates to efficiently calculating the scattering amplitudes in scalar bosonic quantum field theory, a problem that is known to be hard using a classical computer. Finally, we give an experimental implementation based on cluster states that is feasible with today's technology.

  18. Quantum simulation of quantum field theory using continuous variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, Kevin; Pooser, Raphael C.; Siopsis, George

    Much progress has been made in the field of quantum computing using continuous variables over the last couple of years. This includes the generation of extremely large entangled cluster states (10,000 modes, in fact) as well as a fault-tolerant architecture. This has led to the point that continuous-variable quantum computing can indeed be thought of as a viable alternative for universal quantum computing. With that in mind, we present a new algorithm for continuous-variable quantum computers which gives an exponential speedup over the best known classical methods. Specifically, this relates to efficiently calculating the scattering amplitudes in scalar bosonic quantum field theory, a problem that is known to be hard using a classical computer. Finally, we give an experimental implementation based on cluster states that is feasible with today's technology.

  19. A new description of Earth's wobble modes using Clairaut coordinates: 1. Theory

    NASA Astrophysics Data System (ADS)

    Rochester, M. G.; Crossley, D. J.; Zhang, Y. L.

    2014-09-01

    This paper presents a novel mathematical reformulation of the theory of the free wobble/nutation of an axisymmetric reference earth model in hydrostatic equilibrium, using the linear momentum description. The new features of this work consist in the use of (i) Clairaut coordinates (rather than spherical polars), (ii) standard spherical harmonics (rather than generalized spherical surface harmonics), (iii) linear operators (rather than J-square symbols) to represent the effects of rotational and ellipticity coupling between dependent variables of different harmonic degree and (iv) a set of dependent variables all of which are continuous across material boundaries. The resulting infinite system of coupled ordinary differential equations is given explicitly, for an elastic solid mantle and inner core, an inviscid outer core and no magnetic field. The formulation is done to second order in the Earth's ellipticity. To this order it is shown that for wobble modes (in which the lowest harmonic in the displacement field is degree 1 toroidal, with azimuthal order m = ±1), it is sufficient to truncate the chain of coupled displacement fields at the toroidal harmonic of degree 5 in the solid parts of the earth model. In the liquid core, however, the harmonic expansion of displacement can in principle continue to indefinitely high degree at this order of accuracy. The full equations are shown to yield correct results in three simple cases amenable to analytic solution: a general earth model in rigid rotation, the tiltover mode in a homogeneous solid earth model and the tiltover and Chandler periods for an incompressible homogeneous solid earth model. Numerical results, from programmes based on this formulation, are presented in part II of this paper.

  20. Effects of fatigue on kinetic and kinematic variables during a 60-second repeated jumps test.

    PubMed

    McNeal, Jeni R; Sands, William A; Stone, Michael H

    2010-06-01

    The aim of this study was to investigate the effects of a maximal repeated-jumps task on force production, muscle activation and kinematics, and to determine if changes in performance were dependent on gender. Eleven male and nine female athletes performed continuous countermovement jumps for 60 s on a force platform while muscle activation was assessed using surface electromyography. Performances were videotaped and digitized (60 Hz). Data were averaged across three jumps in 10-s intervals from the initial jump to the final 10 s of the test. No interaction between time and gender was evident for any variable; therefore, all results represent data collapsed across gender. Preactivation magnitude decreased across time periods for anterior tibialis (AT, P < .001), gastrocnemius (GAS, P < .001) and biceps femoris (BF, P = .03), but not for vastus lateralis (VL, P = .16). Muscle activation during ground contact did not change across time for BF; however, VL, GAS, and AT showed significant reductions (all P < .001). Peak force was reduced at 40 s compared with the initial jumps, and continued to be reduced at 50 and 60 s (all P < .05). The time from peak force to takeoff was greater at 50 and 60 s compared with the initial jumps (P < .05). Both knee flexion and ankle dorsiflexion were reduced across time (both P < .001), whereas no change in relative hip angle was evident (P = .10). Absolute angle of the trunk increased with time (P < .001), whereas the absolute angle of the shank decreased (P < .001). In response to the fatiguing task, subjects reduced muscle activation and force production and altered jumping technique; however, these changes were not dependent on gender.

  1. Evolution of the Deformation Behavior of Sn-Rich Solders during Cyclic Fatigue

    NASA Astrophysics Data System (ADS)

    Wentlent, Luke Arthur

    Continuous developments in the electronics industry have provided a critical need for a quantitative, fundamental understanding of the behavior of SnAgCu (SAC) solders in both isothermal and thermal fatigue conditions. This study examines the damage behavior of Sn-based solders in a constant amplitude and variable amplitude environment. In addition, damage properties are correlated with crystal orientation and slip behavior. Select solder joints were continuously characterized and tested repeatedly in order to eliminate the joint to joint variation due to the anisotropy of beta-Sn. Characterization was partitioned into three different categories: effective properties and slip behavior, creep mechanisms and crystal morphology development, and atomic behavior and evolution. Active slip systems were correlated with measured properties. Characterization of the mechanical behavior was performed by the calculation and extrapolation of the elastic modulus, work, effective stiffness, Schmid factors, and time-dependent plasticity (creep). Electron microscopy based characterization methods included Scanning Electron Microscopy (SEM), Electron Backscattering Diffraction (EBSD), and Transmission Electron Microscopy (TEM). Testing showed a clear evolution of the steady-state creep mechanism when the cycling amplitudes were varied, from dislocation controlled to diffusion controlled creep. Dislocation behavior was examined and shown to evolve differently in single amplitude vs. variable amplitude testing. Finally, the mechanism of the recrystallization behavior of the beta-Sn was observed. This work fills a gap in the literature, providing a systematic study which identifies how the damage behavior in Sn-alloys depends upon the previous damage. A link is made between the observed creep behavior and the dislocation observations, providing a unified picture. 
Information developed in this work lays a stepping stone to future fundamental analyses as well as clarifying aspects of the mechanistic behavior of Sn and Sn-based alloys.

  2. A 1.26 μW Cytomimetic IC Emulating Complex Nonlinear Mammalian Cell Cycle Dynamics: Synthesis, Simulation and Proof-of-Concept Measured Results.

    PubMed

    Houssein, Alexandros; Papadimitriou, Konstantinos I; Drakakis, Emmanuel M

    2015-08-01

    Cytomimetic circuits represent a novel, ultra low-power, continuous-time, continuous-value class of circuits, capable of mapping on silicon cellular and molecular dynamics modelled by means of nonlinear ordinary differential equations (ODEs). Such monolithic circuits are in principle able to emulate on chip single or multiple cell operations in a highly parallel fashion. Cytomimetic topologies can be synthesized by adopting the Nonlinear Bernoulli Cell Formalism (NBCF), a mathematical framework that exploits the striking similarities between the equations describing weakly-inverted Metal-Oxide Semiconductor (MOS) devices and coupled nonlinear ODEs, typically appearing in models of naturally encountered biochemical systems. The NBCF maps biological state variables onto strictly positive subthreshold MOS circuit currents. This paper presents the synthesis, the simulation and proof-of-concept chip results corresponding to the emulation of a complex cellular network mechanism, the skeleton model for the network of Cyclin-dependent Kinases (CdKs) driving the mammalian cell cycle. This five-variable nonlinear biological model, when appropriate model parameter values are assigned, can exhibit multiple oscillatory behaviors, varying from simple periodic oscillations to complex oscillations such as quasi-periodicity and chaos. The validity of our approach is verified by simulated results with realistic process parameters from the commercially available AMS 0.35 μm technology and by chip measurements. The fabricated chip occupies an area of 2.27 mm² and consumes a power of 1.26 μW from a power supply of 3 V. The presented cytomimetic topology follows closely the behavior of its biological counterpart, exhibiting similar time-dependent solutions of the Cdk complexes, the transcription factors and the proteins.

  3. Patterns in the species-environment relationship depend on both scale and choice of response variables

    Treesearch

    Samuel A. Cushman; Kevin McGarigal

    2004-01-01

    Multi-scale investigations of species/environment relationships are an important tool in ecological research. The scale at which independent and dependent variables are measured, and how they are coded for analysis, can strongly influence the relationships that are discovered. However, little is known about how the coding of the dependent variable set influences...

  4. Increasing body mass index z-score is continuously associated with complications of overweight in children, even in the healthy weight range.

    PubMed

    Bell, Lana M; Byrne, Sue; Thompson, Alisha; Ratnam, Nirubasini; Blair, Eve; Bulsara, Max; Jones, Timothy W; Davis, Elizabeth A

    2007-02-01

    Overweight/obesity in children is increasing. Incidence data for medical complications use arbitrary cutoff values for categories of overweight and obesity. Continuous relationships are seldom reported. The objective of this study is to report relationships of child body mass index (BMI) z-score as a continuous variable with the medical complications of overweight. This study is a part of the larger, prospective cohort Growth and Development Study. Children were recruited from the community through randomly selected primary schools. Overweight children seeking treatment were recruited through tertiary centers. Children aged 6-13 yr were community-recruited normal weight (n = 73), community-recruited overweight (n = 53), and overweight treatment-seeking (n = 51). Medical history, family history, and symptoms of complications of overweight were collected by interview, and physical examination was performed. Investigations included oral glucose tolerance tests, fasting lipids, and liver function tests. Adjusted regression was used to model each complication of obesity with age- and sex-specific child BMI z-scores entered as a continuous independent variable. Adjusted logistic regression showed the proportion of children with musculoskeletal pain, obstructive sleep apnea symptoms, headaches, depression, anxiety, bullying, and acanthosis nigricans increased with child BMI z-score. Adjusted linear regression showed BMI z-score was significantly related to systolic and diastolic blood pressure, insulin during oral glucose tolerance test, total cholesterol, high-density lipoprotein, triglycerides, and alanine aminotransferase. Child's BMI z-score is independently related to complications of overweight and obesity in a linear or curvilinear fashion. Children's risks of most complications increase across the entire range of BMI values and are not defined by thresholds.
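    The contrast the abstract draws between arbitrary cut-offs and a continuous BMI z-score can be sketched with simulated data (an illustration under assumed effect sizes, not the study data): entering the z-score directly preserves the gradient across the whole BMI range, whereas dichotomizing discards it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated illustration (not study data): systolic BP rises
# linearly with BMI z-score across its whole range.
n = 200
bmi_z = rng.normal(0.0, 1.0, n)              # continuous predictor
sbp = 105 + 4.0 * bmi_z + rng.normal(0, 5, n)

# Continuous model: regress SBP on BMI z-score directly.
X = np.column_stack([np.ones(n), bmi_z])
beta, *_ = np.linalg.lstsq(X, sbp, rcond=None)
print(f"slope per 1 SD of BMI z-score: {beta[1]:.2f} mmHg")

# Categorized alternative (threshold z >= 1.04, roughly the 85th
# centile) collapses the within-category gradient the study reports.
overweight = (bmi_z >= 1.04).astype(float)
Xc = np.column_stack([np.ones(n), overweight])
beta_c, *_ = np.linalg.lstsq(Xc, sbp, rcond=None)
print(f"overweight-vs-not difference: {beta_c[1]:.2f} mmHg")
```

    The slope from the continuous fit applies at every BMI value, which matches the study's conclusion that risk is not defined by thresholds.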

  5. Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)

    NASA Astrophysics Data System (ADS)

    Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi

    2017-06-01

    Hurdle negative binomial regression is a method for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach: the first part, the zero hurdle model, models whether the dependent variable equals zero, while the second part, a zero-truncated negative binomial model, models the positive (non-negative integer) values. The discrete dependent variable in such cases is censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimators of hurdle negative binomial regression for a right-censored dependent variable, using Maximum Likelihood Estimation (MLE). The hurdle negative binomial regression model for a right-censored dependent variable is applied to the number of neonatorum tetanus cases in Indonesia, count data that contain zero values in some observations and varying values in others. This study also aims to obtain the parameter estimators and test statistics of the censored hurdle negative binomial model. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
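    The two-part likelihood described above can be sketched as follows. This is a hedged illustration with hypothetical counts and fixed (not estimated) parameters, and it omits the paper's right-censoring extension: a Bernoulli part for zero vs. positive, plus a zero-truncated negative binomial part for the positives.

```python
import numpy as np
from scipy.stats import nbinom

# Hypothetical count data with excess zeros (not the tetanus data).
y = np.array([0, 0, 0, 0, 1, 1, 2, 3, 5, 0, 0, 7, 2, 0, 1])

# Part 1 (hurdle): Bernoulli model for zero vs. positive counts,
# here an intercept-only estimate of P(Y > 0).
pi = np.mean(y > 0)

# Part 2: negative binomial truncated at zero for the positives,
# with illustrative (not estimated) parameters r and p.
r, p = 1.0, 0.4
pos = y[y > 0]
trunc_logpmf = nbinom.logpmf(pos, r, p) - np.log1p(-nbinom.pmf(0, r, p))

# Hurdle log-likelihood = binary part + zero-truncated count part.
loglik = (np.sum(y == 0) * np.log(1 - pi)
          + np.sum(y > 0) * np.log(pi)
          + trunc_logpmf.sum())
print(f"hurdle NB log-likelihood: {loglik:.2f}")
```

    In a full MLE fit, both parts would carry regression coefficients and the truncated part would additionally be adjusted for right-censored observations, as in the paper.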

  6. Investigation of continuous effect modifiers in a meta-analysis on higher versus lower PEEP in patients requiring mechanical ventilation--protocol of the ICEM study.

    PubMed

    Kasenda, Benjamin; Sauerbrei, Willi; Royston, Patrick; Briel, Matthias

    2014-05-20

    Categorizing an inherently continuous predictor in prognostic analyses raises several critical methodological issues: dependence of the statistical significance on the number and position of the chosen cut-point(s), loss of statistical power, and faulty interpretation of the results if a non-linear association is incorrectly assumed to be linear. This also applies to a therapeutic context where investigators of randomized clinical trials (RCTs) are interested in interactions between treatment assignment and one or more continuous predictors. Our goal is to apply the multivariable fractional polynomial interaction (MFPI) approach to investigate interactions between continuous patient baseline variables and the allocated treatment in an individual patient data meta-analysis of three RCTs (N = 2,299) from the intensive care field. For each study, MFPI will provide a continuous treatment effect function. Functions from each of the three studies will be averaged by a novel meta-analysis approach for functions. We will plot treatment effect functions separately for each study and also the averaged function. The averaged function with a related confidence interval will provide a suitable basis to assess whether a continuous patient characteristic modifies the treatment comparison and may be relevant for clinical decision-making. The compared interventions will be a higher or lower positive end-expiratory pressure (PEEP) ventilation strategy in patients requiring mechanical ventilation. The continuous baseline variables body mass index, PaO2/FiO2, respiratory compliance, and oxygenation index will be the investigated potential effect modifiers. Clinical outcomes for this analysis will be in-hospital mortality, time to death, time to unassisted breathing, and pneumothorax. This project will be the first meta-analysis to combine continuous treatment effect functions derived by the MFPI procedure separately in each of several RCTs. 
Such an approach requires individual patient data (IPD). They are available from an earlier IPD meta-analysis using different methods for analysis. This new analysis strategy allows assessing whether treatment effects interact with continuous baseline patient characteristics and avoids categorization-based subgroup analyses. These interaction analyses of the present study will be exploratory in nature. However, they may help to foster future research using the MFPI approach to improve interaction analyses of continuous predictors in RCTs and IPD meta-analyses. This study is registered in PROSPERO (CRD42012003129).

  7. Intracranial EEG fluctuates over months after implanting electrodes in human brain

    NASA Astrophysics Data System (ADS)

    Ung, Hoameng; Baldassano, Steven N.; Bink, Hank; Krieger, Abba M.; Williams, Shawniqua; Vitale, Flavia; Wu, Chengyuan; Freestone, Dean; Nurse, Ewan; Leyde, Kent; Davis, Kathryn A.; Cook, Mark; Litt, Brian

    2017-10-01

    Objective. Implanting subdural and penetrating electrodes in the brain causes acute trauma and inflammation that affect intracranial electroencephalographic (iEEG) recordings. This behavior and its potential impact on clinical decision-making and algorithms for implanted devices have not been assessed in detail. In this study we aim to characterize the temporal and spatial variability of continuous, prolonged human iEEG recordings. Approach. Intracranial electroencephalography from 15 patients with drug-refractory epilepsy, each implanted with 16 subdural electrodes and continuously monitored for an average of 18 months, was included in this study. Time and spectral domain features were computed each day for each channel for the duration of each patient’s recording. Metrics to capture post-implantation feature changes and inflexion points were computed on group and individual levels. A linear mixed model was used to characterize transient group-level changes in feature values post-implantation and independent linear models were used to describe individual variability. Main results. A significant decline in features important to seizure detection and prediction algorithms (mean line length, energy, and half-wave), as well as mean power in the Berger and high gamma bands, was observed in many patients over 100 d following implantation. In addition, spatial variability across electrodes declines post-implantation following a similar timeframe. All selected features decreased by 14-50% in the initial 75 d of recording on the group level, and at least one feature demonstrated this pattern in 13 of the 15 patients. Our findings indicate that iEEG signal features demonstrate increased variability following implantation, most notably in the weeks immediately post-implant. Significance. 
These findings suggest that conclusions drawn from iEEG, both clinically and for research, should account for spatiotemporal signal variability and that properly assessing the iEEG in patients, depending upon the application, may require extended monitoring.

  8. Ordinary Least Squares Estimation of Parameters in Exploratory Factor Analysis with Ordinal Data

    ERIC Educational Resources Information Center

    Lee, Chun-Ting; Zhang, Guangjian; Edwards, Michael C.

    2012-01-01

    Exploratory factor analysis (EFA) is often conducted with ordinal data (e.g., items with 5-point responses) in the social and behavioral sciences. These ordinal variables are often treated as if they were continuous in practice. An alternative strategy is to assume that a normally distributed continuous variable underlies each ordinal variable.…

  9. Pathology in Continuous Infusion Studies in Rodents and Non-Rodents and ITO (Infusion Technology Organisation)-Recommended Protocol for Tissue Sampling and Terminology for Procedure-Related Lesions

    PubMed Central

    Weber, Klaus; Mowat, Vasanthi; Hartmann, Elke; Razinger, Tanja; Chevalier, Hans-Jörg; Blumbach, Kai; Green, Owen P.; Kaiser, Stefan; Corney, Stephen; Jackson, Ailsa; Casadesus, Agustin

    2011-01-01

    Many variables may affect the outcome of continuous infusion studies. The results largely depend on the experience of the laboratory performing these studies, the technical equipment used, the choice of blood vessels and hence the surgical technique, as well as the quality of the pathological evaluation. The latter is of major interest because, in most cases, the pathologist is not involved until necropsy and thus does not deal with the complicated surgical or in-life procedures of this study type. The technique of tissue sampling during necropsy and the histology processing procedures may influence the tissues presented for evaluation, and hence may lead the pathologist to misinterpretation. Therefore, ITO proposes a tissue sampling procedure and a standard nomenclature for pathological lesions for all sites and tissues in contact with the port-access and/or catheter system. PMID:22272050

  10. Predator Persistence through Variability of Resource Productivity in Tritrophic Systems.

    PubMed

    Soudijn, Floor H; de Roos, André M

    2017-12-01

    The trophic structure of species communities depends on the energy transfer between trophic levels. Primary productivity varies strongly through time, challenging the persistence of species at higher trophic levels. Yet resource variability has mostly been studied in systems with only one or two trophic levels. We test the effect of variability in resource productivity in a tritrophic model system including a resource, a size-structured consumer, and a size-specific predator. The model complies with fundamental principles of mass conservation and the body-size dependence of individual-level energetics and predator-prey interactions. Surprisingly, we find that resource variability may promote predator persistence. The positive effect of variability on the predator arises through periods with starvation mortality of juvenile prey, which reduces the intraspecific competition in the prey population. With increasing variability in productivity and starvation mortality in the juvenile prey, the prey availability increases in the size range preferred by the predator. The positive effect of prey mortality on the trophic transfer efficiency depends on the biologically realistic consideration of body size-dependent and food-dependent functions for growth and reproduction in our model. Our findings show that variability may promote the trophic transfer efficiency, indicating that environmental variability may sustain species at higher trophic levels in natural ecosystems.

  11. Characterizing the continuously acquired cardiovascular time series during hemodialysis, using median hybrid filter preprocessing noise reduction.

    PubMed

    Wilson, Scott; Bowyer, Andrea; Harrap, Stephen B

    2015-01-01

    The clinical characterization of cardiovascular dynamics during hemodialysis (HD) has important pathophysiological implications from diagnostic, cardiovascular risk assessment, and treatment efficacy perspectives. Currently the diagnosis of significant intradialytic systolic blood pressure (SBP) changes among HD patients is imprecise and opportunistic, reliant upon the presence of hypotensive symptoms in conjunction with coincident but isolated noninvasive brachial cuff blood pressure (NIBP) readings. Considering hemodynamic variables as a time series makes a continuous recording approach more desirable than intermittent measures; however, in the clinical environment, the data signal is susceptible to corruption by both impulsive and Gaussian-type noise. Signal preprocessing is an attractive solution to this problem. Prospectively collected continuous noninvasive SBP data over the short-break intradialytic period in ten patients were preprocessed using a novel median hybrid filter (MHF) algorithm and compared with 50 time-coincident pairs of intradialytic NIBP measures from routine HD practice. The median hybrid preprocessing technique for continuously acquired cardiovascular data yielded a dynamic recording without significant noise and artifact, suitable for high-level profiling of time-dependent SBP behavior. Signal accuracy is highly comparable with standard NIBP measurement, with the added clinical benefit of dynamic real-time hemodynamic information.
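    The abstract does not specify the authors' exact MHF algorithm; a minimal sketch of a classic finite median hybrid (FMH) filter illustrates the general idea, which is to take the median of the current sample and the means of the flanking half-windows, rejecting impulsive spikes while tracking slow trends (all window choices here are assumptions).

```python
import numpy as np

def fmh_filter(x, half=3):
    """Finite median hybrid filter sketch: replace each sample by the
    median of (left-window mean, current sample, right-window mean).
    Impulses are rejected; slowly varying trends pass through."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    for i in range(half, len(x) - half):
        left = x[i - half:i].mean()        # mean of the left half-window
        right = x[i + 1:i + 1 + half].mean()  # mean of the right half-window
        y[i] = np.median([left, x[i], right])
    return y

# Slowly declining hypothetical "SBP" trace with one impulsive artifact.
sig = np.linspace(120.0, 110.0, 21)
sig[10] += 40.0                            # impulse noise
clean = fmh_filter(sig)
print(clean[10])                           # impulse replaced by local trend
```

    Unlike a plain moving average, the median step keeps the impulse out of the output entirely instead of smearing it across neighboring samples, which is why hybrid median filters suit signals corrupted by mixed impulsive and Gaussian noise.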

  12. Discrete event simulation tool for analysis of qualitative models of continuous processing systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T. (Inventor); Basham, Bryan D. (Inventor); Harris, Richard A. (Inventor)

    1990-01-01

    An artificial intelligence design and qualitative modeling tool is disclosed for creating computer models and simulating continuous activities, functions, and/or behavior using developed discrete event techniques. Conveniently, the tool is organized in four modules: a library design module, a model construction module, a simulation module, and an experimentation and analysis module. The library design module supports the building of library knowledge including component classes and elements pertinent to a particular domain of continuous activities, functions, and behavior being modeled. The continuous behavior is defined discretely with respect to invocation statements, effect statements, and time delays. The functionality of the components is defined in terms of variable cluster instances, independent processes, and modes, further defined in terms of mode transition processes and mode dependent processes. Model construction utilizes the hierarchy of libraries and connects them with appropriate relations. The simulation executes a specialized initialization routine and executes events in a manner that includes selective inherency of characteristics through a time and event schema until the event queue in the simulator is emptied. The experimentation and analysis module supports analysis through the generation of appropriate log files and graphics developments and includes the ability to compare log files.

  13. Unitary Response Regression Models

    ERIC Educational Resources Information Center

    Lipovetsky, S.

    2007-01-01

    The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…

  14. The turbulent mean-flow, Reynolds-stress, and heat flux equations in mass-averaged dependent variables

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.; Rose, W. C.

    1973-01-01

    The time-dependent, turbulent mean-flow, Reynolds stress, and heat flux equations in mass-averaged dependent variables are presented. These equations are given in conservative form for both generalized orthogonal and axisymmetric coordinates. For the case of small viscosity and thermal conductivity fluctuations, these equations are considerably simpler than the general Reynolds system of dependent variables for a compressible fluid and permit a more direct extension of low speed turbulence modeling to computer codes describing high speed turbulence fields.
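    The mass-averaged dependent variables the report refers to rest on a standard definition, sketched here for a generic flow variable f and density ρ (notation assumed, not taken from the report; overbars denote conventional Reynolds averages):

```latex
% Favre (mass-weighted) average, the decomposition it induces,
% and the key property that removes explicit density-fluctuation
% correlations from the averaged equations:
\tilde{f} \equiv \frac{\overline{\rho f}}{\overline{\rho}},
\qquad
f = \tilde{f} + f'',
\qquad
\overline{\rho f''} = 0 .
```

    Because the density-weighted fluctuation averages to zero, the compressible mean-flow, Reynolds-stress, and heat-flux equations written in these variables avoid many of the density-fluctuation terms of the conventional Reynolds decomposition, which is why the mass-averaged system is considerably simpler, as the abstract notes.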

  15. Computer vision syndrome among computer office workers in a developing country: an evaluation of prevalence and risk factors.

    PubMed

    Ranasinghe, P; Wathurapatha, W S; Perera, Y S; Lamabadusuriya, D A; Kulatunga, S; Jayawardana, N; Katulanda, P

    2016-03-09

    Computer vision syndrome (CVS) is a group of visual symptoms experienced in relation to the use of computers. Nearly 60 million people suffer from CVS globally, resulting in reduced productivity at work and reduced quality of life of the computer worker. The present study aims to describe the prevalence of CVS and its associated factors among a nationally-representative sample of Sri Lankan computer workers. Two thousand five hundred computer office workers were invited for the study from all nine provinces of Sri Lanka between May and December 2009. A self-administered questionnaire was used to collect socio-demographic data, symptoms of CVS and its associated factors. A binary logistic regression analysis was performed in all patients with 'presence of CVS' as the dichotomous dependent variable and age, gender, duration of occupation, daily computer usage, pre-existing eye disease, not using a visual display terminal (VDT) filter, adjusting brightness of screen, use of contact lenses, angle of gaze and ergonomic practices knowledge as the continuous/dichotomous independent variables. A similar binary logistic regression analysis was performed in all patients with 'severity of CVS' as the dichotomous dependent variable and other continuous/dichotomous independent variables. Sample size was 2210 (response rate: 88.4%). Mean age was 30.8 ± 8.1 years and 50.8% of the sample were males. The 1-year prevalence of CVS in the study population was 67.4%. Female gender (OR: 1.28), duration of occupation (OR: 1.07), daily computer usage (OR: 1.10), pre-existing eye disease (OR: 4.49), not using a VDT filter (OR: 1.02), use of contact lenses (OR: 3.21) and ergonomics practices knowledge (OR: 1.24) were all significantly associated with the presence of CVS. The duration of occupation (OR: 1.04) and presence of pre-existing eye disease (OR: 1.54) were significantly associated with the presence of 'severe CVS'. Sri Lankan computer workers had a high prevalence of CVS.
Female gender, longer duration of occupation, higher daily computer usage, pre-existing eye disease, not using a VDT filter, use of contact lenses and higher ergonomics practices knowledge were all significantly associated with the presence of CVS. The factors associated with the severity of CVS were the duration of occupation and presence of pre-existing eye disease.
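    The binary logistic regression the study describes, with a dichotomous outcome and mixed continuous/dichotomous predictors, can be sketched on simulated data (an illustration with assumed coefficients, not the study data); odds ratios are the exponentials of the fitted coefficients.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Newton-Raphson (IRLS) fit of a binary logistic regression.
    Returns coefficients; odds ratios are exp(coefficients)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        W = p * (1.0 - p)                      # IRLS weights
        H = X.T @ (W[:, None] * X)             # Hessian of the log-likelihood
        beta += np.linalg.solve(H, X.T @ (y - p))  # Newton step
    return beta

rng = np.random.default_rng(1)
n = 1000
# Hypothetical predictors (not the study data): years in occupation
# (continuous) and contact-lens use (dichotomous).
years = rng.uniform(0, 20, n)
lenses = rng.integers(0, 2, n).astype(float)
logit = -1.0 + 0.07 * years + np.log(3.2) * lenses   # assumed true effects
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), years, lenses])
beta = fit_logistic(X, y)
print("odds ratios:", np.exp(beta[1:]))   # per-year OR and contact-lens OR
```

    For a continuous predictor such as duration of occupation, the odds ratio is per unit (here, per year); for a dichotomous predictor it compares the two groups directly, which is how the ORs in the abstract should be read.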

  16. Statistical identification of gene association by CID in application of constructing ER regulatory network

    PubMed Central

    Liu, Li-Yu D; Chen, Chien-Yu; Chen, Mei-Ju M; Tsai, Ming-Shian; Lee, Cho-Han S; Phang, Tzu L; Chang, Li-Yun; Kuo, Wen-Hung; Hwa, Hsiao-Lin; Lien, Huang-Chun; Jung, Shih-Ming; Lin, Yi-Shing; Chang, King-Jen; Hsieh, Fon-Jou

    2009-01-01

    Background A variety of high-throughput techniques are now available for constructing comprehensive gene regulatory networks in systems biology. In this study, we report a new statistical approach for facilitating in silico inference of regulatory network structure. The new measure of association, coefficient of intrinsic dependence (CID), is model-free and can be applied to both continuous and categorical distributions. When given two variables X and Y, CID answers whether Y is dependent on X by examining the conditional distribution of Y given X. In this paper, we apply CID to analyze the regulatory relationships between transcription factors (TFs) (X) and their downstream genes (Y) based on clinical data. More specifically, we use estrogen receptor α (ERα) as the variable X, and the analyses are based on 48 clinical breast cancer gene expression arrays (48A). Results The analytical utility of CID was evaluated in comparison with four commonly used statistical methods, Galton-Pearson's correlation coefficient (GPCC), Student's t-test (STT), coefficient of determination (CoD), and mutual information (MI). When being compared to GPCC, CoD, and MI, CID reveals its preferential ability to discover the regulatory association where distribution of the mRNA expression levels on X and Y does not fit linear models. On the other hand, when CID is used to measure the association of a continuous variable (Y) against a discrete variable (X), it shows similar performance as compared to STT, and appears to outperform CoD and MI. In addition, this study established a two-layer transcriptional regulatory network to exemplify the usage of CID, in combination with GPCC, in deciphering gene networks based on gene expression profiles from patient arrays. Conclusion CID is shown to provide useful information for identifying associations between genes and transcription factors of interest in patient arrays. 
When coupled with the relationships detected by GPCC, the associations predicted by CID are applicable to the construction of transcriptional regulatory networks. This study shows how information from different data sources and learning algorithms can be integrated to investigate whether relevant regulatory mechanisms identified in cell models can also be partially re-identified in clinical samples of breast cancers. Availability: the implementation of CID in R code can be freely downloaded from . PMID:19292896

  17. Dependence of Halo Bias and Kinematics on Assembly Variables

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoju; Zheng, Zheng

    2018-06-01

    Using dark matter haloes identified in a large N-body simulation, we study halo assembly bias, with halo formation time, peak maximum circular velocity, concentration, and spin as the assembly variables. Instead of grouping haloes at fixed mass into different percentiles of each assembly variable, we present the joint dependence of halo bias on the values of halo mass and each assembly variable. In the plane of halo mass and one assembly variable, the joint dependence can be largely described as halo bias increasing outward from a global minimum. We find it unlikely that any combination of halo variables can absorb all assembly bias effects. We then present the joint dependence of halo bias on two assembly variables at fixed halo mass. The gradient of halo bias does not necessarily follow the correlation direction of the two assembly variables, and it varies with halo mass. Therefore, in general, for two correlated assembly variables one cannot be used as a proxy for the other in predicting the halo assembly bias trend. Finally, halo assembly is found to affect the kinematics of haloes. Low-mass haloes formed earlier can have much higher pairwise velocity dispersion than that of massive haloes. In general, halo assembly leads to a correlation between halo bias and the halo pairwise velocity distribution, with more strongly clustered haloes having higher pairwise velocity and velocity dispersion. However, the correlation is not tight, and the kinematics of haloes at fixed halo bias still depends on halo mass and assembly variables.

  18. Secure quantum key distribution using continuous variables of single photons.

    PubMed

    Zhang, Lijian; Silberhorn, Christine; Walmsley, Ian A

    2008-03-21

    We analyze the distribution of secure keys using quantum cryptography based on the continuous variable degree of freedom of entangled photon pairs. We derive the information capacity of a scheme based on the spatial entanglement of photons from a realistic source, and show that the standard measures of security known for quadrature-based continuous variable quantum cryptography (CV-QKD) are inadequate. A specific simple eavesdropping attack is analyzed to illuminate how secret information may be distilled well beyond the bounds of the usual CV-QKD measures.

  19. Tuning Into Brown Dwarfs: Long-Term Radio Monitoring of Two Very Low Mass Dwarfs

    NASA Astrophysics Data System (ADS)

    Van Linge, Russell; Burgasser, Adam J.; Melis, Carl; Williams, Peter K. G.

    2017-01-01

    The very lowest-mass (VLM) stars and brown dwarfs, with effective temperatures T < 3000 K, exhibit mixed magnetic activity trends, with H-alpha and X-ray emission that declines rapidly beyond type M7/M8, but persistent radio emission in roughly 10-20% of sources. The dozen or so known VLM radio emitters show a broad range of emission characteristics and time-dependent behavior, including steady persistent emission, periodic oscillations, periodic polarized bursts, and aperiodic flares. Understanding the evolution of these variability patterns, and in particular whether they undergo solar-like cycles, requires long-term monitoring. We report the results of a long-term JVLA monitoring program of two magnetically-active VLM dwarf binaries, the young M7 2MASS 1314+1320AB and the older L5 2MASS 1315-2649AB. At the bi-weekly cadence, 2MASS 1314 continues to show variability, with regular flaring, while 2MASS 1315 continues to be a quiescent emitter. On the daily time scale, both sources show a mean flux density that can vary significantly over just a few days. These results suggest that long-term radio behavior in radio-emitting VLM dwarfs is just as diverse and complex as short-term behavior.

  20. White matter lesion severity is associated with reduced cognitive performances in patients with normal CSF Abeta42 levels.

    PubMed

    Stenset, V; Hofoss, D; Johnsen, L; Skinningsrud, A; Berstad, A E; Negaard, A; Reinvang, I; Gjerstad, L; Fladby, T

    2008-12-01

    To identify possible associations between white matter lesions (WML) and cognition in patients with memory complaints, stratified into groups with normal and low cerebrospinal fluid (CSF) Abeta42 values, 215 consecutive patients with subjective memory complaints were retrospectively included. Patients were stratified into two groups with normal (n = 127) or low (n = 88) CSF Abeta42 levels (cut-off: 450 ng/l). Cognitive scores from the Mini-Mental State Examination (MMSE) and the Neurobehavioral Cognitive Status Examination (Cognistat) were used as continuous dependent variables in linear regression. WML load was used as a continuous independent variable and was scored with a visual rating scale. The regression model was corrected for possible confounding factors. WML were significantly associated with MMSE and all Cognistat subscores except language (repetition and naming) and attention in patients with normal CSF Abeta42 levels. No significant associations were observed in patients with low CSF Abeta42. WML were associated with impairment of multiple cognitive domains, including delayed recall and executive functions, in patients with normal CSF Abeta42 levels. The lack of such associations in patients with low CSF Abeta42 (i.e. with evidence of amyloid deposition) suggests that amyloid pathology may obscure the cognitive effects of WML.

  1. Inferring network structure in non-normal and mixed discrete-continuous genomic data.

    PubMed

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2018-03-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. © 2017, The International Biometric Society.

  2. Inferring network structure in non-normal and mixed discrete-continuous genomic data

    PubMed Central

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2017-01-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. PMID:28437848

  3. Criteria for genuine N-partite continuous-variable entanglement and Einstein-Podolsky-Rosen steering

    NASA Astrophysics Data System (ADS)

    Teh, R. Y.; Reid, M. D.

    2014-12-01

    Following previous work, we distinguish between genuine N-partite entanglement and full N-partite inseparability. Accordingly, we derive criteria to detect genuine multipartite entanglement using continuous-variable (position and momentum) measurements. Our criteria are similar to, but distinct from, those based on the van Loock-Furusawa inequalities, which detect full N-partite inseparability. We explain how the criteria can be used to detect the genuine N-partite entanglement of continuous-variable states generated from squeezed and vacuum state inputs, including the continuous-variable Greenberger-Horne-Zeilinger state, with explicit predictions for up to N = 9. This makes our work accessible to experiment. For N = 3, we also present criteria for tripartite Einstein-Podolsky-Rosen (EPR) steering. These criteria provide a means to demonstrate a genuine three-party EPR paradox, in which any single party is steerable by the remaining two parties.

  4. A Hierarchical Modulation Coherent Communication Scheme for Simultaneous Four-State Continuous-Variable Quantum Key Distribution and Classical Communication

    NASA Astrophysics Data System (ADS)

    Yang, Can; Ma, Cheng; Hu, Linxi; He, Guangqiang

    2018-06-01

    We present a hierarchical modulation coherent communication protocol, which simultaneously achieves classical optical communication and continuous-variable quantum key distribution. Our hierarchical modulation scheme consists of a quadrature phase-shift keying modulation for classical communication and a four-state discrete modulation for continuous-variable quantum key distribution. The simulation results based on practical parameters show that it is feasible to transmit both quantum information and classical information on a single carrier. We obtained a secure key rate of 10^{-3} bits/pulse to 10^{-1} bits/pulse within 40 kilometers, while the maximum bit error rate for classical information is about 10^{-7}. Because the continuous-variable quantum key distribution protocol is compatible with standard telecommunication technology, we think our hierarchical modulation scheme can be used to upgrade digital communication systems and extend their functionality in the future.

  5. Universal Quantum Computing with Arbitrary Continuous-Variable Encoding.

    PubMed

    Lau, Hoi-Kwan; Plenio, Martin B

    2016-09-02

    Implementing a qubit quantum computer in continuous-variable systems conventionally requires the engineering of specific interactions according to the encoding basis states. In this work, we present a unified formalism to conduct universal quantum computation with a fixed set of operations but arbitrary encoding. By storing a qubit in the parity of two or four qumodes, all computing processes can be implemented by basis state preparations, continuous-variable exponential-swap operations, and swap tests. Our formalism inherits the advantages that the quantum information is decoupled from collective noise, and logical qubits with different encodings can be brought to interact without decoding. We also propose a possible implementation of the required operations by using interactions that are available in a variety of continuous-variable systems. Our work separates the "hardware" problem of engineering quantum-computing-universal interactions, from the "software" problem of designing encodings for specific purposes. The development of quantum computer architecture could hence be simplified.

  6. Universal Quantum Computing with Arbitrary Continuous-Variable Encoding

    NASA Astrophysics Data System (ADS)

    Lau, Hoi-Kwan; Plenio, Martin B.

    2016-09-01

    Implementing a qubit quantum computer in continuous-variable systems conventionally requires the engineering of specific interactions according to the encoding basis states. In this work, we present a unified formalism to conduct universal quantum computation with a fixed set of operations but arbitrary encoding. By storing a qubit in the parity of two or four qumodes, all computing processes can be implemented by basis state preparations, continuous-variable exponential-swap operations, and swap tests. Our formalism inherits the advantages that the quantum information is decoupled from collective noise, and logical qubits with different encodings can be brought to interact without decoding. We also propose a possible implementation of the required operations by using interactions that are available in a variety of continuous-variable systems. Our work separates the "hardware" problem of engineering quantum-computing-universal interactions, from the "software" problem of designing encodings for specific purposes. The development of quantum computer architecture could hence be simplified.

  7. Continuous-variable protocol for oblivious transfer in the noisy-storage model.

    PubMed

    Furrer, Fabian; Gehring, Tobias; Schaffner, Christian; Pacher, Christoph; Schnabel, Roman; Wehner, Stephanie

    2018-04-13

    Cryptographic protocols are the backbone of our information society. This includes two-party protocols which offer protection against distrustful players. Such protocols can be built from a basic primitive called oblivious transfer. We present and experimentally demonstrate here a quantum protocol for oblivious transfer for optical continuous-variable systems, and prove its security in the noisy-storage model. This model allows us to establish security by sending more quantum signals than an attacker can reliably store during the protocol. The security proof is based on uncertainty relations that we derive for continuous-variable systems, which differ from the ones used in quantum key distribution. We experimentally demonstrate the proposed oblivious transfer protocol for various channel losses in a proof-of-principle experiment, using entangled two-mode squeezed states measured with balanced homodyne detection. Our work enables the implementation of arbitrary two-party quantum cryptographic protocols with continuous-variable communication systems.

  8. Continuous-variable quantum Gaussian process regression and quantum singular value decomposition of nonsparse low-rank matrices

    NASA Astrophysics Data System (ADS)

    Das, Siddhartha; Siopsis, George; Weedbrook, Christian

    2018-02-01

    With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers under certain assumptions regarding distribution of data and availability of efficient quantum access. Our algorithm shows that by using a continuous-variable quantum computer a dramatic speedup in computing Gaussian process regression can be achieved, i.e., the possibility of exponentially reducing the time to compute. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method of nonsparse low rank matrices and forms an important subroutine in our Gaussian process regression algorithm.

  9. A novel approach for evaluating the impact of fixed variables on photovoltaic (PV) solar installations using enhanced meta data analysis among higher education institutions in the United States

    NASA Astrophysics Data System (ADS)

    De Hoyos, Diane N.

    The global demand for electric energy has continuously increased over the last few decades. Some mature alternative generation methods are wind power, photovoltaic panels, biogas, and fuel cells. In the search for alternative energy sources that reduce the nation's dependency on non-renewable fuels, solar energy panels are a prominent option. The intent of these initiatives is to provide substantial energy savings, reduce dependence on the electrical grid, and capture net metering savings during peak energy-use hours. This study explores and provides a clearer picture of the adoption of solar photovoltaic technology in institutions of higher education. It examines the impact of different variables associated with photovoltaic installations at institutions of higher education in the United States on their energy generation. Secondary data were used with permission from the Association for the Advancement of Sustainability in Higher Education (AASHE). A multiple regression analysis was performed to determine the impact of different variables on energy generation. A meta-data transformation analysis offered a deeper investigation into the impact of the variables on the photovoltaic installations. Although a significant number of journal articles, dissertations, and theses on photovoltaic solar installations are available, there were few studies covering this volume of institutions of higher education. A study whose database includes this many variables is unique and gives a researcher the opportunity to investigate different facets of a solar installation. The installation data range from 1993 to 2015. Also informing this study are the researcher's experience in the procurement industry and as a team member of a solar installation at an institution of higher education in the southern United States.

  10. HIV and HLA Class I: an evolving relationship

    PubMed Central

    Goulder, Philip J.R.; Walker, Bruce D

    2014-01-01

    Successful vaccine development for infectious diseases has largely been achieved in settings where natural immunity to the pathogen results in clearance in at least some individuals. HIV presents an additional challenge in that natural clearance of infection does not occur, and the correlates of immune protection are still uncertain. However, partial control of viremia and markedly different outcomes of disease are observed in HIV infected persons. Here we examine the antiviral mechanisms implicated by one variable that has been consistently associated with extremes of outcome, namely HLA class I alleles, and in particular HLA-B, and examine the mechanisms by which this modulation is likely to occur, and the impact of these interactions on evolution of the virus and the host. Studies to date provide evidence for both HLA-dependent and epitope-dependent influences on viral control and viral evolution, and have important implications for the continued quest for an effective HIV vaccine. PMID:22999948

  11. Wearable sensor-based objective assessment of motor symptoms in Parkinson's disease.

    PubMed

    Ossig, Christiana; Antonini, Angelo; Buhmann, Carsten; Classen, Joseph; Csoti, Ilona; Falkenburger, Björn; Schwarz, Michael; Winkler, Jürgen; Storch, Alexander

    2016-01-01

    Effective management and development of new treatment strategies for motor symptoms in Parkinson's disease (PD) largely depend on clinical rating instruments such as the Unified PD Rating Scale (UPDRS) and the modified Abnormal Involuntary Movement Scale (mAIMS). Clinical rating scales have various limitations with regard to inter-rater variability and continuous monitoring. Patient-administered questionnaires such as the PD home diary, used to assess motor stages and fluctuations in late-stage PD, are frequently employed in clinical routine and as clinical trial endpoints, but diaries and questionnaires are tiring to complete, and recall bias impacts data quality, particularly in patients with cognitive dysfunction or depression. Consequently, there is a strong need for continuous and objective monitoring of motor symptoms in PD, both to improve therapeutic regimens and for use in clinical trials. Recent advances in battery technology, movement sensors such as gyroscopes and accelerometers, and information technology have boosted the field of objective measurement of movement in everyday life and medicine, with wearable sensors allowing continuous (long-term) monitoring. This systematic review summarizes the current wearable sensor-based devices used to objectively assess the various motor symptoms of PD.

  12. Elective neck dissection for primary oral cavity squamous cell carcinoma involving the tongue should include sublevel IIb.

    PubMed

    Maher, Nigel Gordon; Hoffman, Gary Russell

    2014-11-01

    The surgical clearance of sublevel IIb lymph nodes, facilitated by neck dissection, increases the risk of postoperative shoulder dysfunction. Our study purpose was to determine the value of including sublevel IIb in elective neck dissections for primary oral cavity squamous cell carcinoma (OCSCC). A retrospective cohort study based on a review of the pathology records accumulated by 1 head and neck surgeon was conducted for 71 patients with clinically node-negative, primary OCSCC treated from 2006 to June 2013. The predictor variables were the oral cavity subsite and tumor clinicopathologic characteristics (ie, perineural, perivascular, and perilymphatic invasion, tumor depth, and T stage). The primary outcome variable was the presence of sublevel IIb metastasis. The secondary outcome variables were the survival and tumor recurrence rates and metastases to any cervical level. Descriptive statistics were calculated for the categorical and continuous variables. A comparison of categorical variables was performed using Fisher's exact test; for continuous variables, t tests or the Mann-Whitney U test were used for 2 groups and analysis of variance or Kruskal-Wallis tests (with Bonferroni's correction) were used for more than 2 groups, depending on the distribution. Disease-specific survival (DSS) analyses were plotted for the predictor variables and patients with sublevel IIb metastasis. Competing risks models were created using the Fine and Gray method (SAS macro %PSHREG) to provide estimates of the crude and adjusted subhazard ratios for DSS for all variables. A total of 71 patients were included in the present study, of whom 69% were male. The greatest proportion of oral cavity subsites was from the tongue and floor of mouth. The overall frequency of sublevel IIb lymphatic metastases at neck dissection was 5.6% of the patient cohort. Sublevel IIb metastases occurred from the primary sites involving the tongue (n = 3) and retromolar trigone (n = 1). 
The incidence of perilymphatic and perivascular invasion was significantly associated with sublevel IIb lymphatic metastases (P < .02). Sublevel IIb is likely to be an important region to incorporate in elective neck dissections for primary OCSCC involving the tongue. More studies are needed, with greater numbers, to clarify the risk of metastasis to sublevel IIb from oral cavity subsites in primary OCSCC with clinically node-negative necks. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.

  13. Digital mapping of soil properties in Canadian managed forests at 250 m of resolution using the k-nearest neighbor method

    NASA Astrophysics Data System (ADS)

    Mansuy, N. R.; Paré, D.; Thiffault, E.

    2015-12-01

    Large-scale mapping of soil properties is increasingly important for environmental resource management. While forested areas play critical environmental roles at local and global scales, forest soil maps are typically at low resolution. The objective of this study was to generate continuous national maps of selected soil variables (C, N and soil texture) for the Canadian managed forest landbase at 250 m resolution. We produced these maps using the kNN method with a training dataset of 538 ground-plots from the National Forest Inventory (NFI) across Canada, and 18 environmental predictor variables. The best predictor variables were selected (7 topographic and 5 climatic variables) using the Least Absolute Shrinkage and Selection Operator method. On average, for all soil variables, topographic predictors explained 37% of the total variance versus 64% for the climatic predictors. The relative root mean square error (RMSE%) calculated with the leave-one-out cross-validation method gave values ranging between 22% and 99%, depending on the soil variables tested. RMSE values below 40% can be considered a good imputation in light of the low density of points used in this study. The study demonstrates strong capabilities for mapping forest soil properties at 250 m resolution, compared with the current Soil Landscape of Canada System, which is largely oriented towards the agricultural landbase. The methodology used here can potentially contribute to the national and international need for spatially explicit soil information in resource management science.
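    The imputation step of the kNN approach described above can be sketched in a few lines: each query location receives the mean response of the k training plots closest to it in predictor space. The plots, the two predictors, and the soil values below are toy numbers chosen for illustration only.

```python
# Toy k-nearest-neighbor imputation in the spirit of the mapping method
# above: predict a soil variable at a query point as the average of the
# k closest training plots in predictor space. All values are invented.
import math

def knn_predict(train_X, train_y, query, k=3):
    """Mean response of the k training points nearest to `query`."""
    order = sorted(range(len(train_X)),
                   key=lambda i: math.dist(train_X[i], query))
    return sum(train_y[i] for i in order[:k]) / k

# Training plots: (topographic index, climatic index) -> soil C (toy units).
train_X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
train_y = [10.0, 12.0, 11.0, 30.0, 32.0]

est = knn_predict(train_X, train_y, (0.2, 0.2), k=3)   # -> 11.0
```

    In practice the predictors would be standardized first (otherwise the variable with the largest numeric range dominates the distance), and accuracy would be assessed with leave-one-out cross-validation, as in the study.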

  14. On which timescales do gas transfer velocities control North Atlantic CO2 flux variability?

    NASA Astrophysics Data System (ADS)

    Couldrey, Matthew; Oliver, Kevin; Yool, Andrew; Halloran, Paul; Achterberg, Eric

    2016-04-01

    The North Atlantic is an important basin for the global ocean's uptake of anthropogenic and natural carbon dioxide (CO2), but the mechanisms controlling this carbon flux are not fully understood. The air-sea flux of CO2, F, is the product of a gas transfer velocity, k, the air-sea CO2 concentration gradient, ΔpCO2, and the temperature- and salinity-dependent solubility coefficient, α. k is difficult to constrain, representing the dominant uncertainty in F on short (instantaneous to interannual) timescales. Previous work shows that in the North Atlantic, ΔpCO2 and k both contribute significantly to interannual F variability, but that k is unimportant for multidecadal variability. On some timescale between interannual and multidecadal, gas transfer velocity variability and its associated uncertainty become negligible. Here, we quantify this critical timescale for the first time. Using an ocean model, we determine the importance of k, ΔpCO2 and α on a range of timescales. On interannual and shorter timescales, both ΔpCO2 and k are important controls on F. In contrast, pentadal to multidecadal North Atlantic flux variability is driven almost entirely by ΔpCO2; k contributes less than 25%. Finally, we explore how accurately one can estimate North Atlantic F without a knowledge of non-seasonal k variability, finding it possible for interannual and longer timescales. These findings suggest that continued efforts to better constrain gas transfer velocities are necessary to quantify interannual variability in the North Atlantic carbon sink. However, uncertainty in k variability is unlikely to limit the accuracy of estimates of longer term flux variability.
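    The bulk formula quoted in this abstract is simple enough to state as code: F = k · ΔpCO2 · α. The function below does exactly that, with illustrative numbers (not study values) showing that ΔpCO2 sets the sign of the flux while k and α only scale its magnitude.

```python
# Bulk air-sea CO2 flux, F = k * dpCO2 * alpha, as in the abstract.
# The numeric values are illustrative assumptions, not study results.

def co2_flux(k, dpco2, alpha):
    """Gas transfer velocity x concentration gradient x solubility."""
    return k * dpco2 * alpha

f_uptake = co2_flux(2.0, -40.0, 0.03)   # negative gradient: ocean uptake
f_outgas = co2_flux(2.0, 40.0, 0.03)    # positive gradient: outgassing
# -> approximately -2.4 and 2.4 (arbitrary but consistent units)
```

    This also makes the abstract's point concrete: uncertainty in k multiplies straight through to F, whereas the sign and much of the long-term variability of F come from ΔpCO2.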

  15. On which timescales do gas transfer velocities control North Atlantic CO2 flux variability?

    NASA Astrophysics Data System (ADS)

    Couldrey, Matthew P.; Oliver, Kevin I. C.; Yool, Andrew; Halloran, Paul R.; Achterberg, Eric P.

    2016-05-01

    The North Atlantic is an important basin for the global ocean's uptake of anthropogenic and natural carbon dioxide (CO2), but the mechanisms controlling this carbon flux are not fully understood. The air-sea flux of CO2, F, is the product of a gas transfer velocity, k, the air-sea CO2 concentration gradient, ΔpCO2, and the temperature- and salinity-dependent solubility coefficient, α. k is difficult to constrain, representing the dominant uncertainty in F on short (instantaneous to interannual) timescales. Previous work shows that in the North Atlantic, ΔpCO2 and k both contribute significantly to interannual F variability but that k is unimportant for multidecadal variability. On some timescale between interannual and multidecadal, gas transfer velocity variability and its associated uncertainty become negligible. Here we quantify this critical timescale for the first time. Using an ocean model, we determine the importance of k, ΔpCO2, and α on a range of timescales. On interannual and shorter timescales, both ΔpCO2 and k are important controls on F. In contrast, pentadal to multidecadal North Atlantic flux variability is driven almost entirely by ΔpCO2; k contributes less than 25%. Finally, we explore how accurately one can estimate North Atlantic F without a knowledge of nonseasonal k variability, finding it possible for interannual and longer timescales. These findings suggest that continued efforts to better constrain gas transfer velocities are necessary to quantify interannual variability in the North Atlantic carbon sink. However, uncertainty in k variability is unlikely to limit the accuracy of estimates of longer-term flux variability.

  16. On which timescales do gas transfer velocities control North Atlantic CO2 flux variability?

    NASA Astrophysics Data System (ADS)

    Couldrey, M.; Oliver, K. I. C.; Yool, A.; Halloran, P. R.; Achterberg, E. P.

    2016-02-01

    The North Atlantic is an important basin for the global ocean's uptake of anthropogenic and natural carbon dioxide (CO2), but the mechanisms controlling this carbon flux are not fully understood. The air-sea flux of CO2, F, is the product of a gas transfer velocity, k, the air-sea CO2 concentration gradient, ΔpCO2, and the temperature and salinity-dependent solubility coefficient, α. k is difficult to constrain, representing the dominant uncertainty in F on short (instantaneous to interannual) timescales. Previous work shows that in the North Atlantic, ΔpCO2 and k both contribute significantly to interannual F variability, but that k is unimportant for multidecadal variability. On some timescale between interannual and multidecadal, gas transfer velocity variability and its associated uncertainty become negligible. Here, we quantify this critical timescale for the first time. Using an ocean model, we determine the importance of k, ΔpCO2 and α on a range of timescales. On interannual and shorter timescales, both ΔpCO2 and k are important controls on F. In contrast, pentadal to multidecadal North Atlantic flux variability is driven almost entirely by ΔpCO2; k contributes less than 25%. Finally, we explore how accurately one can estimate North Atlantic F without a knowledge of non-seasonal k variability, finding it possible for interannual and longer timescales. These findings suggest that continued efforts to better constrain gas transfer velocities are necessary to quantify interannual variability in the North Atlantic carbon sink. However, uncertainty in k variability is unlikely to limit the accuracy of estimates of longer term flux variability.

  17. Reward-Dependent Modulation of Movement Variability

    PubMed Central

    Izawa, Jun; Shadmehr, Reza

    2015-01-01

    Movement variability is often considered an unwanted byproduct of a noisy nervous system. However, variability can signal a form of implicit exploration, indicating that the nervous system is intentionally varying the motor commands in search of actions that yield the greatest success. Here, we investigated the role of the human basal ganglia in controlling reward-dependent motor variability as measured by trial-to-trial changes in performance during a reaching task. We designed an experiment in which the only performance feedback was success or failure and quantified how reach variability was modulated as a function of the probability of reward. In healthy controls, reach variability increased as the probability of reward decreased. Control of variability depended on the history of past rewards, with the largest trial-to-trial changes occurring immediately after an unrewarded trial. In contrast, in participants with Parkinson's disease, a known example of basal ganglia dysfunction, reward was a poor modulator of variability; that is, the patients showed an impaired ability to increase variability in response to decreases in the probability of reward. This was despite the fact that, after rewarded trials, reach variability in the patients was comparable to healthy controls. In summary, we found that movement variability is partially a form of exploration driven by the recent history of rewards. When the function of the human basal ganglia is compromised, the reward-dependent control of movement variability is impaired, particularly affecting the ability to increase variability after unsuccessful outcomes. PMID:25740529

  18. Toward quantum plasmonic networks

    DOE PAGES

    Holtfrerich, M. W.; Dowran, M.; Davidson, R.; ...

    2016-08-30

    Here, we demonstrate the transduction of macroscopic quantum entanglement by independent, distant plasmonic structures embedded in separate thin silver films. In particular, we show that the plasmon-mediated transmission through each film conserves spatially dependent, entangled quantum images, opening the door for the implementation of parallel quantum protocols, super-resolution imaging, and quantum plasmonic sensing geometries at the nanoscale level. The conservation of quantum information by the transduction process shows that continuous variable multi-mode entanglement is momentarily transferred from entangled beams of light to the space-like separated, completely independent plasmonic structures, thus providing a first important step toward establishing a multichannel quantum network across separate solid-state substrates.

  19. Quasivariational Solutions for First Order Quasilinear Equations with Gradient Constraint

    NASA Astrophysics Data System (ADS)

    Rodrigues, José Francisco; Santos, Lisa

    2012-08-01

    We prove the existence of solutions for a quasi-variational inequality of evolution with a first order quasilinear operator and a variable convex set which is characterized by a constraint on the absolute value of the gradient that depends on the solution itself. The only required assumption on the nonlinearity of this constraint is its continuity and positivity. The method relies on an appropriate parabolic regularization and suitable a priori estimates. We also obtain the existence of stationary solutions by studying the asymptotic behaviour in time. In the variational case, corresponding to a constraint independent of the solution, we also give uniqueness results.

  20. Structured Modeling and Analysis of Stochastic Epidemics with Immigration and Demographic Effects

    PubMed Central

    Baumann, Hendrik; Sandmann, Werner

    2016-01-01

    Stochastic epidemics with open populations of variable population sizes are considered where due to immigration and demographic effects the epidemic does not eventually die out forever. The underlying stochastic processes are ergodic multi-dimensional continuous-time Markov chains that possess unique equilibrium probability distributions. Modeling these epidemics as level-dependent quasi-birth-and-death processes enables efficient computations of the equilibrium distributions by matrix-analytic methods. Numerical examples for specific parameter sets are provided, which demonstrates that this approach is particularly well-suited for studying the impact of varying rates for immigration, births, deaths, infection, recovery from infection, and loss of immunity. PMID:27010993
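    As a one-dimensional stand-in for the level-dependent quasi-birth-and-death processes described above, the sketch below computes the equilibrium distribution of a simple birth-death chain in which immigration keeps the epidemic from dying out forever. All rates are invented toy values; a real QBD analysis would apply matrix-analytic methods to block-structured generators rather than this scalar special case.

```python
# Toy equilibrium of a finite birth-death chain (state n = number
# infected). Immigration makes state 0 non-absorbing, so a unique
# equilibrium exists; for birth-death chains it has product form:
#   pi[n+1] / pi[n] = birth(n) / death(n+1).
# Rates below are illustrative assumptions, not from the article.

def bd_equilibrium(birth, death, n_max):
    pi = [1.0]
    for n in range(n_max):
        pi.append(pi[-1] * birth(n) / death(n + 1))
    z = sum(pi)                      # normalize to a probability vector
    return [p / z for p in pi]

imm, beta, mu = 0.5, 0.2, 1.0        # immigration, infection, recovery rates
birth = lambda n: imm + beta * n     # upward rate out of state n
death = lambda n: mu * n             # downward rate out of state n
pi = bd_equilibrium(birth, death, n_max=50)
```

    Truncating at n_max is harmless here because the tail probabilities decay geometrically; the same idea, with matrix ratios replacing the scalar ratio, is what the matrix-analytic QBD machinery automates.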

  1. Structured Modeling and Analysis of Stochastic Epidemics with Immigration and Demographic Effects.

    PubMed

    Baumann, Hendrik; Sandmann, Werner

    2016-01-01

    Stochastic epidemics with open populations of variable population sizes are considered where due to immigration and demographic effects the epidemic does not eventually die out forever. The underlying stochastic processes are ergodic multi-dimensional continuous-time Markov chains that possess unique equilibrium probability distributions. Modeling these epidemics as level-dependent quasi-birth-and-death processes enables efficient computations of the equilibrium distributions by matrix-analytic methods. Numerical examples for specific parameter sets are provided, which demonstrates that this approach is particularly well-suited for studying the impact of varying rates for immigration, births, deaths, infection, recovery from infection, and loss of immunity.

  2. Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2011-01-01

    A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain-gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain-gage outputs. Iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example discussed in the paper illustrates the application of the new method to a realistic simulated temperature-dependent calibration data set for a six-component balance.

  3. Patient Continued Use of Online Health Care Communities: Web Mining of Patient-Doctor Communication.

    PubMed

    Wu, Bing

    2018-04-16

    In practice, online health communities have passed the adoption stage and reached the diffusion phase of development. In this phase, patients equipped with knowledge regarding the issues involved in health care are capable of switching between different communities to maximize their online health community activities. Online health communities employ doctors to answer patient questions, and high-quality online health communities are more likely to be acknowledged by patients. Therefore, the factors that motivate patients to maintain ongoing relationships with online health communities must be addressed; however, this question has received limited scholarly attention. The purpose of this study was to identify the factors that drive patients to continue their use of online health communities in which doctor-patient communication occurs. This was achieved by integrating the information system success model with online health community features. A Web spider was used to download and extract data from one of the most authoritative Chinese online health communities in which communication occurs between doctors and patients. The time span analyzed in this study was from January 2017 to March 2017. A sample of 469 valid anonymous patients with 9667 posts was obtained (the equivalent of 469 respondents in survey research). A combination of Web mining and structural equation modeling was then conducted to test the research hypotheses. The results show that the research framework integrating the information system success model and online health community features contributes to our understanding of the factors that drive patients' relationships with online health communities. The primary findings are as follows: (1) perceived usefulness was found to be significantly determined by three exogenous variables (ie, social support, information quality, and service quality; R²=0.88); these variables explain 87.6% of the variance in the perceived usefulness of online health communities; (2) similarly, patient satisfaction was found to be significantly determined by the three variables listed above (R²=0.69), which explain 69.3% of the variance in patient satisfaction; (3) continued use (the dependent variable) was significantly influenced by perceived usefulness and patient satisfaction (R²=0.93); that is, the combined effects of perceived usefulness and patient satisfaction explain 93.4% of the variance in continued use; and (4) unexpectedly, individual literacy had no influence on the perceived usefulness and satisfaction of patients using online health communities. First, this study contributes to the existing literature on the continued use of online health communities using an empirical approach. Second, an appropriate metric was developed to assess constructs related to the proposed research model. Additionally, a Web spider enabled us to acquire objective data relatively easily and frequently, thereby overcoming a major limitation of survey techniques. ©Bing Wu. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.04.2018.
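
    The variance-explained figures reported above come from R² statistics. As a sketch with simulated data (the coefficients and noise level are invented; only the three-predictor structure is taken from the abstract), R² for a construct regressed on three exogenous variables can be computed as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "perceived usefulness" driven by three hypothetical
# exogenous variables (assumed coefficients, illustrative only).
n = 469
support, info_q, service_q = rng.normal(size=(3, n))
usefulness = (0.5 * support + 0.4 * info_q + 0.3 * service_q
              + 0.25 * rng.normal(size=n))

X = np.column_stack([np.ones(n), support, info_q, service_q])
beta, *_ = np.linalg.lstsq(X, usefulness, rcond=None)
resid = usefulness - X @ beta
r2 = 1.0 - resid @ resid / ((usefulness - usefulness.mean()) ** 2).sum()
```

    In a full structural equation model the R² values are computed per endogenous construct, but the interpretation as proportion of variance explained is the same.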

  4. Naltrexone and Cognitive Behavioral Therapy for the Treatment of Alcohol Dependence

    PubMed Central

    Baros, AM; Latham, PK; Anton, RF

    2008-01-01

    Background: Sex differences with regard to pharmacotherapy for alcoholism are a topic of concern following publications suggesting that naltrexone, one of the longest-approved treatments for alcoholism, is not as effective in women as in men. This study was conducted by combining two randomized placebo-controlled clinical trials that utilized similar methodologies and personnel, with the data amalgamated to evaluate sex effects in a reasonably sized sample. Methods: 211 alcoholics (57 female; 154 male) were randomized to the naltrexone/CBT or placebo/CBT arm of the two clinical trials analyzed. Baseline variables were examined for differences between sex and treatment groups via analysis of variance (ANOVA) for continuous variables or chi-square tests for categorical variables. All initial outcome analysis was conducted under an intent-to-treat analysis plan. Effect sizes for naltrexone over placebo were determined by Cohen's d. Results: The effect size of naltrexone over placebo for the following outcome variables was similar in men and women (%days abstinent (PDA) d=0.36, %heavy drinking days (PHDD) d=0.36, and total standard drinks (TSD) d=0.36). Only in men were the differences significant, secondary to the larger sample size (PDA p=0.03; PHDD p=0.03; TSD p=0.04). There were a few variables (GGT change from baseline to week 12: men d=0.36, p=0.05; women d=0.20, p=0.45; and drinks per drinking day: men d=0.36, p=0.05; women d=0.28, p=0.34) where the naltrexone effect size for men was greater than for women. In women, naltrexone tended to increase continuous abstinent days before a first drink (women d=0.46, p=0.09; men d=0.00, p=0.44). Conclusions: The effect size of naltrexone over placebo appeared similar in women and men in our hands, suggesting that the reported sex differences in naltrexone response might have to do with sample size and/or endpoint drinking variables rather than any inherent pharmacological or biological difference in response. PMID:18336635
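
    Cohen's d, the effect-size measure used above, is the mean difference scaled by the pooled standard deviation. A minimal generic implementation (not tied to the trial data):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d using the pooled (ddof=1) standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1)
                  + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Tiny worked example: both groups have variance 4, so the pooled SD is 2
# and d = (4 - 3) / 2 = 0.5, a medium effect on the conventional scale.
d = cohens_d(np.array([2.0, 4.0, 6.0]), np.array([1.0, 3.0, 5.0]))
```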

  5. Factors influencing the atmospheric concentrations of PCBs at an abandoned e-waste recycling site in South China.

    PubMed

    Wang, Yan; Wu, Xiaowei; Hou, Minmin; Zhao, Hongxia; Chen, Ruize; Luo, Chunling; Zhang, Gan

    2017-02-01

    The diurnal atmospheric concentrations of polychlorinated biphenyls (PCBs) were investigated at an abandoned e-waste recycling site in South China during winter and summer. Total PCB concentrations during winter and summer were 27.6-212 and 368-1704 pg/m³ in the particulate phase and 270-697 and 3000-15,500 pg/m³ in the gaseous phase, respectively. Both gaseous and particulate PCB concentrations and compositions exhibited significant differences between winter and summer samples, but no diurnal variations during the measurement period. Correlation analysis between PCB concentrations and meteorological conditions, including atmospheric temperature, humidity, and mixing layer height, suggested that the seasonal variability of atmospheric PCB concentrations was strongly temperature-dependent, while the diurnal variability was probably source-dependent. The temperature-driven variations were also supported by the significant linear correlation between ln P and 1/T in the Clausius-Clapeyron plot. Although the government has implemented controls to reduce e-waste pollution, both the relatively high concentrations of PCBs and the diurnal variation in the air suggested that emissions from occasional e-waste recycling activities may still exist in this recycling area. These results underline the importance of continuing e-waste recycling site management long after abandonment. Copyright © 2016 Elsevier B.V. All rights reserved.
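
    The Clausius-Clapeyron analysis mentioned above regresses ln P against 1/T; a steep negative slope indicates temperature-driven volatilization. A sketch with fabricated numbers (the slope and intercept values are assumptions chosen only to show the mechanics):

```python
import numpy as np

# Hypothetical gas-phase partial pressures at air temperatures T (K),
# generated from an assumed Clausius-Clapeyron relation ln P = m/T + b.
T = np.array([278.0, 283.0, 288.0, 293.0, 298.0, 303.0])
m_true, b_true = -8000.0, 30.0
lnP = m_true / T + b_true

# Linear fit of ln P on 1/T recovers the slope m and intercept b.
slope, intercept = np.polyfit(1.0 / T, lnP, 1)
```

    With field data the fit is noisy, and the slope is sometimes converted into an apparent enthalpy of volatilization via m = -ΔH/R.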

  6. Development of a fractional-step method for the unsteady incompressible Navier-Stokes equations in generalized coordinate systems

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Moshe; Kwak, Dochan; Vinokur, Marcel

    1992-01-01

    A fractional step method is developed for solving the time-dependent three-dimensional incompressible Navier-Stokes equations in generalized coordinate systems. The primitive variable formulation uses the pressure, defined at the center of the computational cell, and the volume fluxes across the faces of the cells as the dependent variables, instead of the Cartesian components of the velocity. This choice is equivalent to using the contravariant velocity components in a staggered grid multiplied by the volume of the computational cell. The governing equations are discretized by finite volumes using a staggered mesh system. The solution of the continuity equation is decoupled from the momentum equations by a fractional step method which enforces mass conservation by solving a Poisson equation. This procedure, combined with the consistent approximations of the geometric quantities, is done to satisfy the discretized mass conservation equation to machine accuracy, as well as to gain the favorable convergence properties of the Poisson solver. The momentum equations are solved by an approximate factorization method, and a novel ZEBRA scheme with four-color ordering is devised for the efficient solution of the Poisson equation. Several two- and three-dimensional laminar test cases are computed and compared with other numerical and experimental results to validate the solution method. Good agreement is obtained in all cases.
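
    The core of the fractional-step idea, enforcing mass conservation by solving a Poisson equation for pressure, can be sketched on a uniform Cartesian grid. The toy below (a plain Jacobi iteration with homogeneous Dirichlet boundaries and an assumed point-source right-hand side, nothing like the paper's four-color ZEBRA scheme in generalized coordinates) solves the discrete Poisson equation:

```python
import numpy as np

n = 16
h = 1.0 / n
rhs = np.zeros((n, n))
rhs[n // 2, n // 2] = 1.0          # assumed divergence source term
p = np.zeros((n, n))               # pressure, zero on the boundary

# Jacobi iteration for the 5-point discrete Laplacian: lap(p) = rhs.
# The right-hand side is evaluated with the old p, then assigned.
for _ in range(2000):
    p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                            + p[1:-1, 2:] + p[1:-1, :-2]
                            - h * h * rhs[1:-1, 1:-1])
```

    In the actual method this solve decouples the continuity equation from the momentum equations; efficient solvers such as the ZEBRA ordering exist precisely because plain Jacobi converges too slowly for production use.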

  7. Highly Parallel Alternating Directions Algorithm for Time Dependent Problems

    NASA Astrophysics Data System (ADS)

    Ganzha, M.; Georgiev, K.; Lirkov, I.; Margenov, S.; Paprzycki, M.

    2011-11-01

    In our work, we consider the time dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. For this problem, a parallel algorithm based on a novel direction splitting approach is developed. Here, the pressure equation is derived from a perturbed form of the continuity equation, in which the incompressibility constraint is penalized in a negative norm induced by the direction splitting. The scheme used in the algorithm is composed of two parts: (i) velocity prediction, and (ii) pressure correction. This is a Crank-Nicolson-type two-stage time integration scheme for two and three dimensional parabolic problems in which the second-order derivative, with respect to each space variable, is treated implicitly while the other variable is made explicit at each time sub-step. In order to achieve good parallel performance, the solution of the Poisson problem for the pressure correction is replaced by solving a sequence of one-dimensional second order elliptic boundary value problems in each spatial direction. The parallel code is implemented using the standard MPI functions and tested on two modern parallel computer systems. The performed numerical tests demonstrate a good level of parallel efficiency and scalability of the studied direction-splitting-based algorithm.
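
    Each one-dimensional second-order solve in the direction-splitting step reduces to a tridiagonal system, which the Thomas algorithm handles in O(n). A generic serial sketch (the paper's MPI implementation is not reproduced):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused).
    """
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):          # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)                # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Diagonally dominant example system.
a = np.array([0.0, 1.0, 1.0, 1.0, 1.0])
b = np.full(5, 4.0)
c = np.array([1.0, 1.0, 1.0, 1.0, 0.0])
d = np.arange(5.0)
x = thomas(a, b, c, d)
```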

  8. Sex differences in rank attainment and research activities among academic psychiatrists.

    PubMed

    Leibenluft, E; Dial, T H; Haviland, M G; Pincus, H A

    1993-11-01

    Data from a survey distributed to all full-time faculty in academic departments of psychiatry were used to examine possible sex differences in research activities and rank attainment among psychiatrists. A total of 1923 psychiatrists responded, 1564 men (81.3%) and 359 women (18.7%). Continuous dependent variables were analyzed by using analyses of covariance with the year of graduation from medical school as a covariate. For categorical dependent variables, the sample was divided into four 10-year cohorts based on the year of graduation from medical school, and differences between men and women were analyzed with χ² tests. Over the entire sample, men were more likely than women to have had research training, to have ever been principal investigators on peer-reviewed grants, to mentor research trainees, to be currently involved in research activities, and to meet defined criteria as a "researcher." Many gender differences remained significant after controlling for seniority and research training. In every cohort, the men had attained higher academic rank than the women. In general, differences in research activity and productivity were most marked in the youngest cohort. To ensure a rich talent pool for psychiatric research, efforts must be made to recruit and support researchers from among the increased number of women in psychiatry.

  9. Practice and transfer of the frequency structures of continuous isometric force.

    PubMed

    King, Adam C; Newell, Karl M

    2014-04-01

    The present study examined the learning, retention and transfer of task outcome and the frequency-dependent properties of isometric force output dynamics. During practice, participants produced isometric force to a moderately irregular target pattern under either a constant or a variable presentation. Immediate and delayed retention tests examined the persistence of practice-induced changes of force output dynamics, and transfer tests investigated performance on novel (low and high) irregular target patterns. The results showed that both constant and variable practice conditions exhibited similar reductions in task error, but that the frequency-dependent properties were differentially modified across the entire bandwidth (0-12 Hz) of force output dynamics as a function of practice. Task outcome exhibited persistent properties on the delayed retention test, whereas the retention of faster time-scale processes (i.e., 4-12 Hz) of force output was mediated as a function of frequency structure. The structure of the force frequency components during early practice and following a rest interval was characterized by an enhanced emphasis on the slow time scales related to perceptual-motor feedback. The findings support the proposition that there are different time scales of learning at the levels of task outcome and the adaptive frequency bandwidths of force output dynamics. Copyright © 2014 Elsevier B.V. All rights reserved.
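
    Band-limited power of a force signal, the kind of frequency-dependent property analyzed above, can be estimated from an FFT. A self-contained sketch with a synthetic signal (the 100 Hz sampling rate and the component frequencies and amplitudes are assumptions), splitting power into the slow (0-4 Hz) and faster (4-12 Hz) bands the abstract refers to:

```python
import numpy as np

fs = 100.0                       # assumed sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
# Synthetic force: a strong slow (1 Hz) and a weaker fast (8 Hz) component.
force = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 8.0 * t)

freqs = np.fft.rfftfreq(len(force), 1.0 / fs)
power = np.abs(np.fft.rfft(force)) ** 2

def band_power(lo, hi):
    """Total spectral power in the half-open frequency band [lo, hi)."""
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].sum()

slow = band_power(0.0, 4.0)      # perceptual-motor feedback time scales
fast = band_power(4.0, 12.0)     # faster time-scale processes
```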

  10. Design approaches to experimental mediation☆

    PubMed Central

    Pirlott, Angela G.; MacKinnon, David P.

    2016-01-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259

  11. Design approaches to experimental mediation.

    PubMed

    Pirlott, Angela G; MacKinnon, David P

    2016-09-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., "measurement-of-mediation" designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable.
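
    In a measurement-of-mediation design, the indirect effect is typically estimated as the product of the coefficients a (X→M) and b (M→Y, adjusting for X), even though, as argued above, the M→Y link remains correlational. A simulated sketch (the data-generating coefficients and sample size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical measurement-of-mediation data: X randomized, M only measured.
n = 500
x = rng.integers(0, 2, n).astype(float)   # randomized independent variable
m = 0.6 * x + rng.normal(size=n)          # mediator (not randomized)
y = 0.5 * m + 0.2 * x + rng.normal(size=n)

# Product-of-coefficients estimate of the indirect effect a*b.
a = np.polyfit(x, m, 1)[0]                     # X -> M path
Xmat = np.column_stack([np.ones(n), m, x])
b = np.linalg.lstsq(Xmat, y, rcond=None)[0][1] # M -> Y path, adjusting for X
indirect = a * b
```

    The point of the manipulation-of-mediator designs discussed above is that b here is only a correlational estimate; an unmeasured confounder of M and Y would bias it even with X randomized.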

  12. [Effect of stock abundance and environmental factors on the recruitment success of small yellow croaker in the East China Sea].

    PubMed

    Liu, Zun-lei; Yuan, Xing-wei; Yang, Lin-lin; Yan, Li-ping; Zhang, Hui; Cheng, Jia-hua

    2015-02-01

    Multiple hypotheses are available to explain recruitment rate. Model selection methods can be used to identify the model that best supports a particular hypothesis. However, using a single model to estimate recruitment success is often inadequate for an overexploited population because of high model uncertainty. In this study, stock-recruitment data for small yellow croaker in the East China Sea, collected from fishery-dependent and fishery-independent surveys between 1992 and 2012, were used to examine density-dependent effects on recruitment success. Model selection methods based on frequentist criteria (AIC, maximum adjusted R², and P-values) and a Bayesian method (Bayesian model averaging, BMA) were applied to identify the relationship between recruitment and environmental conditions. Interannual variability of the East China Sea environment was indicated by sea surface temperature (SST), meridional wind stress (MWS), zonal wind stress (ZWS), sea surface pressure (SPP), and runoff of the Changjiang River (RCR). Mean absolute error, mean squared predictive error, and the continuous ranked probability score were calculated to evaluate the predictive performance for recruitment success. The results showed that the model structures selected by the three frequentist methods were not consistent: the predictive variables were spawning abundance and MWS by AIC, spawning abundance alone by P-values, and spawning abundance, MWS, and RCR by maximum adjusted R². Recruitment success decreased linearly with stock abundance (P < 0.01), suggesting that the overcompensation effect in recruitment success might be due to cannibalism or food competition. Meridional wind intensity showed a marginally significant positive effect on recruitment success (P = 0.06), while runoff of the Changjiang River showed a marginally negative effect (P = 0.07). Based on mean absolute error and the continuous ranked probability score, the predictive error of the models obtained from BMA was the smallest among the approaches, while that of the models selected by the P-values of the independent variables was the highest; however, the mean squared predictive error of the models selected by maximum adjusted R² was the highest. We found that the BMA method could improve the prediction of recruitment success, derive more accurate prediction intervals, and quantitatively evaluate model uncertainty.
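
    AIC-based selection of the kind described above compares penalized fit across candidate models. A minimal sketch with simulated stock-recruitment data (the coefficients, noise level, and the Gaussian-error AIC formula up to an additive constant are assumptions):

```python
import numpy as np

def aic_ols(y, X):
    """AIC for an OLS fit under Gaussian errors (up to an additive constant)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (k + 1)   # +1 for the error variance

rng = np.random.default_rng(2)
n = 60
stock = rng.normal(size=n)                      # spawning abundance (scaled)
wind = rng.normal(size=n)                       # meridional wind stress (scaled)
log_rs = -0.8 * stock + 0.3 * wind + 0.3 * rng.normal(size=n)

# Candidate models: intercept only, + stock, + stock and wind.
X0 = np.ones((n, 1))
X1 = np.column_stack([np.ones(n), stock])
X2 = np.column_stack([np.ones(n), stock, wind])
```

    The model with the lowest AIC is preferred; BMA instead averages predictions over candidate models, weighting each by its support.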

  13. An Optimization-Based Approach to Injector Element Design

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar; Turner, Jim (Technical Monitor)

    2000-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for gaseous oxygen/gaseous hydrogen (GO2/GH2) injector elements. A swirl coaxial element and an unlike impinging element (a fuel-oxidizer-fuel triplet) are used to facilitate the study. The elements are optimized in terms of design variables such as fuel pressure drop, deltaP(sub f), oxidizer pressure drop, deltaP(sub o), combustor length, L(sub comb), and full cone swirl angle, theta, (for the swirl element) or impingement half-angle, alpha, (for the impinging element) at a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for both element types. Method i is then used to generate response surfaces for each dependent variable for both types of elements. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail for each element type. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the element design is illustrated. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others.
Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio. Finally, combining results from both elements to simulate a trade study, thrust-to-weight trends are illustrated and examined in detail.
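
    Response surfaces of the kind used by method i are typically low-order polynomials fitted to designed-experiment data by least squares. The sketch below fits a quadratic surface for one hypothetical dependent variable in two design variables (all numbers are invented; method i itself and the empirical design methodology are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical designs: pressure drop (arbitrary units) and swirl angle (deg).
n = 40
dp = rng.uniform(0.5, 2.0, n)
theta = rng.uniform(30.0, 60.0, n)

# Simulated response, e.g. an energy-release-efficiency-like quantity
# with an assumed interior optimum near dp = 1.2, theta = 45.
ere = (95.0 - 2.0 * (dp - 1.2) ** 2
       - 0.01 * (theta - 45.0) ** 2
       + 0.1 * rng.normal(size=n))

# Full quadratic response surface in the two design variables.
X = np.column_stack([np.ones(n), dp, theta, dp ** 2, theta ** 2, dp * theta])
beta, *_ = np.linalg.lstsq(X, ere, rcond=None)
resid = ere - X @ beta
r2 = 1.0 - resid @ resid / ((ere - ere.mean()) ** 2).sum()
```

    Composite (desirability-weighted) surfaces combine several such fits, one per dependent variable, into a single objective for optimization.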

  14. Selection of key ambient particulate variables for epidemiological studies - applying cluster and heatmap analyses as tools for data reduction.

    PubMed

    Gu, Jianwei; Pitz, Mike; Breitner, Susanne; Birmili, Wolfram; von Klot, Stephanie; Schneider, Alexandra; Soentgen, Jens; Reller, Armin; Peters, Annette; Cyrys, Josef

    2012-10-01

    The success of epidemiological studies depends on the use of appropriate exposure variables. The purpose of this study is to extract a relatively small selection of variables characterizing ambient particulate matter from a large measurement data set. The original data set comprised a total of 96 particulate matter variables that have been continuously measured since 2004 at an urban background aerosol monitoring site in the city of Augsburg, Germany. Many of the original variables were derived from the measured particle size distribution (PSD) across the particle diameter range 3 nm to 10 μm, including size-segregated particle number concentration, particle length concentration, particle surface concentration, and particle mass concentration. The data set was complemented by integral aerosol variables measured by independent instruments, including black carbon, sulfate, particle active surface concentration, and particle length concentration. Such a large number of measured variables obviously cannot be used in health effect analyses simultaneously; the aim of this study is therefore a pre-screening and selection of the key variables to be used as input in forthcoming epidemiological studies. We present two methods of parameter selection and apply them to data from the two-year period 2007-2008. We used the agglomerative hierarchical cluster method to find groups of similar variables; in total, we selected 15 key variables from 9 clusters, which are recommended for epidemiological analyses. We also applied a two-dimensional visualization technique called "heatmap" analysis to the Spearman correlation matrix, by which 12 key variables were selected. Moreover, the positive matrix factorization (PMF) method was applied to the PSD data to characterize possible particle sources. Correlations between the variables and PMF factors were used to interpret the meaning of the cluster and heatmap analyses.
Copyright © 2012 Elsevier B.V. All rights reserved.
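
    Variable reduction of this kind rests on a rank correlation matrix plus a grouping rule. Below is a simplified stand-in for the paper's agglomerative clustering: the 0.8 threshold and the greedy keep-one-representative rule are assumptions, and ties in the rank transform are ignored, which is acceptable for continuous measurements.

```python
import numpy as np

def spearman_matrix(X):
    """Spearman correlations via a rank transform (no tie handling)."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    return np.corrcoef(ranks, rowvar=False)

def greedy_select(corr, thresh=0.8):
    """Keep a variable only if |rho| < thresh with every variable kept so far."""
    keep = []
    for j in range(corr.shape[1]):
        if all(abs(corr[j, k]) < thresh for k in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(5)
x0 = rng.normal(size=100)
x1 = rng.normal(size=100)
x2 = x0 ** 3                      # monotone transform: rank-identical to x0
selected = greedy_select(spearman_matrix(np.column_stack([x0, x1, x2])))
```

    The redundant variable x2 is dropped because its Spearman correlation with x0 is exactly 1, which is the kind of redundancy the cluster and heatmap analyses are designed to expose.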

  15. Quantifying variability of the solar resource using the Kriging method

    NASA Astrophysics Data System (ADS)

    Monger, Samuel Haze

    Energy consumption will steadily rise in the coming years, and if fossil fuels, particularly coal, continue to be the primary resource for electricity generation, our planet is going to face many hardships. Solar energy is the most abundant resource available to humankind, and although solar-generated power is still expensive, the technology is in a state of rapid development as governments strive to meet renewable energy goals as part of the effort to slow climate change and become less dependent on finite resources. However, there are many valid concerns associated with integrating high levels of solar energy into the transmission grid, because drops in the solar resource cause rapid changes in the power output and voltage of photovoltaic generation. Therefore, a study was conducted to address issues in this field of research by quantifying the variability of solar irradiance over a specific area using a uniform grid of 45 irradiance sensors. Another goal of this study was to determine whether fewer measurement stations could be used in the quantification of variability. This thesis addresses these issues by using the Sandia Variability Index and the dead-band ramp algorithm in a statistical analysis of irradiance fluctuations in the regulation and sub-regulation time frames. A kriging method is introduced that accurately predicts variability using only four stations.

  16. Extending ROSAT Light Curves of Ecliptic Pole AGN Formation and Galaxy Evolution

    NASA Technical Reports Server (NTRS)

    Malkan, Matthew A.

    1997-01-01

    In collaboration with UCLA graduate student Fred Baganoff, Professor Malkan has obtained the longest continuous light curves ever available for a large sample (N = 60) of active galactic nuclei. This was accomplished by using the ROSAT All-Sky Survey, which covered the ecliptic pole regions once every 90-minute orbit. Using this Astrophysics Data Processing grant from NASA, we extended these light curves by combining the RASS data with pointed observations over the next several years of operation of the ROSAT PSPC. This lengthens the baselines of about half of the light curves from a few months up to a few years. The proportion of AGN showing variability increases substantially with this improvement. In fact, most AGN in this representative sample are now shown to be significantly variable in the X-rays. We are also able to say something about the amplitudes of variability on timescales from days to years, with more detail than has previously been possible. We have also identified some dependence of the X-ray variability properties on (a) the luminosity of the AGN and (b) the presence of a "Blazar" nucleus. By extending the ROSAT light curves, we are also able to learn more about the correlation of X-ray and optical emission on longer time-scales. It appears to be very weak, at best.

  17. Clarifying the role of mean centring in multicollinearity of interaction effects.

    PubMed

    Shieh, Gwowen

    2011-11-01

    Moderated multiple regression (MMR) is frequently employed to analyse interaction effects between continuous predictor variables. The procedure of mean centring is commonly recommended to mitigate the potential threat of multicollinearity between predictor variables and the constructed cross-product term. Also, centring does typically provide more straightforward interpretation of the lower-order terms. This paper attempts to clarify two methodological issues of potential confusion. First, the positive and negative effects of mean centring on multicollinearity diagnostics are explored. It is illustrated that the mean centring method is, depending on the characteristics of the data, capable of either increasing or decreasing various measures of multicollinearity. Second, the exact reason why mean centring does not affect the detection of interaction effects is given. The explication shows the symmetrical influence of mean centring on the corrected sum of squares and variance inflation factor of the product variable while maintaining the equivalence between the two residual sums of squares for the regression of the product term on the two predictor variables. Thus the resulting test statistic remains unchanged regardless of the obvious modification of multicollinearity with mean centring. These findings provide a clear understanding and demonstration on the diverse impact of mean centring in MMR applications. ©2011 The British Psychological Society.
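
    The invariance result can be checked numerically: mean centring changes the collinearity diagnostics of the predictors and the product term, but the t statistic for the interaction is identical. A sketch with simulated data (the coefficients and sample size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)        # correlated predictors
y = x1 + x2 + x1 * x2 + rng.normal(size=n)

def t_interaction(u, v, y):
    """t statistic of the u*v cross-product term in an MMR model."""
    X = np.column_stack([np.ones(len(y)), u, v, u * v])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[3] / np.sqrt(cov[3, 3])

t_raw = t_interaction(x1, x2, y)
t_centred = t_interaction(x1 - x1.mean(), x2 - x2.mean(), y)
```

    Centring re-expresses the model in a basis spanning the same column space, so the cross-product coefficient and its standard error, and hence the test, are unchanged.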

  18. Long-Acting Injectable Antipsychotics for Schizophrenia: Sociodemographic Characteristics and Treatment Adherence.

    PubMed

    McCreath, James; Larson, Essie; Bharatiya, Purabi; Labanieh, Hisham A; Weiss, Zvi; Lozovatsky, Michael

    2017-02-23

    Long-acting injectable (LAI) antipsychotic medications are widely employed for the treatment of schizophrenia. This study retrospectively assessed the variables that factor into an individual's adherence to LAIs. The data sample was obtained from the adult ambulatory services of a large general hospital mental health center located in Elizabeth, New Jersey. Reports were run in November 2015 to identify patients who had received at least 1 LAI between January 1, 2014, and October 14, 2015. In September 2016, an additional report was run to collect follow-up data. The sample included 120 women and 178 men, ranging in age from 18-81 years, who received at least 1 LAI during a 23-month period. A hazard analysis for single-decrement, nonrepeatable events was used to assess the risk of discontinuation of LAIs during the study period. Separate χ² analyses were conducted to assess differences in discontinuation rates for sociodemographic variables, program type variables, type of long-acting medication, and time effects. The cumulative continuation rate across the study period was 73%. Main effect differences were found in continuation rates for program type (χ²₂ = 10.252, P = .006), LAI type (χ²₅ = 23.365, P < .001), and prescribed frequency of LAI (χ²₂ = 7.622, P = .022). In addition, multiple time-dependent effect differences were found. No significant main effects were found for LAI continuation rates and patient age (χ²₃ = 3.689, P = .297), sex (χ²₁ = 0.904, P = .342), race (χ²₃ = 5.785, P = .123), or enrollment in involuntary outpatient commitment (χ²₁ = 2.989, P = .084). The findings of the current research suggest that medication type, frequency of medication appointments, and program type may be key to increasing and maintaining LAI adherence. © Copyright 2017 Physicians Postgraduate Press, Inc.
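
    The χ² main-effect tests above compare discontinuation frequencies across groups. A generic Pearson chi-square statistic for a contingency table, numpy only (the example table is invented, not the study's data):

```python
import numpy as np

def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for a contingency table."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()        # expected counts under independence
    stat = ((table - expected) ** 2 / expected).sum()
    dof = (table.shape[0] - 1) * (table.shape[1] - 1)
    return stat, dof

# Hypothetical 2x2 table: continued vs discontinued, by program type.
stat, dof = chi_square([[10, 20], [20, 10]])
```

    Here every expected count is 15, so the statistic is 4 × 25/15 = 20/3 ≈ 6.67 on 1 degree of freedom.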

  19. Applying probabilistic well-performance parameters to assessments of shale-gas resources

    USGS Publications Warehouse

    Charpentier, Ronald R.; Cook, Troy

    2010-01-01

    In assessing continuous oil and gas resources, such as shale gas, it is important to describe not only the ultimately producible volumes, but also the expected well performance. This description is critical to any cost analysis or production scheduling. A probabilistic approach facilitates (1) the inclusion of variability in well performance within a continuous accumulation, and (2) the use of data from developed accumulations as analogs for the assessment of undeveloped accumulations. In assessing continuous oil and gas resources of the United States, the U.S. Geological Survey analyzed production data from many shale-gas accumulations. Analyses of four of these accumulations (the Barnett, Woodford, Fayetteville, and Haynesville shales) are presented here as examples of the variability of well performance. For example, the distribution of initial monthly production rates for Barnett vertical wells shows a noticeable change with time, first increasing because of improved completion practices, then decreasing from a combination of decreased reservoir pressure (in infill wells) and drilling in less productive areas. Within a partially developed accumulation, historical production data from that accumulation can be used to estimate production characteristics of undrilled areas. An understanding of the probabilistic relations between variables, such as between initial production and decline rates, can improve estimates of ultimate production. Time trends or spatial trends in production data can be clarified by plots and maps. The data can also be divided into subsets depending on well-drilling or well-completion techniques, such as vertical in relation to horizontal wells. For hypothetical or lightly developed accumulations, one can either make comparisons to a specific well-developed accumulation or to the entire range of available developed accumulations. 
Comparison of the distributions of initial monthly production rates of the four shale-gas accumulations that were studied shows substantial overlap. However, because of differences in decline rates among them, the resulting estimated ultimate recovery (EUR) distributions are considerably different.
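
The probabilistic link between initial production and decline rate can be sketched with a small Monte Carlo model: assuming lognormal initial rates and decline rates with a shared correlating factor, and exponential decline, the estimated ultimate recovery (EUR) of each simulated well is its initial rate divided by its decline rate. All distribution parameters below are hypothetical, not values from the USGS study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wells = 10_000

# Hypothetical lognormal parameters; a shared factor z induces correlation
# between a well's initial rate and its decline rate.
z = rng.standard_normal(n_wells)
qi = np.exp(1.0 + 0.8 * z)                                   # initial monthly rate
di = np.exp(-3.0 + 0.4 * (0.6 * z + 0.8 * rng.standard_normal(n_wells)))  # monthly decline

# For exponential decline q(t) = qi * exp(-di * t), cumulative production
# over all time is EUR = qi / di.
eur = qi / di

# Fractiles summarizing the simulated EUR distribution.
p10, p50, p90 = np.percentile(eur, [10, 50, 90])
```

Dividing the simulated wells into subsets (for example, vertical versus horizontal completions) would correspond to re-running the sketch with different distribution parameters per subset.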

  20. Variables in psychology: a critique of quantitative psychology.

    PubMed

    Toomela, Aaro

    2008-09-01

Mind is hidden from direct observation; it can be studied only by observing behavior. Variables encode information about behaviors. There is no one-to-one correspondence between behaviors and mental events underlying the behaviors, however. In order to understand mind it would be necessary to understand exactly what information is represented in variables. This aim cannot be reached after variables are already encoded. Therefore, statistical data analysis can be very misleading in studies aimed at understanding the mind that underlies behavior. In this article different kinds of information that can be represented in variables are described. It is shown how informational ambiguity of variables leads to problems of theoretically meaningful interpretation of the results of statistical data analysis procedures in terms of hidden mental processes. Reasons are provided why the presence of dependence between variables does not imply a causal relationship between the events they represent, and why the absence of dependence between variables cannot rule out causal dependence between those events. It is concluded that variable-psychology has a very limited range of application for the development of a theory of mind-psychology.

  1. Variability, constraints, and creativity. Shedding light on Claude Monet.

    PubMed

    Stokes, P D

    2001-04-01

    Recent experimental research suggests 2 things. The first is that along with learning how to do something, people also learn how variably or differently to continue doing it. The second is that high variability is maintained by constraining, precluding a currently successful, often repetitive solution to a problem. In this view, Claude Monet's habitually high level of variability in painting was acquired during his childhood and early apprenticeship and was maintained throughout his adult career by a continuous series of task constraints imposed by the artist on his own work. For Monet, variability was rewarded and rewarding.

  2. Ibogaine for treating drug dependence. What is a safe dose?

    PubMed

    Schep, L J; Slaughter, R J; Galea, S; Newcombe, D

    2016-09-01

The indole alkaloid ibogaine, present in the root bark of the West African rain forest shrub Tabernanthe iboga, has been adopted in the West as a treatment for drug dependence. Treatment of patients requires large doses of the alkaloid to cause hallucinations, an alleged integral part of the patient's treatment regime. However, case reports and case series continue to describe evidence of ataxia, gastrointestinal distress, ventricular arrhythmias and sudden and unexplained deaths of patients undergoing treatment for drug dependence. High doses of ibogaine act on several classes of neurological receptors and transporters to achieve pharmacological responses associated with drug aversion; limited toxicology research indicates that intraperitoneal doses used to successfully treat rodents, for example, also cause neuronal injury (Purkinje cells) in the rat cerebellum. Limited research suggests lethality in rodents by the oral route can be achieved at approximately 263 mg/kg body weight. To consider an appropriate and safe initial dose for humans, necessary safety factors need to be applied to the animal data; these would include factors for intra- and inter-species variability and for susceptible people in a population (such as drug users). A calculated initial dose to treat patients could be approximated at 0.87 mg/kg body weight, substantially lower than those presently being administered to treat drug users. Morbidities and mortalities will continue to occur unless practitioners reconsider doses being administered to their susceptible patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
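
The safety-factor arithmetic implied by the abstract can be reproduced as follows. The individual factors (10x interspecies, 10x intraspecies, 3x for susceptible people) are conventional defaults assumed here; the abstract does not state the authors' exact values.

```python
# Sketch of the safety-factor derivation of a human starting dose.
# The individual factors below are conventional defaults and an assumption;
# the authors' exact values are not stated in the abstract.
rodent_oral_lethal_dose = 263.0   # mg/kg body weight (approximate, from rodents)

interspecies_factor = 10          # rodent-to-human variability
intraspecies_factor = 10          # variability among humans
susceptible_factor = 3            # extra margin for susceptible people (e.g. drug users)

safe_initial_dose = rodent_oral_lethal_dose / (
    interspecies_factor * intraspecies_factor * susceptible_factor
)
print(round(safe_initial_dose, 2))  # ~0.88 mg/kg, close to the reported 0.87
```

With these assumed factors the combined divisor is 300, which lands within rounding distance of the 0.87 mg/kg figure reported above.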

  3. Effects of local and widespread muscle fatigue on movement timing.

    PubMed

    Cowley, Jeffrey C; Dingwell, Jonathan B; Gates, Deanna H

    2014-12-01

    Repetitive movements can cause muscle fatigue, leading to motor reorganization, performance deficits, and/or possible injury. The effects of fatigue may depend on the type of fatigue task employed, however. The purpose of this study was to determine how local fatigue of a specific muscle group versus widespread fatigue of various muscle groups affected the control of movement timing. Twenty healthy subjects performed an upper extremity low-load work task similar to sawing for 5 continuous minutes both before and after completing a protocol that either fatigued all the muscles used in the task (widespread fatigue) or a protocol that selectively fatigued the primary muscles used to execute the pushing stroke of the sawing task (localized fatigue). Subjects were instructed to time their movements with a metronome. Timing error, movement distance, and speed were calculated for each movement. Data were then analyzed using a goal-equivalent manifold approach to quantify changes in goal-relevant and non-goal-relevant variability. We applied detrended fluctuation analysis to each time series to quantify changes in fluctuation dynamics that reflected changes in the control strategies used. After localized fatigue, subjects made shorter, slower movements and exerted greater control over non-goal-relevant variability. After widespread fatigue, subjects exerted less control over non-goal-relevant variability and did not change movement patterns. Thus, localized and widespread muscle fatigue affected movement differently. Local fatigue may reduce the available motor solutions and therefore cause greater movement reorganization than widespread muscle fatigue. Subjects altered their control strategies but continued to achieve the timing goal after both fatigue tasks.
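
Detrended fluctuation analysis, applied above to each timing series, can be sketched in a few lines; the window sizes and linear detrending order here are assumptions, as the study's exact settings are not given in the abstract.

```python
import numpy as np

def dfa_alpha(x, scales=(16, 32, 64, 128, 256)):
    """Detrended fluctuation analysis: scaling exponent alpha of series x."""
    y = np.cumsum(x - np.mean(x))              # integrated (profile) series
    flucts = []
    for s in scales:
        f2 = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend per window
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))    # RMS fluctuation at this scale
    # alpha is the slope of log F(s) against log s
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(1)
alpha_noise = dfa_alpha(rng.standard_normal(4096))  # uncorrelated noise: alpha near 0.5
```

Values of alpha near 0.5 indicate uncorrelated fluctuations, while persistent (long-range correlated) timing series give larger exponents, which is how changes in control strategy show up in the fluctuation dynamics.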

  4. District nursing workforce planning: a review of the methods.

    PubMed

    Reid, Bernie; Kane, Kay; Curran, Carol

    2008-11-01

District nursing services in Northern Ireland face increasing demands and challenges which may be responded to by effective and efficient workforce planning and development. The aim of this paper is to critically analyse district nursing workforce planning and development methods, in an attempt to find a suitable method for Northern Ireland. A systematic analysis of the literature reveals four methods: professional judgement; population-based health needs; caseload analysis and dependency-acuity. Each method has strengths and weaknesses. Professional judgement offers a 'belt and braces' approach but lacks sensitivity to fluctuating patient numbers. Population-based health needs methods develop staffing algorithms that reflect deprivation and geographical spread, but are poorly understood by district nurses. Caseload analysis promotes equitable workloads but poorly performing district nursing localities may continue if benchmarking processes only consider local data. Dependency-acuity methods provide a means of equalizing and prioritizing workload but are prone to district nurses overstating factors in patient dependency or understating carers' capability. In summary, a mixed-method approach is advocated to evaluate and adjust the size and mix of district nursing teams using empirically determined patient dependency and activity-based variables based on the population's health needs.

  5. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
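
The finite-mixture idea behind GPT models can be illustrated with a minimal two-state sketch: a processing-tree parameter p weights two Gaussian response-time components, and p is recovered by expectation-maximization. All distributions and parameter values are hypothetical, and the component parameters are held fixed for brevity rather than estimated jointly as a full GPT model would.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-state tree: with probability p a response comes from a
# "fast" state, otherwise from a "slow" state; response times are Gaussian
# within each state (all parameter values hypothetical).
p_true = 0.7
mu = np.array([500.0, 800.0])   # component means (ms)
sd = np.array([80.0, 120.0])    # component standard deviations (ms)

n = 5_000
state = (rng.random(n) >= p_true).astype(int)   # 0 = fast, 1 = slow
rt = rng.normal(mu[state], sd[state])           # observed response times

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# EM for the mixture weight, with component parameters held fixed.
p_hat = 0.5
for _ in range(200):
    w = p_hat * normal_pdf(rt, mu[0], sd[0])                 # E-step: responsibilities
    w = w / (w + (1.0 - p_hat) * normal_pdf(rt, mu[1], sd[1]))
    p_hat = w.mean()                                         # M-step: update weight
```

In a full GPT model the mixture weights would be products of tree-branch probabilities and the Gaussian parameters could be shared or separate across states; this sketch shows only the core finite-mixture machinery.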

  6. Equivalence between entanglement and the optimal fidelity of continuous variable teleportation.

    PubMed

    Adesso, Gerardo; Illuminati, Fabrizio

    2005-10-07

    We devise the optimal form of Gaussian resource states enabling continuous-variable teleportation with maximal fidelity. We show that a nonclassical optimal fidelity of N-user teleportation networks is necessary and sufficient for N-party entangled Gaussian resources, yielding an estimator of multipartite entanglement. The entanglement of teleportation is equivalent to the entanglement of formation in a two-user protocol, and to the localizable entanglement in a multiuser one. Finally, we show that the continuous-variable tangle, quantifying entanglement sharing in three-mode Gaussian states, is defined operationally in terms of the optimal fidelity of a tripartite teleportation network.

  7. Invited Article: Generation of one-million-mode continuous-variable cluster state by unlimited time-domain multiplexing

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Jun-ichi; Yokoyama, Shota; Kaji, Toshiyuki; Sornphiphatphong, Chanond; Shiozawa, Yu; Makino, Kenzo; Furusawa, Akira

    2016-09-01

In recent quantum optical continuous-variable experiments, the number of fully inseparable light modes has drastically increased by introducing a multiplexing scheme either in the time domain or in the frequency domain. Here, modifying the time-domain multiplexing experiment reported in the work of Yokoyama et al. [Nat. Photonics 7, 982 (2013)], we demonstrate the successive generation of fully inseparable light modes for more than one million modes. The resulting multi-mode state is useful as a dual-rail continuous-variable cluster state. We circumvent the previous problem of optical phase drifts, which has limited the number of fully inseparable light modes to around ten thousand, by continuous feedback control of the optical system.

  8. The Mediating Roles of Internal Context Variables in the Relationship between Distributed Leadership Perceptions and Continuous Change Behaviours of Public School Teachers

    ERIC Educational Resources Information Center

    Kondakci, Yasar; Zayim, Merve; Beycioglu, Kadir; Sincar, Mehmet; Ugurlu, Celal T

    2016-01-01

    This study aims at building a theoretical base for continuous change in education and using this base to test the mediating roles of two key contextual variables, knowledge sharing and trust, in the relationship between the distributed leadership perceptions and continuous change behaviours of teachers. Data were collected from 687 public school…

  9. ADHD and the externalizing spectrum: direct comparison of categorical, continuous, and hybrid models of liability in a nationally representative sample.

    PubMed

    Carragher, Natacha; Krueger, Robert F; Eaton, Nicholas R; Markon, Kristian E; Keyes, Katherine M; Blanco, Carlos; Saha, Tulshi D; Hasin, Deborah S

    2014-08-01

Alcohol use disorders, substance use disorders, and antisocial personality disorder share a common externalizing liability, which may also include attention-deficit hyperactivity disorder (ADHD). However, few studies have compared formal quantitative models of externalizing liability, with the aim of delineating the categorical and/or continuous nature of this liability in the community. This study compares categorical, continuous, and hybrid models of externalizing liability. Data were derived from the 2004-2005 National Epidemiologic Survey on Alcohol and Related Conditions (N = 34,653). Seven disorders were modeled: childhood ADHD and lifetime diagnoses of antisocial personality disorder (ASPD), nicotine dependence, alcohol dependence, marijuana dependence, cocaine dependence, and other substance dependence. The continuous latent trait model provided the best fit to the data. Measurement invariance analyses supported the fit of the model across genders, with females displaying a significantly lower probability of experiencing externalizing disorders. Cocaine dependence, marijuana dependence, other substance dependence, alcohol dependence, ASPD, nicotine dependence, and ADHD provided the most information about the underlying externalizing continuum, in that order. Liability to externalizing disorders is continuous and dimensional in severity. The findings have important implications for the organizational structure of externalizing psychopathology in psychiatric nomenclatures.

  10. Health status: does it predict choice in further education?

    PubMed Central

    Koivusilta, L; Rimpelä, A; Rimpelä, M

    1995-01-01

STUDY OBJECTIVE--To study the significance of a young person's health to his or her choice of further education at age 16. DESIGN--A cross sectional population survey. SETTING--The whole of Finland. PARTICIPANTS--A representative sample of 2977 Finnish 16 year olds. The response rate was 83%. MEASUREMENTS AND MAIN RESULTS--The three outcome variables reflected successive steps on the way to educational success: school attendance after the completion of compulsory schooling, the type of school, and school achievement for those at school. Continuing their education and choosing upper secondary school were most typical of young people from upper social classes. Female gender and living with both parents increased the probability of choosing to go on to upper secondary school. Over and above these background variables, some health factors had additional explanatory power. Continuing their education, attending upper secondary schools, and good achievement were typical of those who considered their health to be good. Chronically ill adolescents were more likely to continue their education than the healthy ones. CONCLUSIONS--School imposes great demands on young people, thus revealing differences in personal health resources. Adaptation to the norms of a society in which education is highly valued is related to satisfactory health status. In a welfare state that offers equal educational opportunities for everyone, however, chronically ill adolescents can add to their resources for coping through schooling. Health related selection thus works differently for various indicators of health and in various kinds of societies. Social class differences in health in the future may be more dependent on personally experienced health problems than on medically diagnosed diseases. PMID:7798039

  11. Environmental variability and population dynamics: Do European and North American ducks play by the same rules?

    USGS Publications Warehouse

    Pöysä, Hannu; Rintala, Jukka; Johnson, Douglas H.; Kauppinen, Jukka; Lammi, Esa; Nudds, Thomas D.; Väänänen, Veli-Matti

    2016-01-01

Density dependence, population regulation, and variability in population size are fundamental population processes, the manifestation and interrelationships of which are affected by environmental variability. However, there are surprisingly few empirical studies that distinguish the effect of environmental variability from the effects of population processes. We took advantage of a unique system, in which populations of the same duck species or close ecological counterparts live in highly variable (North American prairies) and in stable (north European lakes) environments, to distinguish the relative contributions of environmental variability (measured as between-year fluctuations in wetland numbers) and intraspecific interactions (density dependence) in driving population dynamics. We tested whether populations living in stable environments (in northern Europe) were more strongly governed by density dependence than populations living in variable environments (in North America). We also addressed whether relative population dynamical responses to environmental variability versus density corresponded to differences in life history strategies between dabbling (relatively “fast species” and governed by environmental variability) and diving (relatively “slow species” and governed by density) ducks. As expected, the variance component of population fluctuations caused by changes in breeding environments was greater in North America than in Europe. Contrary to expectations, however, populations in more stable environments were neither less variable nor clearly more strongly density dependent than populations in highly variable environments. Also, contrary to expectations, populations of diving ducks were neither more stable nor more strongly density dependent than populations of dabbling ducks, and the effect of environmental variability on population dynamics was greater in diving than in dabbling ducks.
In general, irrespective of continent and species life history, environmental variability contributed more to variation in species abundances than did density. Our findings underscore the need for more studies on populations of the same species in different environments to verify the generality of current explanations about population dynamics and its association with species life history.

  12. Environmental variability and population dynamics: do European and North American ducks play by the same rules?

    PubMed

    Pöysä, Hannu; Rintala, Jukka; Johnson, Douglas H; Kauppinen, Jukka; Lammi, Esa; Nudds, Thomas D; Väänänen, Veli-Matti

    2016-10-01

Density dependence, population regulation, and variability in population size are fundamental population processes, the manifestation and interrelationships of which are affected by environmental variability. However, there are surprisingly few empirical studies that distinguish the effect of environmental variability from the effects of population processes. We took advantage of a unique system, in which populations of the same duck species or close ecological counterparts live in highly variable (North American prairies) and in stable (north European lakes) environments, to distinguish the relative contributions of environmental variability (measured as between-year fluctuations in wetland numbers) and intraspecific interactions (density dependence) in driving population dynamics. We tested whether populations living in stable environments (in northern Europe) were more strongly governed by density dependence than populations living in variable environments (in North America). We also addressed whether relative population dynamical responses to environmental variability versus density corresponded to differences in life history strategies between dabbling (relatively "fast species" and governed by environmental variability) and diving (relatively "slow species" and governed by density) ducks. As expected, the variance component of population fluctuations caused by changes in breeding environments was greater in North America than in Europe. Contrary to expectations, however, populations in more stable environments were neither less variable nor clearly more strongly density dependent than populations in highly variable environments. Also, contrary to expectations, populations of diving ducks were neither more stable nor more strongly density dependent than populations of dabbling ducks, and the effect of environmental variability on population dynamics was greater in diving than in dabbling ducks.
In general, irrespective of continent and species life history, environmental variability contributed more to variation in species abundances than did density. Our findings underscore the need for more studies on populations of the same species in different environments to verify the generality of current explanations about population dynamics and its association with species life history.

  13. Study protocol: a dose-escalating, phase-2 study of oral lisdexamfetamine in adults with methamphetamine dependence.

    PubMed

    Ezard, Nadine; Dunlop, Adrian; Clifford, Brendan; Bruno, Raimondo; Carr, Andrew; Bissaker, Alexandra; Lintzeris, Nicholas

    2016-12-01

The treatment of methamphetamine dependence is a continuing global health problem. Agonist-type pharmacotherapies have been used successfully to treat opioid and nicotine dependence and are being studied for the treatment of methamphetamine dependence. One potential candidate is lisdexamfetamine, a pro-drug for dexamphetamine, which has a longer-lasting therapeutic action with a lowered abuse potential. The purpose of this study is to determine the safety of lisdexamfetamine in this population at doses higher than those currently approved for attention deficit hyperactivity disorder or binge eating disorder. This is a phase 2 dose escalation study of lisdexamfetamine for the treatment of methamphetamine dependence. Twenty individuals seeking treatment for methamphetamine dependence will be recruited at two Australian drug and alcohol services. All participants will undergo a single-blinded ascending-descending dose regime of 100 to 250 mg lisdexamfetamine, dispensed daily on site, over an 8-week period. Participants will be offered counselling as standard care. For the primary objectives the outcome variables will be adverse events monitoring, drug tolerability and regimen completion. Secondary outcomes will be changes in methamphetamine use, craving, withdrawal, severity of dependence, risk behaviour and other substance use. Medication acceptability, potential for non-prescription use, adherence and changes in neurocognition will also be measured. Determining the safety of lisdexamfetamine will enable further research to develop pharmacotherapies for the treatment of methamphetamine dependence. Australian and New Zealand Clinical Trials Registry: ACTRN12615000391572. Registered 28th April 2015.

  14. Density dependence in demography and dispersal generates fluctuating invasion speeds

    PubMed Central

    Li, Bingtuan; Miller, Tom E. X.

    2017-01-01

    Density dependence plays an important role in population regulation and is known to generate temporal fluctuations in population density. However, the ways in which density dependence affects spatial population processes, such as species invasions, are less understood. Although classical ecological theory suggests that invasions should advance at a constant speed, empirical work is illuminating the highly variable nature of biological invasions, which often exhibit nonconstant spreading speeds, even in simple, controlled settings. Here, we explore endogenous density dependence as a mechanism for inducing variability in biological invasions with a set of population models that incorporate density dependence in demographic and dispersal parameters. We show that density dependence in demography at low population densities—i.e., an Allee effect—combined with spatiotemporal variability in population density behind the invasion front can produce fluctuations in spreading speed. The density fluctuations behind the front can arise from either overcompensatory population growth or density-dependent dispersal, both of which are common in nature. Our results show that simple rules can generate complex spread dynamics and highlight a source of variability in biological invasions that may aid in ecological forecasting. PMID:28442569
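
A minimal sketch of the mechanism described above: overcompensatory (Ricker) growth with a crude Allee cutoff, plus nearest-neighbour dispersal on a one-dimensional lattice, produces a front whose per-generation advance can vary. All parameter values are hypothetical, and this is a toy model rather than the authors' formulation.

```python
import numpy as np

def step(n, r=2.2, allee=0.05, d=0.25):
    """One generation: Ricker growth with an Allee cutoff, then dispersal."""
    g = n * np.exp(r * (1.0 - n))   # overcompensatory growth (Ricker map)
    g[n < allee] = 0.0              # Allee effect: sparse populations fail
    out = (1.0 - d) * g             # a fraction d disperses to the two neighbours
    out[1:] += 0.5 * d * g[:-1]
    out[:-1] += 0.5 * d * g[1:]
    return out

n = np.zeros(400)
n[:5] = 1.0                         # founding population at the left edge
fronts = []
for _ in range(100):
    n = step(n)
    fronts.append(np.nonzero(n > 0.05)[0].max())   # rightmost occupied cell

speeds = np.diff(fronts)            # per-generation spreading speed
```

Because the front cell's density oscillates, the propagule arriving just beyond the front sometimes falls below the Allee cutoff and the front stalls for a generation, which is the kind of density-driven speed fluctuation the abstract describes.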

  15. A multi-level analysis of counselor attitudes toward the use of buprenorphine in substance abuse treatment.

    PubMed

    Rieckmann, Traci R; Kovas, Anne E; McFarland, Bentson H; Abraham, Amanda J

    2011-12-01

    Despite evidence that buprenorphine is effective and safe and offers greater access as compared with methadone, implementation for treatment of opiate dependence continues to be weak. Research indicates that legal and regulatory factors, state policies, and organizational and provider variables affect adoption of buprenorphine. This study uses hierarchical linear modeling to examine National Treatment Center Study data to identify counselor characteristics (attitudes, training, and beliefs) and organizational factors (accreditation, caseload, access to buprenorphine, and other evidence-based practices) that influence implementation of buprenorphine for treatment of opiate dependence. Analyses showed that provider training about buprenorphine, higher prevalence of opiate-dependent clients, and less treatment program emphasis on a 12-step model predicted greater counselor acceptance and perceived effectiveness of buprenorphine. Results also indicate that program use of buprenorphine for any treatment purpose (detoxification, maintenance, and/or pain management) and time (calendar year in data collection) was associated with increased diffusion of knowledge about buprenorphine among counselors and with more favorable counselor attitudes toward buprenorphine. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Counselor Attitudes toward the Use of Buprenorphine in Substance Abuse Treatment: A Multi-level Modeling Approach

    PubMed Central

    Kovas, Anne E.; McFarland, Bentson H.; Abraham, Amanda J.

    2012-01-01

    In spite of evidence that buprenorphine is effective, safe, and offers greater access as compared with methadone, implementation for treatment of opiate dependence continues to be weak. Research indicates that legal and regulatory factors, state policies, and organizational and provider variables affect adoption of buprenorphine. This study uses hierarchical linear modeling (HLM) to examine National Treatment Center Study (NTCS) data to identify counselor characteristics (attitudes, training, beliefs) and organizational factors (accreditation, caseload, access to buprenorphine and other evidence-based practices) that influence implementation of buprenorphine for treatment of opiate dependence. Analyses showed that provider training about buprenorphine, higher prevalence of opiate dependent clients, and less treatment program emphasis on a 12-step model predicted greater counselor acceptance and perceived effectiveness of buprenorphine. Results also indicate that program use of buprenorphine for any treatment purpose (detoxification, maintenance, and/or pain management) and time (calendar year in data collection) were associated with increased diffusion of knowledge about buprenorphine among counselors and with more favorable counselor attitudes toward buprenorphine. PMID:21821379

  17. A Computer Program for Preliminary Data Analysis

    Treesearch

    Dennis L. Schweitzer

    1967-01-01

    ABSTRACT. -- A computer program written in FORTRAN has been designed to summarize data. Class frequencies, means, and standard deviations are printed for as many as 100 independent variables. Cross-classifications of an observed dependent variable and of a dependent variable predicted by a multiple regression equation can also be generated.
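
The kind of preliminary summary the program produces (class frequencies, means, standard deviations, and a cross-classification of observed against regression-predicted values of the dependent variable) can be sketched in a few lines. The data and regression model here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(10.0, 2.0, 200)                    # an independent variable
y = 3.0 + 0.5 * x + rng.normal(0.0, 1.0, 200)     # observed dependent variable

freq, edges = np.histogram(x, bins=5)             # class frequencies
mean, sd = x.mean(), x.std(ddof=1)                # per-variable summaries

# Least-squares fit, then a cross-classification of observed against
# regression-predicted values of the dependent variable.
slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x
table, _, _ = np.histogram2d(y, y_hat, bins=5)    # 5 x 5 cross-classification
```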

  18. Interpreting incremental value of markers added to risk prediction models.

    PubMed

    Pencina, Michael J; D'Agostino, Ralph B; Pencina, Karol M; Janssens, A Cecile J W; Greenland, Philip

    2012-09-15

    The discrimination of a risk prediction model measures that model's ability to distinguish between subjects with and without events. The area under the receiver operating characteristic curve (AUC) is a popular measure of discrimination. However, the AUC has recently been criticized for its insensitivity in model comparisons in which the baseline model has performed well. Thus, 2 other measures have been proposed to capture improvement in discrimination for nested models: the integrated discrimination improvement and the continuous net reclassification improvement. In the present study, the authors use mathematical relations and numerical simulations to quantify the improvement in discrimination offered by candidate markers of different strengths as measured by their effect sizes. They demonstrate that the increase in the AUC depends on the strength of the baseline model, which is true to a lesser degree for the integrated discrimination improvement. On the other hand, the continuous net reclassification improvement depends only on the effect size of the candidate variable and its correlation with other predictors. These measures are illustrated using the Framingham model for incident atrial fibrillation. The authors conclude that the increase in the AUC, integrated discrimination improvement, and net reclassification improvement offer complementary information and thus recommend reporting all 3 alongside measures characterizing the performance of the final model.
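
The continuous net reclassification improvement discussed above depends only on whether each subject's predicted risk moves up or down when the candidate marker is added. A minimal simulated sketch (the cohort, effect sizes, and data-generating model are all hypothetical, and the true coefficients are used directly rather than fitted):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated cohort with a baseline predictor and a candidate marker
# (all effect sizes hypothetical).
n = 20_000
base = rng.standard_normal(n)
marker = rng.standard_normal(n)
logit_new = -2.0 + 1.0 * base + 0.5 * marker
event = rng.random(n) < 1.0 / (1.0 + np.exp(-logit_new))

# Predicted risks from the baseline model and from the model with the marker.
risk_old = 1.0 / (1.0 + np.exp(-(-2.0 + 1.0 * base)))
risk_new = 1.0 / (1.0 + np.exp(-logit_new))

# Continuous NRI = [P(up | event) - P(down | event)]
#                + [P(down | non-event) - P(up | non-event)]
up = risk_new > risk_old
nri = (np.mean(up[event]) - np.mean(~up[event])) \
    + (np.mean(~up[~event]) - np.mean(up[~event]))
```

A useful marker moves events up and non-events down, so both bracketed terms are positive; a useless marker leaves each near zero, consistent with the claim that the continuous NRI depends on the candidate variable's effect size rather than on baseline model strength.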

  19. Lateral vegetation growth rates exert control on coastal foredune hummockiness and coalescing time

    NASA Astrophysics Data System (ADS)

    Goldstein, Evan B.; Moore, Laura J.; Durán Vinent, Orencio

    2017-08-01

    Coastal foredunes form along sandy, low-sloped coastlines and range in shape from continuous dune ridges to hummocky features, which are characterized by alongshore-variable dune crest elevations. Initially scattered dune-building plants and species that grow slowly in the lateral direction have been implicated as a cause of foredune hummockiness. Our goal in this work is to explore how the initial configuration of vegetation and vegetation growth characteristics control the development of hummocky coastal dunes including the maximum hummockiness of a given dune field. We find that given sufficient time and absent external forcing, hummocky foredunes coalesce to form continuous dune ridges. Model results yield a predictive rule for the timescale of coalescing and the height of the coalesced dune that depends on initial plant dispersal and two parameters that control the lateral and vertical growth of vegetation, respectively. Our findings agree with previous observational and conceptual work - whether or not hummockiness will be maintained depends on the timescale of coalescing relative to the recurrence interval of high-water events that reset dune building in low areas between hummocks. Additionally, our model reproduces the observed tendency for foredunes to be hummocky along the southeast coast of the US where lateral vegetation growth rates are slower and thus coalescing times are likely longer.

  20. Problem decomposition by mutual information and force-based clustering

    NASA Astrophysics Data System (ADS)

    Otero, Richard Edward

    The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time to analyze complex problems. Such decomposition methods could also provide engineering insight on the fundamental physics driving problem solution. This work forwards the current state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. The work describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice. Mutual information is a novel metric for data dependence and works on both continuous and discrete data. Mutual information can measure both the linear and non-linear dependence between variables without the limitations of linear dependence measured through covariance. Mutual information is also able to handle data that does not have derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem. This study utilizes a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem. 
Mutual information also serves as the basis for an alternative global optimizer, called MIMIC, which is unrelated to Genetic Algorithms. This work advances current practice by demonstrating MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternative way to search multi-modal domains globally. By leveraging discovered problem interdependencies, MIMIC may be appropriate for highly coupled problems or those with large function-evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision-tree generation methods from machine learning and a set of randomly generated test problems, decision trees for which method to apply are also created, quantifying decomposition performance over a large region of the design space.
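The dissertation's core premise, that mutual information detects non-linear dependence which covariance misses, can be illustrated numerically. The histogram estimator below is a generic plug-in estimate on synthetic data, not the thesis's implementation:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Plug-in histogram estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100_000)
y = x**2 + 0.05 * rng.normal(size=x.size)  # non-linear, near-zero covariance

corr = np.corrcoef(x, y)[0, 1]   # covariance-based measure: near zero
mi = mutual_information(x, y)    # mutual information: clearly positive
```

On this quadratic relationship the sample correlation is near zero while the estimated mutual information is clearly positive, which is exactly the gap the abstract points to.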

  1. Refractoriness in Sustained Visuo-Manual Control: Is the Refractory Duration Intrinsic or Does It Depend on External System Properties?

    PubMed Central

    van de Kamp, Cornelis; Gawthrop, Peter J.; Gollee, Henrik; Loram, Ian D.

    2013-01-01

Researchers have previously adopted the double-stimulus paradigm to study refractoriness in human neuromotor control. To date, refractoriness, such as the Psychological Refractory Period (PRP), has been quantified only in discrete movement conditions. Whether refractoriness and the associated serial ballistic hypothesis generalise to sustained control tasks has remained an open question for more than sixty years. Recently, a method of analysis has been presented that quantifies refractoriness in sustained control tasks and discriminates intermittent (serial ballistic) from continuous control. Following our recent demonstration that continuous control of an unstable second-order system (i.e. balancing a 'virtual' inverted pendulum through a joystick interface) is unnecessary, we ask whether refractoriness of substantial duration (∼200 ms) is evident in sustained visuo-manual control of external systems. We ask whether the refractory duration (i) is physiologically intrinsic, (ii) depends upon system properties such as order (0, 1st, and 2nd) or stability, or (iii) depends upon target-jump direction (reversal vs. same direction). Thirteen participants used discrete movements (zero-order system) as well as more sustained control activity (1st- and 2nd-order systems) to track unpredictable step-sequence targets. Results show a substantial refractory duration that depends upon system order (250, 350 and 550 ms for 0, 1st and 2nd order, respectively; n = 13, p<0.05), but not on stability. In sustained control, refractoriness was found only when the target reversed direction. In the presence of time-varying actuators, systems and constraints, we propose that central refractoriness is an appropriate control mechanism for accommodating online optimization delays within the neural circuitry, including the more variable processing times of higher-order (complex) input-output relations. PMID:23300430

  2. Regulation in work and decision-making in the activity of public prosecutors in Santa Catarina, Brazil.

    PubMed

    de Azevedo, Beatriz Marcondes; Cruz, Roberto Moraes

    2012-01-01

The aim of this study was to characterize the relationship between regulation at work and decision processes in the activity of Prosecutors in Santa Catarina (SC). It starts from the assumption that decision-making and regulation are complex phenomena of conduct at work, since the worker continuously makes micro and macro decisions based on a set of regulations, influenced by contingency and personal variables. Four Prosecutors participated in this descriptive, exploratory case study. For data collection, documents were analyzed, the workplace was observed, and key personnel of the institution were interviewed in order to identify macro and micro organizational factors; an Ergonomic Analysis of Work was also used as a data-collection technique. It was found that the work of the Prosecutor comprises a set of activities carried out on the basis of coordination and cooperation in a dynamic and unstable environment. The prosecutor's activity, in addition to being the full expression of the basic psychological processes of service work, is embedded in a context which, in part, depends on, and therefore encourages and requires, choices and referrals by employees, demanding the demonstration of skills and modulating their operative mode. Processing depends on idiosyncrasies and the force of circumstances, thus creating a brand, a unique personal style in the work. It is inferred that these are dialectical processes, since workers regulate in order to decide and decide because they are regulated. Moreover, the regular employee builds micro decisions that underpin an effective decision. Thus, the greater the variability of regulation, the greater the variability of decisions.

  3. Concurrent generation of multivariate mixed data with variables of dissimilar types.

    PubMed

    Amatya, Anup; Demirtas, Hakan

    2016-01-01

Data sets originating from a wide range of research studies are composed of multiple correlated variables of dissimilar types, primarily count, binary/ordinal and continuous attributes. The present paper builds on previous work on multivariate data generation and develops a framework for generating multivariate mixed data with a pre-specified correlation matrix. The generated data consist of components that are marginally count, binary, ordinal and continuous, where the count and continuous variables follow the generalized Poisson and normal distributions, respectively. The use of the generalized Poisson distribution provides a flexible mechanism that accommodates the under- and over-dispersed count variables generally encountered in practice. A step-by-step algorithm is provided and its performance is evaluated using simulated and real-data scenarios.
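The abstract does not reproduce the algorithm itself; a common way to realize such a framework is a NORTA-style construction, in which correlated latent normals are pushed through inverse marginal CDFs. The sketch below simplifies the paper's method: scipy has no generalized Poisson distribution, so an ordinary Poisson stands in, and the achieved correlations only approximate the latent matrix R:

```python
import numpy as np
from scipy import stats

# Latent correlation matrix for the three variables (illustrative values)
R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])

rng = np.random.default_rng(42)
z = rng.multivariate_normal(np.zeros(3), R, size=50_000)  # correlated normals
u = stats.norm.cdf(z)                                     # correlated uniforms

count = stats.poisson.ppf(u[:, 0], mu=3.0)   # count marginal (Poisson stand-in)
binary = (u[:, 1] > 0.6).astype(int)         # binary marginal, P(1) = 0.4
cont = 10.0 + 2.0 * z[:, 2]                  # continuous normal(10, 2) marginal
```

Because each transform is monotone in its latent normal, the generated columns inherit (approximately) the ordering of the correlations in R while keeping their specified marginal distributions.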

  4. Design and dynamic simulation of a fixed pitch 56 kW wind turbine drive train with a continuously variable transmission

    NASA Technical Reports Server (NTRS)

    Gallo, C.; Kasuba, R.; Pintz, A.; Spring, J.

    1986-01-01

    The dynamic analysis of a horizontal axis fixed pitch wind turbine generator (WTG) rated at 56 kW is discussed. A mechanical Continuously Variable Transmission (CVT) was incorporated in the drive train to provide variable speed operation capability. One goal of the dynamic analysis was to determine if variable speed operation, by means of a mechanical CVT, is capable of capturing the transient power in the WTG/wind environment. Another goal was to determine the extent of power regulation possible with CVT operation.

  5. Computer Simulations and Literature Survey of Continuously Variable Transmissions for Use in Buses

    DOT National Transportation Integrated Search

    1981-12-01

    Numerous studies have been conducted on the concept of flywheel energy storage for buses. Flywheel systems require a continuously variable transmission (CVT) of some type to transmit power between the flywheel and the drive wheels. However, a CVT can...

  6. Implementation of continuous-variable quantum key distribution with composable and one-sided-device-independent security against coherent attacks.

    PubMed

    Gehring, Tobias; Händchen, Vitus; Duhme, Jörg; Furrer, Fabian; Franz, Torsten; Pacher, Christoph; Werner, Reinhard F; Schnabel, Roman

    2015-10-30

Secret communication over public channels is one of the central pillars of a modern information society. Using quantum key distribution this is achieved without relying on the hardness of mathematical problems, which might be compromised by improved algorithms or by future quantum computers. State-of-the-art quantum key distribution requires composable security against coherent attacks for a finite number of distributed quantum states as well as robustness against implementation side channels. Here we present an implementation of continuous-variable quantum key distribution satisfying these requirements. Our implementation is based on the distribution of continuous-variable Einstein-Podolsky-Rosen entangled light. It is one-sided device independent, which means the security of the generated key is independent of any memory-free attacks on the remote detector. Since continuous-variable encoding is compatible with conventional optical communication technology, our work is a step towards practical implementations of quantum key distribution with state-of-the-art security based solely on telecom components.

  7. Implementation of continuous-variable quantum key distribution with composable and one-sided-device-independent security against coherent attacks

    PubMed Central

    Gehring, Tobias; Händchen, Vitus; Duhme, Jörg; Furrer, Fabian; Franz, Torsten; Pacher, Christoph; Werner, Reinhard F.; Schnabel, Roman

    2015-01-01

Secret communication over public channels is one of the central pillars of a modern information society. Using quantum key distribution this is achieved without relying on the hardness of mathematical problems, which might be compromised by improved algorithms or by future quantum computers. State-of-the-art quantum key distribution requires composable security against coherent attacks for a finite number of distributed quantum states as well as robustness against implementation side channels. Here we present an implementation of continuous-variable quantum key distribution satisfying these requirements. Our implementation is based on the distribution of continuous-variable Einstein–Podolsky–Rosen entangled light. It is one-sided device independent, which means the security of the generated key is independent of any memory-free attacks on the remote detector. Since continuous-variable encoding is compatible with conventional optical communication technology, our work is a step towards practical implementations of quantum key distribution with state-of-the-art security based solely on telecom components. PMID:26514280

  8. Contrasting gender differences on two measures of exercise dependence.

    PubMed

    Weik, M; Hale, B D

    2009-03-01

Recent studies using multidimensional measures have shown that men (Exercise Dependence Scale; EDS-R) are more exercise-dependent than women, whereas others have found that women (Exercise Dependence Questionnaire; EDQ) are more exercise-dependent than men. This study investigated whether there are sex differences in exercise dependence or whether the questionnaires measure different dimensions of exercise dependence. Regular exercisers voluntarily completed the EDS-R, EDQ and Drive for Thinness (DFT) subscale before or after a workout at a local health club in the eastern USA. Male (n = 102) and female (n = 102) exercisers completed the three questionnaires, but 11 participants (1 man, 10 women) were excluded from further analysis because their scores indicated possible secondary exercise dependence (an eating disorder). Eight subscales of the EDQ, seven subscales of the EDS, the DFT subscale, and several demographic variables served as dependent measures. A multivariate analysis of variance (MANOVA) on the EDS-R showed that men scored significantly higher than women on the Withdrawal, Continuance, Tolerance, Lack of Control, Time, and Intention Effect subscales. Another MANOVA on the EDQ indicated that women scored significantly higher than men on the Interference, Positive Rewards, Withdrawal, and Social Reasons subscales. Analysis using t tests revealed that men had significantly higher total EDS-R scores than women, whereas women had significantly higher EDQ and DFT scores. These results suggest that the two questionnaires measure different aspects of exercise dependence that favour one gender or the other. It remains for further research to determine whether these instruments are equally viable for measuring exercise dependence in both men and women.

  9. Dissociable Effects of Cocaine Dependence on Reward Processes: The Role of Acute Cocaine and Craving.

    PubMed

    Rose, Emma Jane; Salmeron, Betty Jo; Ross, Thomas J; Waltz, James; Schweitzer, Julie B; Stein, Elliot A

    2017-02-01

    The relative impact of chronic vs acute cocaine on dependence-related variability in reward processing in cocaine-dependent individuals (CD) is not well understood, despite the relevance of such effects to long-term outcomes. To dissociate these effects, CD (N=15) and healthy controls (HC; N=15) underwent MRI two times while performing a monetary incentive delay task. Both scans were identical across subjects/groups, except that, in a single-blind, counterbalanced design, CD received intravenous cocaine (30 mg/70 kg) before one session (CD+cocaine) and saline in another (CD+saline). Imaging analyses focused on activity related to anticipatory valence (gain/loss), anticipatory magnitude (small/medium/large), and reinforcing outcomes (successful/unsuccessful). Drug condition (cocaine vs saline) and group (HC vs CD+cocaine or CD+saline) did not influence valence-related activity. However, compared with HC, magnitude-related activity for gains was reduced in CD in the left cingulate gyrus post-cocaine and in the left putamen in the abstinence/saline condition. In contrast, magnitude-dependent activity for losses increased in CD vs HC in the right inferior parietal lobe post-cocaine and in the left superior frontal gyrus post-saline. Across outcomes (ie, successful and unsuccessful) activity in the right habenula decreased in CD in the abstinence/saline condition vs acute cocaine and HC. Cocaine-dependent variability in outcome- and loss-magnitude activity correlated negatively with ratings of cocaine craving and positively with how high subjects felt during the scanning session. Collectively, these data suggest dissociable effects of acute cocaine on non-drug reward processes, with cocaine consumption partially ameliorating dependence-related insensitivity to reinforcing outcomes/'liking', but having no discernible effect on dependence-related alterations in incentive salience/'wanting'. 
The relationship of drug-related affective sequelae to non-drug reward processing suggests that CD experience a general alteration of reward function and may be motivated to continue using cocaine for reasons beyond desired drug-related effects. This may have implications for individual differences in treatment efficacy for approaches that rely on reinforcement strategies (eg, contingency management) and for the long-term success of treatment.

  10. Intervention to improve social and family support for caregivers of dependent patients: ICIAS study protocol.

    PubMed

    Rosell-Murphy, Magdalena; Bonet-Simó, Josep M; Baena, Esther; Prieto, Gemma; Bellerino, Eva; Solé, Francesc; Rubio, Montserrat; Krier, Ilona; Torres, Pascuala; Mimoso, Sonia

    2014-03-25

Despite the existence of formal professional support services, informal support (mainly from family members) continues to be the main source of eldercare, especially for those who are dependent or disabled. Primary health care professionals are ideally placed to educate informal caregivers, provide psychological support, and help mobilize the social resources available to them. Controversy remains concerning the efficiency of multiple interventions that take a holistic approach to both patient and caregiver, and concerning optimum utilization of the available community resources. For this reason, our goal is to assess whether an intervention designed to improve social support for caregivers effectively decreases caregiver burden and improves their quality of life. Controlled, multicentre, community intervention trial, with patients and their caregivers randomized to the intervention or control group according to their assigned Primary Health Care Team (PHCT). Primary Health Care network (9 PHCTs). Primary informal caregivers of patients receiving home health care from participating PHCTs. The required sample size is 282 caregivers (141 from PHCTs randomized to the intervention group and 141 from PHCTs randomized to the control group). a) PHCT professionals: standardized training to implement the caregiver intervention. b) Caregivers: 1 individualized counselling session, 1 family session, and 4 educational group sessions conducted by participating PHCT professionals, in addition to usual home health care visits, periodic telephone follow-up contact and unlimited telephone support. 
Control: caregivers and dependent patients receive usual home health care, consisting of bimonthly scheduled visits, follow-up as needed, and additional attention upon request. Data analysis: Dependent variables: caregiver burden (short-form Zarit test), caregivers' social support (Medical Outcomes Study), and caregivers' reported quality of life (SF-12). Independent variables: a) Caregiver: sociodemographic data, Goldberg Scale, Apgar family questionnaire, Holmes and Rahe Psychosocial Stress Scale, number of chronic diseases. b) Dependent patient: sociodemographic data, level of dependency (Barthel Index), cognitive impairment (Pfeiffer test). If the intervention intended to improve social and family support is effective in reducing the burden on primary informal caregivers of dependent patients, this model can be readily applied throughout usual PHCT clinical practice. Clinical trials registrar: NCT02065427.

  11. Intervention to improve social and family support for caregivers of dependent patients: ICIAS study protocol

    PubMed Central

    2014-01-01

Background Despite the existence of formal professional support services, informal support (mainly from family members) continues to be the main source of eldercare, especially for those who are dependent or disabled. Primary health care professionals are ideally placed to educate informal caregivers, provide psychological support, and help mobilize the social resources available to them. Controversy remains concerning the efficiency of multiple interventions that take a holistic approach to both patient and caregiver, and concerning optimum utilization of the available community resources. For this reason, our goal is to assess whether an intervention designed to improve social support for caregivers effectively decreases caregiver burden and improves their quality of life. Methods/design Design: Controlled, multicentre, community intervention trial, with patients and their caregivers randomized to the intervention or control group according to their assigned Primary Health Care Team (PHCT). Study area: Primary Health Care network (9 PHCTs). Study participants: Primary informal caregivers of patients receiving home health care from participating PHCTs. Sample: Required sample size is 282 caregivers (141 from PHCTs randomized to the intervention group and 141 from PHCTs randomized to the control group). Intervention: a) PHCT professionals: standardized training to implement the caregiver intervention. b) Caregivers: 1 individualized counselling session, 1 family session, and 4 educational group sessions conducted by participating PHCT professionals, in addition to usual home health care visits, periodic telephone follow-up contact and unlimited telephone support. Control: Caregivers and dependent patients: usual home health care, consisting of bimonthly scheduled visits, follow-up as needed, and additional attention upon request. 
Data analysis Dependent variables: caregiver burden (short-form Zarit test), caregivers' social support (Medical Outcomes Study), and caregivers' reported quality of life (SF-12). Independent variables: a) Caregiver: sociodemographic data, Goldberg Scale, Apgar family questionnaire, Holmes and Rahe Psychosocial Stress Scale, number of chronic diseases. b) Dependent patient: sociodemographic data, level of dependency (Barthel Index), cognitive impairment (Pfeiffer test). Discussion If the intervention intended to improve social and family support is effective in reducing the burden on primary informal caregivers of dependent patients, this model can be readily applied throughout usual PHCT clinical practice. Trial registration Clinical trials registrar: NCT02065427 PMID:24666438

  12. Low-Dimensional Models of "Neuro-Glio-Vascular Unit" for Describing Neural Dynamics under Normal and Energy-Starved Conditions.

    PubMed

    Chhabria, Karishma; Chakravarthy, V Srinivasa

    2016-01-01

The motivation for developing simple minimal models of the neuro-glio-vascular (NGV) system arises from a recent modeling study elucidating the bidirectional information flow within the NGV system using 89 dynamic equations (1). While that was one of the first attempts at formulating a comprehensive model of the NGV system, it poses severe restrictions on scaling up to network levels. By contrast, low-dimensional models are convenient devices for simulating large networks and also provide an intuitive understanding of the complex interactions occurring within the NGV system. The key idea underlying the proposed models is to describe the glio-vascular system as a lumped system that takes neural firing rate as input and returns an "energy" variable (analogous to ATP) as output. To this end, we present two models: a biophysical neuro-energy model (Model 1, with five variables), comprising KATP channel activity governed by neuronal ATP dynamics, and a dynamic threshold model (Model 2, with three variables), depicting the dependence of the neural firing threshold on the ATP dynamics. Both models show different firing regimes, such as continuous spiking, phasic, and tonic bursting, depending on the ATP production coefficient, ɛp, and the external current. We then demonstrate that in a network comprising such energy-dependent neuron units, ɛp can modulate the local field potential (LFP) frequency and amplitude. Interestingly, low-frequency LFP dominates under low-ɛp conditions, which is thought to be reminiscent of the seizure-like activity observed in epilepsy. The proposed "neuron-energy" unit may be implemented in building models of NGV networks to simulate data obtained from multimodal neuroimaging systems, such as functional near-infrared spectroscopy coupled to electroencephalography and functional magnetic resonance imaging coupled to electroencephalography. 
Such models could also provide a theoretical basis for devising optimal neurorehabilitation strategies, such as non-invasive brain stimulation for stroke patients.
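As a rough illustration of the dynamic-threshold idea (Model 2), one can couple a leaky integrate-and-fire unit to an ATP-like energy variable whose production rate plays the role of ɛp. All equations and parameter values below are illustrative stand-ins, not the paper's model:

```python
# Sketch: energy-dependent neuron with a firing threshold tied to an
# ATP-like variable. Each spike consumes energy; energy recovers at a
# rate set by an eps_p-like production coefficient (illustrative values).
dt, T = 0.1, 2000.0
steps = int(T / dt)
v, atp = 0.0, 1.0
eps_p, cost = 0.005, 0.2     # energy production coefficient, per-spike cost
I_ext = 1.5                  # external driving current
spikes = []

for k in range(steps):
    theta = 1.0 + 2.0 * (1.0 - atp)    # threshold rises as energy falls
    v += dt * (-v + I_ext)             # leaky integration toward I_ext
    atp += dt * eps_p * (1.0 - atp)    # energy recovery toward 1
    if v >= theta:
        spikes.append(k * dt)
        v = 0.0                        # reset membrane variable
        atp = max(atp - cost, 0.0)     # each spike consumes energy
```

With these numbers the unit fires rapidly at first, then slows as each spike's energy cost pushes the threshold up, a qualitative analogue of the regime changes the abstract attributes to ɛp.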

  13. Biostatistics Series Module 10: Brief Overview of Multivariate Methods.

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2017-01-01

    Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, that make no such distinction but treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count type of data and can be used to analyze cross-tabulations where more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA), in which an additional independent variable of interest, the covariate, is brought into the analysis. It tries to examine whether a difference persists after "controlling" for the effect of the covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied to psychometrics, social sciences and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract from a larger number of metric variables, a smaller number of composite factors or components, which are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups. 
The calculation-intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with the wider availability and increasing sophistication of statistical software, and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.
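The module's central distinction, dependence techniques (one or more dependent variables predicted from independent variables) versus interdependence techniques (all variables treated equally), can be sketched on synthetic data. scikit-learn is an assumption here, as the module names no software:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                     # numerical predictors
y_num = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500)
y_bin = (X[:, 2] + rng.normal(size=500) > 0).astype(int)

# Dependence technique: multiple linear regression (numerical outcome)
lin = LinearRegression().fit(X, y_num)

# Dependence technique: logistic regression (dichotomous outcome)
logit = LogisticRegression().fit(X, y_bin)

# Interdependence technique: PCA extracts composite components
pca = PCA(n_components=2).fit(X)
```

The first two models answer "which predictors drive this outcome?", while PCA makes no outcome/predictor distinction and simply searches for composite components underlying all four variables, mirroring the article's taxonomy.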

  14. Different operational meanings of continuous variable Gaussian entanglement criteria and Bell inequalities

    NASA Astrophysics Data System (ADS)

    Buono, D.; Nocerino, G.; Solimeno, S.; Porzio, A.

    2014-07-01

Entanglement, one of the most intriguing aspects of quantum mechanics, manifests itself in different features of quantum states. For this reason, different criteria can be used for verifying entanglement. In this paper we review some of the entanglement criteria cast for continuous-variable states and link them to peculiar aspects of the original debate on the famous Einstein-Podolsky-Rosen (EPR) paradox. We also provide a useful expression for evaluating Bell-type non-locality on Gaussian states, and we present the experimental measurement of a particular realization of the Bell operator over continuous-variable entangled states produced by a sub-threshold type-II optical parametric oscillator (OPO).

  15. Practical limitation for continuous-variable quantum cryptography using coherent States.

    PubMed

    Namiki, Ryo; Hirano, Takuya

    2004-03-19

In this Letter we first investigate the security of a continuous-variable quantum cryptographic scheme with a postselection process against an individual beam-splitting attack. It is shown that the scheme can be secure in the presence of transmission loss owing to the postselection. Second, we provide a loss limit for continuous-variable quantum cryptography using coherent states, taking into account excess Gaussian noise on the quadrature distribution. Since the excess noise is reduced by the loss mechanism, a realistic intercept-resend attack, which produces a Gaussian mixture of coherent states, gives a loss limit in the presence of any excess Gaussian noise.

  16. Universal Quantum Computing with Measurement-Induced Continuous-Variable Gate Sequence in a Loop-Based Architecture.

    PubMed

    Takeda, Shuntaro; Furusawa, Akira

    2017-09-22

    We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.

  17. Universal Quantum Computing with Measurement-Induced Continuous-Variable Gate Sequence in a Loop-Based Architecture

    NASA Astrophysics Data System (ADS)

    Takeda, Shuntaro; Furusawa, Akira

    2017-09-01

    We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.

  18. Generation of 8.3 dB continuous variable quantum entanglement at a telecommunication wavelength of 1550 nm

    NASA Astrophysics Data System (ADS)

    Jinxia, Feng; Zhenju, Wan; Yuanji, Li; Kuanshou, Zhang

    2018-01-01

Continuous-variable quantum entanglement at a telecommunication wavelength of 1550 nm is experimentally generated using a single nondegenerate optical parametric amplifier based on a type-II periodically poled KTiOPO4 crystal. Triple resonance of the nondegenerate optical parametric amplifier is achieved by tuning the crystal temperature and tilting the orientation of the crystal in the optical cavity. Einstein-Podolsky-Rosen-entangled beams with quantum correlations of 8.3 dB for both the amplitude and phase quadratures are experimentally generated. This system can be used for continuous-variable fibre-based quantum communication.

  19. Broadband continuous-variable entanglement source using a chirped poling nonlinear crystal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, J. S.; Sun, L.; Yu, X. Q.

    2010-01-15

An aperiodically poled nonlinear crystal can be used as a broadband continuous-variable entanglement source and is strongly stable under perturbations. We study the conversion dynamics of sum-frequency generation and the quantum correlation of the two pump fields in a chirped-structure nonlinear crystal using the quantum stochastic method. The results show that there exists a frequency window for the pumps within which the two optical fields can perform efficient upconversion. The two pump fields are demonstrated to be entangled within this window, and the chirped-structure crystal can thus be used as a continuous-variable entanglement source with a broad response bandwidth.

  20. Variability of a "force signature" during windmill softball pitching and relationship between discrete force variables and pitch velocity.

    PubMed

    Nimphius, Sophia; McGuigan, Michael R; Suchomel, Timothy J; Newton, Robert U

    2016-06-01

This study assessed the reliability of discrete ground reaction force (GRF) variables over multiple pitching trials, investigated the relationships between discrete GRF variables and pitch velocity (PV), and assessed the variability of the "force signature", the continuous force-time curve, during the pitching motion of windmill softball pitchers. The intraclass correlation coefficient (ICC) for all discrete variables was high (0.86-0.99) while the coefficient of variation (CV) was low (1.4-5.2%). Two discrete variables were significantly correlated with PV: second vertical peak force (r(5)=0.81, p=0.03) and time between peak forces (r(5)=-0.79; p=0.03). High ICCs and low CVs support the reliability of discrete GRF and PV variables over multiple trials, and the significant correlations indicate that both the ability to produce force and the timing of that force production are related to PV. The mean of all pitchers' curve-averaged standard deviation of their continuous force-time curves showed low variability (CV=4.4%), indicating a repeatable and identifiable "force signature" pattern during this motion. As such, the continuous force-time curve, in addition to discrete GRF variables, should be examined in future research as a potential method to monitor or explain changes in pitching performance. Copyright © 2016 Elsevier B.V. All rights reserved.
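The two reported statistics, the within-pitcher coefficient of variation across repeated trials and Pearson correlations between a discrete force variable and pitch velocity, can be sketched as follows. The force and velocity numbers are hypothetical, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical repeated-trial data for 7 pitchers (the study's n = 7):
# rows are pitchers, columns are repeated pitching trials.
rng = np.random.default_rng(7)
signature = rng.normal(1000.0, 150.0, size=7)          # per-pitcher force level (N)
trials = signature[:, None] + rng.normal(0.0, 20.0, size=(7, 10))

# Within-pitcher coefficient of variation (%); low values support reliability
cv = 100.0 * trials.std(axis=1, ddof=1) / trials.mean(axis=1)

# Relationship between the force variable and pitch velocity (illustrative)
pv = 20.0 + 0.01 * signature + rng.normal(0.0, 0.5, size=7)
r, p = stats.pearsonr(trials.mean(axis=1), pv)
```

A small mean CV here plays the same role as the study's 1.4-5.2% values: trial-to-trial noise is small relative to each pitcher's stable "signature" level, so a discrete variable extracted from the curve is a dependable measurement.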

  1. Health-related quality of life in European women following myocardial infarction: a cross-sectional study.

    PubMed

    Lidell, Evy; Höfer, Stefan; Saner, Hugo; Perk, Joep; Hildingh, Cathrine; Oldridge, Neil

    2015-08-01

    Coronary heart disease is a major contributor to women's health problems. Self-perceived social support, well-being and health-related quality of life (HRQL) were documented in the cross-sectional HeartQoL survey of European women one and six months after a myocardial infarction. Women were recruited in 18 European countries and grouped into four geographical regions (Southern Europe, Northern Europe, Western Europe and Eastern Europe). Continuous socio-demographic variables and categorical variables were compared by age and region with ANOVA and χ², respectively; multiple regression models were used to identify predictors of social support, well-being and HRQL. Women living in the Eastern European region rated social support, well-being and HRQL significantly lower than women in the other regions, and older women had lower physical HRQL scores than younger women. Prediction of the dependent variables (social support, well-being and HRQL) by socio-demographic factors varied by total group, in the older age group, and by region; body mass index and managerial responsibility were the most consistent significant predictors. © The European Society of Cardiology 2014.

  2. Bayesian networks in neuroscience: a survey.

    PubMed

    Bielza, Concha; Larrañaga, Pedro

    2014-01-01

    Bayesian networks are a type of probabilistic graphical model that lies at the intersection of statistics and machine learning. They have been shown to be powerful tools for encoding dependence relationships among the variables of a domain under uncertainty. Thanks to their generality, Bayesian networks can accommodate continuous and discrete variables, as well as temporal processes. In this paper we review Bayesian networks and how they can be learned automatically from data by means of structure learning algorithms. Also, we examine how a user can take advantage of these networks for reasoning by exact or approximate inference algorithms that propagate the given evidence through the graphical structure. Despite their applicability in many fields, they have been little used in neuroscience, where work has focused on specific problems, like functional connectivity analysis from neuroimaging data. Here we survey key research in neuroscience where Bayesian networks have been used with different aims: to discover associations between variables, to perform probabilistic reasoning over the model, and to classify new observations with and without supervision. The networks are learned from data of any kind (morphological, electrophysiological, -omics, and neuroimaging), thereby broadening the scope (molecular, cellular, structural, functional, cognitive, and medical) of the brain aspects that can be studied.
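    The "propagate the given evidence through the graphical structure" step can be illustrated with the simplest exact inference method, enumeration, on a tiny hypothetical three-node network (the structure and all probabilities below are invented for illustration and are not from the survey).

```python
from itertools import product

# Hypothetical network: Activity -> Signal, Activity -> Response.
# P(A), P(S|A), P(R|A); all variables are True/False, numbers are made up.
p_a = {True: 0.3, False: 0.7}
p_s_given_a = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}
p_r_given_a = {True: {True: 0.7, False: 0.3}, False: {True: 0.1, False: 0.9}}

def joint(a, s, r):
    # The network factorizes the joint as P(A) * P(S|A) * P(R|A).
    return p_a[a] * p_s_given_a[a][s] * p_r_given_a[a][r]

def posterior_a_given(s=None, r=None):
    """P(A=True | evidence), by summing the joint over unobserved variables."""
    num = den = 0.0
    for a, s_, r_ in product([True, False], repeat=3):
        if s is not None and s_ != s:
            continue
        if r is not None and r_ != r:
            continue
        p = joint(a, s_, r_)
        den += p
        if a:
            num += p
    return num / den
```

    With no evidence the posterior equals the prior P(A=True) = 0.3; observing S=True raises it, and observing both S and R raises it further. Enumeration is exponential in the number of variables, which is why real applications use structured exact algorithms or approximate inference.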

  3. Indo-Pacific ENSO modes in a double-basin Zebiak-Cane model

    NASA Astrophysics Data System (ADS)

    Wieners, Claudia; de Ruijter, Will; Dijkstra, Henk

    2016-04-01

    We study Indo-Pacific interactions on ENSO timescales in a double-basin version of the Zebiak-Cane ENSO model, employing both time integrations and bifurcation analysis (continuation methods). The model contains two oceans (the Indian and Pacific Ocean) separated by a meridional wall. Interaction between the basins is possible via the atmosphere overlying both basins. We focus on the effect of the Indian Ocean (both its mean state and its variability) on ENSO stability. In addition, inspired by analysis of observational data (Wieners et al, Coherent tropical Indo-Pacific interannual climate variability, in review), we investigate the effect of state-dependent atmospheric noise. Preliminary results include the following: 1) the background state of the Indian Ocean stabilises the Pacific ENSO (i.e. the Hopf bifurcation is shifted to higher values of the SST-atmosphere coupling), 2) the West Pacific cooling (warming) co-occurring with El Niño (La Niña) is essential to simulate the phase relations between Pacific and Indian SST anomalies, 3) a non-linear atmosphere is needed to simulate the effect of Indian Ocean variability on the Pacific ENSO that is suggested by observations.

  5. Perceptions of control in adults with epilepsy.

    PubMed

    Gehlert, S

    1994-01-01

    That psychosocial problems are extant in epilepsy is evidenced by a suicide rate among epileptic persons five times that of the general population and an unemployment rate estimated to be more than twice that of the population as a whole. External perceptions of control secondary to repeated episodes of seizure activity that generalize to the social sphere have been implicated as causes of these problems. The hypothesis that individuals who continue to have seizures become increasingly external in their perceptions of control was tested by a survey mailed to a sample of individuals with epilepsy in a metropolitan area of the Midwest. Dependent variables were scores on instruments measuring locus of control and attributional style. The independent variable was a measure of seizure control based on present age, age at onset, and length of time since last seizure. Gender, socioeconomic status, and certain parenting characteristics were included as control variables, as they are also known to affect perceptions of control. Analysis by multiple regression techniques supported the study's hypothesis when perceptions of control were conceptualized as learned helplessness for bad, but not for good, events. The hypothesis was not confirmed when perceptions of control were conceptualized as either general or health locus of control.

  6. Impacts of Changing Climate on Agricultural Variability: Implications for Smallholder Farmers in India

    NASA Astrophysics Data System (ADS)

    Mondal, P.; Jain, M.; DeFries, R. S.; Galford, G. L.; Small, C.

    2013-12-01

    Agriculture is the largest employment sector in India, where food productivity, and thus food security, is highly dependent on seasonal rainfall and temperature. Projected increases in temperature, along with less frequent but more intense rainfall events, will have a negative impact on crop productivity in India in the coming decades. These changes, along with continued groundwater depletion, could have serious implications for Indian smallholder farmers, who are among the communities most vulnerable to climatic and economic changes. Hence baseline information on agricultural sensitivity to climate variability is important for strategies and policies that promote adaptation to climate variability. This study examines how cropping patterns in different agro-ecological zones in India respond to variations in precipitation and temperature. We specifically examine: a) which climate variables most influence crop cover for monsoon and winter crops? and b) how does the sensitivity of crop cover to climate variability vary in different agro-ecological regions with diverse socio-economic factors? We use remote sensing data (2000-01 - 2012-13) for cropping patterns (developed using MODIS satellite data), climate parameters (derived from MODIS and TRMM satellite data) and agricultural census data. We initially assessed the importance of these climate variables in two agro-ecoregions: a predominantly groundwater-irrigated, cash crop region in western India, and a region in central India primarily comprised of rain-fed or surface water-irrigated subsistence crops. Seasonal crop cover anomaly varied between -25% and 25% of the 13-year mean in these two regions. The predominantly climate-dependent region in central India showed high anomalies, up to 200% of the 13-year crop cover mean, especially during the winter season. Winter daytime mean temperature is overwhelmingly the most important climate variable for winter crops, irrespective of the varied biophysical and socio-economic conditions across the study regions. Despite access to groundwater irrigation, crop cover in the western Indian study region showed substantial fluctuations during monsoon, probably due to changing planting strategies. This region is less sensitive to precipitation than the central Indian study region, with its predominantly climate-dependent irrigation from surface water. In the western Indian study region, a greater number of rainy days, increased intensity of rainfall, and cooler daytime and nighttime temperatures lead to increased crop cover during the monsoon season, whereas in the central Indian study region monsoon timing and total rainfall amount are the most important drivers of crop cover. Our findings indicate that different regions respond differently to climate, since socio-economic factors, such as irrigation access, market influences, demography, and policies, play a critical role in agricultural production. In the wake of projected precipitation and temperature changes, better access to irrigation and heat-tolerant, high-yielding crop varieties will be crucial for future food production.

  7. Influence of Pacing by Periodic Auditory Stimuli on Movement Continuation: Comparison with Self-regulated Periodic Movement

    PubMed Central

    Ito, Masanori; Kado, Naoki; Suzuki, Toshiaki; Ando, Hiroshi

    2013-01-01

    [Purpose] The purpose of this study was to investigate the influence of external pacing with periodic auditory stimuli on the control of periodic movement. [Subjects and Methods] Eighteen healthy subjects performed self-paced, synchronization-continuation, and syncopation-continuation tapping. Inter-onset intervals were 1,000, 2,000 and 5,000 ms. The variability of inter-tap intervals was compared between the different pacing conditions and between self-paced tapping and each continuation phase. [Results] There were no significant differences in the mean and standard deviation of the inter-tap interval between pacing conditions. For the 1,000 and 5,000 ms tasks, there were significant differences in the mean inter-tap interval following auditory pacing compared with self-pacing. For the 2,000 ms syncopation condition and 5,000 ms task, there were significant differences from self-pacing in the standard deviation of the inter-tap interval following auditory pacing. [Conclusion] These results suggest that the accuracy of periodic movement with intervals of 1,000 and 5,000 ms can be improved by the use of auditory pacing. However, the consistency of periodic movement is mainly dependent on the inherent skill of the individual; thus, improvement of consistency based on pacing is unlikely. PMID:24259932

  8. The continuous prisoner's dilemma and the evolution of cooperation through reciprocal altruism with variable investment.

    PubMed

    Killingback, Timothy; Doebeli, Michael

    2002-10-01

    Understanding the evolutionary origin and persistence of cooperative behavior is a fundamental biological problem. The standard "prisoner's dilemma," which is the most widely adopted framework for studying the evolution of cooperation through reciprocal altruism between unrelated individuals, does not allow for varying degrees of cooperation. Here we study the continuous iterated prisoner's dilemma, in which cooperative investments can vary continuously in each round. This game has been previously considered for a class of reactive strategies in which current investments are based on the partner's previous investment. In the standard iterated prisoner's dilemma, such strategies are inferior to strategies that take into account both players' previous moves, as is exemplified by the evolutionary dominance of "Pavlov" over "tit for tat." Consequently, we extend the analysis of the continuous prisoner's dilemma to a class of strategies in which current investments depend on previous payoffs and, hence, on both players' previous investments. We show, both analytically and by simulation, that payoff-based strategies, which embody the intuitively appealing idea that individuals invest more in cooperative interactions when they profit from these interactions, provide a natural explanation for the gradual evolution of cooperation from an initially noncooperative state and for the maintenance of cooperation thereafter.
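    The payoff-based strategy class described above can be sketched in a few lines. The linear benefit/cost functions and the specific "raise investment after a profitable round" rule below are illustrative assumptions, not the paper's exact formulation, but they embody the same idea: current investment depends on the previous payoff and hence on both players' previous investments.

```python
# Continuous iterated prisoner's dilemma with a payoff-based update rule.
# Benefit/cost are taken to be linear (an assumption for illustration).
B, C = 5.0, 1.0  # benefit and cost coefficients, invented

def payoff(own, partner):
    # One round: benefit from the partner's investment minus one's own cost.
    return B * partner - C * own

def raise_on_profit(inv, pay, step=0.05, cap=1.0):
    # Invest more after a profitable round, less otherwise (payoff-based rule).
    return min(cap, inv + step) if pay > 0 else max(0.0, inv - step)

def play(rounds, update, x0=0.1, y0=0.1):
    """Iterate the game; each player's next investment depends on the payoff
    just received, i.e. on both players' previous investments."""
    x, y = x0, y0
    for _ in range(rounds):
        px, py = payoff(x, y), payoff(y, x)
        x, y = update(x, px), update(y, py)
    return x, y
```

    Starting from small initial investments, both players ratchet up to the cap, a toy version of the gradual build-up of cooperation from a nearly noncooperative state.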

  9. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
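    The key structural point in the abstract, that once knots and data parameterization are fixed the remaining spline coefficients enter linearly and the fit reduces to a least-squares problem solvable by SVD, can be shown with a minimal sketch. The truncated power basis and the toy data below are illustrative choices, not the paper's setup.

```python
import numpy as np

def truncated_power_basis(t, knots, degree=3):
    """Design matrix for a spline in the truncated power basis.

    With the knot vector and parameterization fixed (the 'indirect approach'),
    the coefficients enter linearly, so fitting is linear least squares.
    """
    t = np.asarray(t, dtype=float)
    cols = [t ** p for p in range(degree + 1)]
    cols += [np.clip(t - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

# Invented 1-D data: noisy samples of a smooth function.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)

A = truncated_power_basis(t, knots=[0.25, 0.5, 0.75])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # lstsq solves via an SVD
fit = A @ coef
rms = np.sqrt(np.mean((fit - y) ** 2))
```

    In the paper's scheme the metaheuristic (the firefly algorithm) searches only over the parameterization, and each candidate is scored by solving exactly this kind of linear subproblem.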

  11. An opinion-driven behavioral dynamics model for addictive behaviors

    DOE PAGES

    Moore, Thomas W.; Finley, Patrick D.; Apelberg, Benjamin J.; ...

    2015-04-08

    We present a model of behavioral dynamics that combines a social network-based opinion dynamics model with behavioral mapping. The behavioral component is discrete and history-dependent, to represent situations in which an individual's behavior is initially driven by opinion and later constrained by physiological or psychological conditions that serve to maintain the behavior. Additionally, individuals are modeled as nodes in a social network connected by directed edges. Parameter sweeps illustrate model behavior and the effects of individual parameters and parameter interactions on model results. Mapping a continuous opinion variable into a discrete behavioral space induces clustering on directed networks. Clusters provide targets of opportunity for influencing the network state; however, the smaller the network, the greater the stochasticity and potential variability in outcomes. This has implications both for behaviors that are influenced by close relationships versus those influenced by societal norms and for the effectiveness of strategies for influencing those behaviors.
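    A minimal sketch of the two ingredients, continuous opinions updated over a directed network and a history-dependent discrete behavior, is below. The update weights, thresholds, and hysteresis rule are all invented for illustration; the paper's actual model may differ in detail.

```python
import numpy as np

def step(opinions, adj, broadcast, w_net=0.4, w_bc=0.2):
    """One opinion update: each node blends its own opinion, the mean opinion
    of its in-neighbors (directed edges j -> i), and a global broadcast.
    Weights are illustrative."""
    n = len(opinions)
    new = np.empty(n)
    for i in range(n):
        nbrs = np.nonzero(adj[:, i])[0]
        social = opinions[nbrs].mean() if nbrs.size else opinions[i]
        new[i] = (1 - w_net - w_bc) * opinions[i] + w_net * social + w_bc * broadcast
    return new

def behavior(opinions, prev_behavior, on=0.6, off=0.3):
    """History-dependent mapping: adopting the behavior requires opinion > on,
    but once adopted it persists unless opinion drops below off."""
    return np.where(prev_behavior, opinions > off, opinions > on)
```

    The hysteresis in `behavior` is what makes the discrete state history-dependent: two nodes with identical current opinions can be in different behavioral states depending on their past.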

  12. Effects of ecological interactions and environmental conditions on community dynamics in an estuarine ecosystem

    NASA Astrophysics Data System (ADS)

    Liu, H.; Minello, T.; Sutton, G.

    2016-02-01

    Coastal marine ecosystems are both productive and vulnerable to human and natural stressors. Examining the relative importance of fishing, environmental variability, and habitat alteration on ecosystem dynamics is challenging. Intensive harvest and habitat loss have resulted in widespread concerns related to declines in fisheries production, but causal mechanisms are rarely clear. In this study, we modeled trophic dynamics in Galveston Bay, Texas, using fishery-independent catch data for blue crab, shrimp, red drum, Atlantic croaker and spotted seatrout along with habitat information collected by the Texas Parks and Wildlife Department during 1984 - 2014. We developed a multispecies state-space model to examine ecological interactions and partition the relative effects of trophic interactions and environmental conditions on the community dynamics. Preliminary results showed the importance of salinity, density-dependence, and trophic interactions. We are continuing to explore these results from a perspective of fish community compensatory responses to exploitation, reflecting both direct and indirect effects of harvesting under the influence of climate variability.

  13. Variability in the management of lithium poisoning.

    PubMed

    Roberts, Darren M; Gosselin, Sophie

    2014-01-01

    Three patterns of lithium poisoning are recognized: acute, acute-on-chronic, and chronic. Intravenous fluids with or without an extracorporeal treatment are the mainstay of treatment; their respective roles may differ depending on the mode of poisoning being treated. Recommendations for treatment selection are available but these are based on a small number of observational studies and their uptake by clinicians is not known. Clinician decision-making in the treatment of four cases of lithium poisoning was assessed at a recent clinical toxicology meeting using an audience response system. Variability in treatment decisions was evident in addition to discordance with published recommendations. Participants did not consistently indicate that hemodialysis was the first-line treatment, instead opting for a conservative approach, and continuous modalities were viewed favorably; this is in contrast to recommendations in some references. The development of multidisciplinary consensus guidelines may improve the management of patients with lithium poisoning but prospective randomized controlled trials are required to more clearly define the role of extracorporeal treatments. © 2014 Wiley Periodicals, Inc.

  14. Optimizing Multi-Product Multi-Constraint Inventory Control Systems with Stochastic Replenishments

    NASA Astrophysics Data System (ADS)

    Allah Taleizadeh, Ata; Aryanezhad, Mir-Bahador; Niaki, Seyed Taghi Akhavan

    Multi-periodic inventory control problems are mainly studied employing two assumptions. The first is the continuous review, where depending on the inventory level orders can happen at any time and the other is the periodic review, where orders can only happen at the beginning of each period. In this study, we relax these assumptions and assume that the periodic replenishments are stochastic in nature. Furthermore, we assume that the periods between two replenishments are independent and identically random variables. For the problem at hand, the decision variables are of integer-type and there are two kinds of space and service level constraints for each product. We develop a model of the problem in which a combination of back-order and lost-sales are considered for the shortages. Then, we show that the model is of an integer-nonlinear-programming type and in order to solve it, a search algorithm can be utilized. We employ a simulated annealing approach and provide a numerical example to demonstrate the applicability of the proposed methodology.
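    The simulated annealing search over integer decision variables mentioned above can be sketched generically. The neighborhood move, cooling schedule, and toy two-product cost with a space-constraint penalty are all invented for illustration; they stand in for the paper's actual model.

```python
import math
import random

def simulated_annealing(cost, x0, steps=5000, t0=10.0, cooling=0.999, seed=1):
    """Generic SA for integer decision vectors: perturb one coordinate by +/-1,
    accept worse moves with probability exp(-delta / T). Constraints are
    assumed to be folded into cost() as penalties."""
    rng = random.Random(seed)
    x = list(x0)
    best, best_c = list(x), cost(x)
    cur_c, t = best_c, t0
    for _ in range(steps):
        cand = list(x)
        i = rng.randrange(len(cand))
        cand[i] = max(0, cand[i] + rng.choice((-1, 1)))  # keep orders nonnegative
        delta = cost(cand) - cur_c
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, cur_c = cand, cur_c + delta
            if cur_c < best_c:
                best, best_c = list(x), cur_c
        t *= cooling
    return best, best_c

def toy_cost(x):
    # Invented two-product example: quadratic holding cost plus a penalty
    # when the total space used exceeds a capacity of 10 units.
    holding = (x[0] - 4) ** 2 + (x[1] - 6) ** 2
    space_penalty = 100 * max(0, x[0] + x[1] - 10)
    return holding + space_penalty
```

    Penalizing constraint violations rather than forbidding them keeps the neighborhood simple, which is the usual choice when SA is applied to constrained integer problems like the one in the paper.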

  15. Collective decision dynamics in the presence of external drivers

    NASA Astrophysics Data System (ADS)

    Bassett, Danielle S.; Alderson, David L.; Carlson, Jean M.

    2012-09-01

    We develop a sequence of models describing information transmission and decision dynamics for a network of individual agents subject to multiple sources of influence. Our general framework is set in the context of an impending natural disaster, where individuals, represented by nodes on the network, must decide whether or not to evacuate. Sources of influence include a one-to-many externally driven global broadcast as well as pairwise interactions, across links in the network, in which agents transmit either continuous opinions or binary actions. We consider both uniform and variable threshold rules on the individual opinion as baseline models for decision making. Our results indicate that (1) social networks lead to clustering and cohesive action among individuals, (2) binary information introduces high temporal variability and stagnation, and (3) information transmission over the network can either facilitate or hinder action adoption, depending on the influence of the global broadcast relative to the social network. Our framework highlights the essential role of local interactions between agents in predicting collective behavior of the population as a whole.

  16. Need for cognition moderates paranormal beliefs and magical ideation in inconsistent-handers.

    PubMed

    Prichard, Eric C; Christman, Stephen D

    2016-01-01

    A growing literature suggests that degree of handedness predicts gullibility and magical ideation. Inconsistent-handers (people who use their non-dominant hand for at least one common manual activity) report more magical ideation and are more gullible. The current study tested whether this effect is moderated by need for cognition. One hundred eighteen university students completed questionnaires assessing handedness, self-reported paranormal beliefs, and self-reported need for cognition. Handedness (inconsistent vs. consistent right) and need for cognition (high vs. low) were treated as categorical predictors. Both paranormal beliefs and magical ideation served as dependent variables in separate analyses. Neither set of tests yielded main effects for handedness or need for cognition. However, there was a significant handedness by need for cognition interaction. Post-hoc comparisons revealed that low, but not high, need for cognition inconsistent-handers reported relatively elevated levels of paranormal belief and magical ideation. A secondary set of tests treating the predictor variables as continuous instead of categorical obtained the same overall pattern.

  17. Effect of hydration and continuous urinary drainage on urine production in children.

    PubMed

    Galetseli, Marianthi; Dimitriou, Panagiotis; Tsapra, Helen; Moustaki, Maria; Nicolaidou, Polyxeni; Fretzayas, Andrew

    2008-01-01

    Although urine production depends on numerous physiological variables there are no quantitative data regarding the effect of bladder decompression, by means of continuous catheter drainage, on urine production. The aim of this study was to investigate this effect. The study was carried out in two stages, each consisting of two phases. The effect of two distinct orally administered amounts of water was recorded in relation to continuous bladder decompression on the changes with time of urine volume and the urine production rate. In the first stage, 35 children were randomly divided into two groups and two different hydration schemes (290 and 580 ml of water/m2) were used. After the second urination of Phase 1, continuous drainage was employed in the phase that followed (Phase 2). In the second stage, a group of 10 children participated and Phase 2 was carried out 1 day after the completion of Phase 1. It was shown that the amount of urine produced increased in accordance with the degree of hydration and doubled or tripled with continual urine drainage by catheter for the same degree of hydration and within the same time interval. This was also true for Stage 2, in which Phase 2 was performed 24 h after Phase 1, indicating that diuresis during Phase 2 (as a result of Phase 1) was negligible. It was shown that during continuous drainage of urine with bladder catheterization there is an increased need for fluids, which should be administered early.

  18. The influence of peer behavior as a function of social and cultural closeness: A meta-analysis of normative influence on adolescent smoking initiation and continuation.

    PubMed

    Liu, Jiaying; Zhao, Siman; Chen, Xi; Falk, Emily; Albarracín, Dolores

    2017-10-01

    Although the influence of peers on adolescent smoking should vary depending on social dynamics, there is a lack of understanding of which elements are most crucial and how this dynamic unfolds for smoking initiation and continuation across areas of the world. The present meta-analysis included 75 studies yielding 237 effect sizes that examined associations between peers' smoking and adolescents' smoking initiation and continuation with longitudinal designs across 16 countries. Mixed-effects models with robust variance estimates were used to calculate weighted-mean odds ratios. This work showed that having peers who smoke is associated with about twice the odds of adolescents beginning (mean OR = 1.96, 95% confidence interval [CI] [1.76, 2.19]) and continuing to smoke (mean OR = 1.78, 95% CI [1.55, 2.05]). Moderator analyses revealed that (a) smoking initiation was more positively correlated with peers' smoking when the interpersonal closeness between adolescents and their peers was higher (vs. lower); and (b) both smoking initiation and continuation were more positively correlated with peers' smoking when samples were from collectivistic (vs. individualistic) cultures. Thus, both individual- and population-level dynamics play a critical role in the strength of peer influence. Accounting for cultural variables may be especially important given the effects on both initiation and continuation. Implications for theory, research, and antismoking intervention strategies are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
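    The arithmetic behind a weighted-mean odds ratio is pooling on the log-OR scale. The sketch below uses simple fixed-effect inverse-variance weights rather than the mixed-effects robust-variance models the meta-analysis actually used, and all study-level numbers are invented.

```python
import math

def pooled_odds_ratio(odds_ratios, variances):
    """Fixed-effect inverse-variance pooling on the log-OR scale.

    A simplification of the paper's mixed-effects approach, shown only to
    illustrate the weighted-mean OR and its 95% CI.
    """
    logs = [math.log(o) for o in odds_ratios]
    weights = [1.0 / v for v in variances]  # inverse of each log-OR variance
    mean_log = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    lo, hi = mean_log - 1.96 * se, mean_log + 1.96 * se
    return math.exp(mean_log), (math.exp(lo), math.exp(hi))

# Hypothetical study-level results (ORs and log-OR variances are invented).
or_mean, (ci_lo, ci_hi) = pooled_odds_ratio([1.8, 2.2, 1.9], [0.04, 0.09, 0.02])
```

    Pooling on the log scale keeps the estimate symmetric around "no effect" (OR = 1); exponentiating at the end returns the result to the odds-ratio scale, as in the CIs quoted above.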

  19. Refining the tobacco dependence phenotype using the Wisconsin Inventory of Smoking Dependence Motives (WISDM)

    PubMed Central

    Piper, Megan E.; Bolt, Daniel M.; Kim, Su-Young; Japuntich, Sandra J.; Smith, Stevens S.; Niederdeppe, Jeff; Cannon, Dale S.; Baker, Timothy B.

    2008-01-01

    The construct of tobacco dependence is important from both scientific and public health perspectives, but it is poorly understood. The current research integrates person-centered analyses (e.g., latent profile analysis) and variable-centered analyses (e.g., exploratory factor analysis) to understand better the latent structure of dependence and to guide distillation of the phenotype. Using data from four samples of smokers (including treatment and non-treatment samples), latent profiles were derived using the Wisconsin Inventory of Smoking Dependence Motives (WISDM) subscale scores. Across all four samples, results revealed a unique latent profile that had relative elevations on four dependence motive subscales (Automaticity, Craving, Loss of Control, and Tolerance). Variable-centered analyses supported the uniqueness of these four subscales both as measures of a common factor distinct from that underlying the other nine subscales, and as the strongest predictors of relapse, withdrawal and other dependence criteria. Conversely, the remaining nine motives carried little unique predictive validity regarding dependence. Applications of a factor mixture model further support the presence of a unique class of smokers in relation to a common factor underlying the four subscales. The results illustrate how person-centered analyses may be useful as a supplement to variable-centered analyses for uncovering variables that are necessary and/or sufficient predictors of disorder criteria, as they may uncover small segments of a population in which the variables are uniquely distributed. The results also suggest that severe dependence is associated with a pattern of smoking that is heavy, pervasive, automatic and relatively unresponsive to instrumental contingencies. PMID:19025223

  20. Frictional behavior of large displacement experimental faults

    USGS Publications Warehouse

    Beeler, N.M.; Tullis, T.E.; Blanpied, M.L.; Weeks, J.D.

    1996-01-01

    The coefficient of friction and the velocity dependence of friction of initially bare surfaces and of 1-mm-thick simulated fault gouges were measured to displacements of 400 mm at 25°C and 25 MPa normal stress. Steady-state negative friction velocity dependence and a steady-state fault zone microstructure are achieved after ≈18 mm of displacement, and an approximately constant strength is reached after a few tens of millimeters of sliding on initially bare surfaces. Simulated fault gouges show a large but systematic variation of friction, velocity dependence of friction, dilatancy, and degree of localization with displacement. At short displacement (<10 mm), simulated gouge is strong and velocity strengthening, and changes in sliding velocity are accompanied by relatively large changes in dilatancy rate. With continued displacement, simulated gouges become progressively weaker and less velocity strengthening, the velocity dependence of dilatancy rate decreases, and deformation becomes localized into a narrow basal shear which, at its most localized, is observed to be velocity weakening. With subsequent displacement, the fault restrengthens, returns to velocity strengthening or velocity neutral, the velocity dependence of dilatancy rate becomes larger, and deformation becomes distributed. Correlation of friction, velocity dependence of friction and of dilatancy rate, and degree of localization at all displacements in simulated gouge suggests that all these quantities are interrelated. The observations do not distinguish the independent variables but suggest that the degree of localization is controlled by the fault strength, not by the friction velocity dependence. The friction velocity dependence and the velocity dependence of dilatancy rate can be used as qualitative measures of the degree of localization in simulated gouge, in agreement with previous studies. Theory equating the friction velocity dependence of simulated gouge to the sum of the friction velocity dependence of bare surfaces and the velocity dependence of dilatancy rate of simulated gouge fails to quantitatively account for the experimental observations.

  1. Fractionation of lithium isotopes in magmatic systems as a natural consequence of cooling

    NASA Astrophysics Data System (ADS)

    Gallagher, Kerry; Elliott, Tim

    2009-02-01

    High-temperature, diffusive fractionation has been invoked to account for striking Li isotopic variability recently observed within individual phenocrysts and xenolith minerals. It has been argued that chemical potential gradients required to drive such diffusion arise from changes in Li partitioning between coexisting phases during cooling. If so, Li isotopic zoning should be a common occurrence but the role of temperature-dependent partition coefficients in generating Li isotopic variability remains to be tested in a quantitative manner. Here we consider a basic scenario of a phenocryst in a cooling lava, using simple parameterisations of the temperature dependence of Li partitioning and diffusivity in clinopyroxene. Our model initially produces an asymmetric isotope profile across the crystal with a δ7Li minimum that remains close to the edge of a crystal. Such a distinctive shape mimics Li isotopic profiles documented in some olivine and clinopyroxene phenocrysts, which have isotopically normal cores but anomalously light rims. The temperature dependence of both the diffusivity and the partition coefficient of Li are key factors in generating this form of diffusion profile. Continued diffusion leads to an inversion in the sense of isotopic change between core and rim and results in the whole phenocryst attaining markedly light isotopic values. Our calculations show that significant Li isotopic zoning can occur as a natural consequence of cooling magmatic systems. Crystals that have experienced more complex thermal histories (e.g. re-entrained cumulates versus true phenocrysts) will therefore exhibit contrasting isotopic profiles and, as such, these data may be useful for tracing sub-volcanic processes.

  2. Skill of real-time operational forecasts with the APCC multi-model ensemble prediction system during the period 2008-2015

    NASA Astrophysics Data System (ADS)

    Min, Young-Mi; Kryjov, Vladimir N.; Oh, Sang Myeong; Lee, Hyun-Ju

    2017-12-01

    This paper assesses the real-time 1-month lead forecasts of 3-month (seasonal) mean temperature and precipitation on a monthly basis issued by the Asia-Pacific Economic Cooperation Climate Center (APCC) for 2008-2015 (8 years, 96 forecasts). It shows the current level of the APCC operational multi-model prediction system performance. The skill of the APCC forecasts strongly depends on season and region: it is higher for the tropics and boreal winter than for the extratropics and boreal summer, owing to direct effects of, and remote teleconnections from, boundary forcings. There is a negative relationship between the forecast skill and its interseasonal variability for both variables, and the forecast skill for precipitation is more seasonally and regionally dependent than that for temperature. The APCC operational probabilistic forecasts during this period show a cold bias (underforecasting of above-normal temperature and overforecasting of below-normal temperature), underestimating a long-term warming trend. A wet bias is evident for precipitation, particularly in the extratropical regions. The skill of both temperature and precipitation forecasts strongly depends upon the ENSO strength. In particular, the highest forecast skill, noted in the 2015/2016 boreal winter, is associated with the strong forcing of an extreme El Nino event, whereas relatively low skill is associated with the transition and/or continuous ENSO-neutral phases of 2012-2014. As a result, the real-time forecast skill for the boreal winter season is higher than that of the hindcast. However, on average, the level of forecast skill during the period 2008-2015 is similar to that of the hindcast.

  3. Associated and mediating variables related to quality of life among service users with mental disorders.

    PubMed

    Fleury, Marie-Josée; Grenier, Guy; Bamvita, Jean-Marie

    2018-02-01

    This study aimed to identify variables associated with quality of life (QoL) and mediating variables among 338 service users with mental disorders in Quebec (Canada). Data were collected using nine standardized questionnaires and participant medical records. Quality of life was assessed with the Satisfaction with Life Domains Scale. Independent variables were organized into a six-block conceptual framework. Using structural equation modeling, associated and mediating variables related to QoL were identified. Lower seriousness of needs was the strongest variable associated with QoL, followed by recovery, greater service continuity, gender (male), adequacy of help received, not living alone, absence of substance use or mood disorders, and higher functional status, in that order. Recovery was the single mediating variable linking lower seriousness of needs, higher service continuity, and reduced alcohol use with QoL. Findings suggest that greater service continuity creates favorable conditions for recovery, reducing seriousness of needs and increasing QoL among service users. Lack of recovery-oriented services may affect QoL among alcohol users, as substance use disorders were associated directly and negatively with QoL. Decision makers and mental health professionals should promote service continuity, and closer collaboration between primary care and specialized services, while supporting recovery-oriented services that encourage service user involvement in their treatment and follow-up. Community-based organizations should aim to reduce the seriousness of needs particularly for female service users and those living alone.

  4. Kicking the Camel: Adolescent Smoking Behaviors after Two Years.

    ERIC Educational Resources Information Center

    Shillington, Audrey M.; Clapp, John D.

    2000-01-01

    Public Health Model was used to examine relationships between smoking severity (never smokers, former smokers, continued smokers) and host and environmental variables. Results indicate former smokers are more like never smokers on most risk and protective variables. Final analyses indicated continued smokers are more likely to be Non-Black and…

  5. Relationships between Motivations for Community Service Participation and Desire to Continue Service Following College

    ERIC Educational Resources Information Center

    Soria, Krista M.; Thomas-Card, Traci

    2014-01-01

    In this study, we explored whether college students' motivations for participating in community service were associated with their perceptions that service enhanced their desire to continue participating in community-focused activities after graduation, after statistically controlling for demographic variables and other variables of interest.…

  6. GY SAMPLING THEORY AND GEOSTATISTICS: ALTERNATE MODELS OF VARIABILITY IN CONTINUOUS MEDIA

    EPA Science Inventory



    In the sampling theory developed by Pierre Gy, sample variability is modeled as the sum of a set of seven discrete error components. The variogram used in geostatistics provides an alternate model in which several of Gy's error components are combined in a continuous mode...

  7. Multi-stage Continuous Culture Fermentation of Glucose-Xylose Mixtures to Fuel Ethanol using Genetically Engineered Saccharomyces cerevisiae 424A

    EPA Science Inventory

    Multi-stage continuous (chemostat) culture fermentation (MCCF) with variable fermentor volumes was carried out to study utilizing glucose and xylose for ethanol production by means of mixed sugar fermentation (MSF). Variable fermentor volumes were used to enable enhanced sugar u...

  8. Measurement of Tanning Dependence

    PubMed Central

    Heckman, C.J.; Darlow, S.; Kloss, J.D.; Cohen-Filipic, J.; Manne, S.L.; Munshi, T.; Yaroch, A.L.; Perlis, C.

    2014-01-01

    Background Indoor tanning has been found to be addictive. However, the most commonly-used tanning dependence measures have not been well-validated. Objective The study’s purpose was to explore the psychometric characteristics of and compare the mCAGE (modified Cut-down, Annoyed, Guilty, Eye-opener Scale), mDSM-IV-TR (modified Diagnostic and Statistical Manual of Mental Disorders – Fourth Edition - Text Revised), and TAPS (Tanning Pathology Scale) measures of tanning dependence and provide recommendations for research and practice. Methods This study was a cross-sectional online survey with 18–25 year old female university students. The main outcome variable was tanning dependence measured by the mCAGE, mDSM-IV-TR, and TAPS. Results Internal consistency of the TAPS subscales was good but was poor for the mCAGE and mDSM-IV-TR, except when their items were combined. Agreement between the mCAGE and mDSM-IV-TR was fair. Factor analysis of the TAPS confirmed the current four-factor structure. All of the tanning dependence scales were significantly correlated with one another. Likewise, most of the tanning dependence scales were significantly correlated with other measures of tanning attitudes and behaviors. However, the tolerance to tanning TAPS subscale was not significantly correlated with any measure of tanning attitudes or behaviors and had the lowest subscale internal reliability and eigenvalues. Conclusion Based on the data and existing literature, we make recommendations for the continued use of tanning dependence measures. Intervention may be needed for the approximately 5% of college women who tend to be classified as tanning dependent across measures. Monitoring of individuals reporting tanning dependence symptoms is warranted. PMID:23980870

  9. Neonates and Infants Discharged Home Dependent on Medical Technology: Characteristics and Outcomes.

    PubMed

    Toly, Valerie Boebel; Musil, Carol M; Bieda, Amy; Barnett, Kimberly; Dowling, Donna A; Sattar, Abdus

    2016-10-01

    Preterm neonates and neonates with complex conditions admitted to a neonatal intensive care unit (NICU) may require medical technology (eg, supplemental oxygen, feeding tubes) for their continued survival at hospital discharge. Medical technology introduces another layer of complexity for parents, including specialized education about neonatal assessment and operation of technology. The transition home presents a challenge for parents and has been linked with greater healthcare utilization. To determine incidence, characteristics, and healthcare utilization outcomes (emergency room visits, rehospitalizations) of technology-dependent neonates and infants following initial discharge from the hospital. This descriptive, correlational study used retrospective medical record review to examine technology-dependent neonates (N = 71) upon discharge home. Study variables included demographic characteristics, hospital length of stay, and type of medical technology used. Analysis of neonates (n = 22) with 1-year postdischarge data was conducted to identify relationships with healthcare utilization. Descriptive and regression analyses were performed. Approximately 40% of the technology-dependent neonates were between 23 and 26 weeks' gestation, with birth weight of less than 1000 g. Technologies used most frequently were supplemental oxygen (66%) and feeding tubes (46.5%). The mean total hospital length of stay for technology-dependent versus nontechnology-dependent neonates was 108.6 and 25.7 days, respectively. Technology-dependent neonates who were female, with a gastrostomy tube, or with longer initial hospital length of stay were at greater risk for rehospitalization. Assessment and support of families, particularly mothers of technology-dependent neonates following initial hospital discharge, are vital. Longitudinal studies to determine factors affecting long-term outcomes of technology-dependent infants are needed.

  10. Modified Regression Correlation Coefficient for Poisson Regression Model

    NASA Astrophysics Data System (ADS)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study concerns indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power, defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model; the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient in terms of bias and root mean square error (RMSE).
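    The quantity the abstract defines, the correlation between Y and its conditional mean E(Y|X), can be sketched on simulated data. This is a hedged illustration only: the paper's modified coefficient is not reproduced, and E(Y|X) is taken as known rather than estimated from a fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Poisson regression setting: Y ~ Poisson(mu), mu = exp(b0 + b1*x).
# The coefficients 0.5 and 0.8 are arbitrary illustration values.
n = 5000
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)      # E(Y|X), taken as known here for illustration
y = rng.poisson(mu)

def regression_correlation(y, mu):
    """Pearson correlation between Y and its conditional expectation E(Y|X)."""
    return float(np.corrcoef(y, mu)[0, 1])

r = regression_correlation(y, mu)
print(round(r, 2))
```

    In practice E(Y|X) would come from the fitted Poisson model (e.g. the predicted means of a GLM), and the modified coefficient of the paper adjusts this basic quantity.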

  11. Cognitive functioning and insight in schizophrenia and in schizoaffective disorder.

    PubMed

    Birindelli, Nadia; Montemagni, Cristiana; Crivelli, Barbara; Bava, Irene; Mancini, Irene; Rocca, Paola

    2014-01-01

    The aim of this study was to investigate cognitive functioning and insight of illness in two groups of patients during their stable phases, one with schizophrenia and one with schizoaffective disorder. We recruited 104 consecutive outpatients, 64 with schizophrenia, 40 with schizoaffective disorder, in the period between July 2010 and July 2011. They all fulfilled formal Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) diagnostic criteria for schizophrenia and schizoaffective disorder. Psychiatric assessment included the Clinical Global Impression Scale-Severity (CGI-S), the Positive and Negative Syndrome Scale (PANSS), the Calgary Depression Scale for Schizophrenia (CDSS) and the Global Assessment of Functioning (GAF). Insight of illness was evaluated using the SUMD. Neuropsychological assessment included the Wisconsin Card Sorting Test (WCST), the California Verbal Learning Test (CVLT), the Stroop Test and the Trail Making Test (TMT). Differences between the groups were tested using the Chi-square test for categorical variables and one-way analysis of variance (ANOVA) for continuous variables. All variables significantly different between the two groups of subjects were subsequently analysed using a logistic regression with a backward stepwise procedure, with diagnosis (schizophrenia/schizoaffective disorder) as the dependent variable. After backward selection of variables, four variables predicted a schizoaffective disorder diagnosis: marital status, a higher number of admissions, better attentional functions and awareness of specific signs or symptoms of disease. The prediction model accounted for 55% of the variance of schizoaffective disorder diagnosis. With replication, our findings would allow higher diagnostic accuracy and have an impact on clinical decision making, in light of an amelioration of vocational functioning.

  12. Joint modelling rationale for chained equations

    PubMed Central

    2014-01-01

    Background Chained equations imputation is widely used in medical research. It uses a set of conditional models, so is more flexible than joint modelling imputation for the imputation of different types of variables (e.g. binary, ordinal or unordered categorical). However, chained equations imputation does not correspond to drawing from a joint distribution when the conditional models are incompatible. Concurrently with our work, other authors have shown the equivalence of the two imputation methods in finite samples. Methods Taking a different approach, we prove, in finite samples, sufficient conditions for chained equations and joint modelling to yield imputations from the same predictive distribution. Further, we apply this proof in four specific cases and conduct a simulation study which explores the consequences when the conditional models are compatible but the conditions otherwise are not satisfied. Results We provide an additional “non-informative margins” condition which, together with compatibility, is sufficient. We show that the non-informative margins condition is not satisfied, despite compatible conditional models, in a situation as simple as two continuous variables and one binary variable. Our simulation study demonstrates that as a consequence of this violation order effects can occur; that is, systematic differences depending upon the ordering of the variables in the chained equations algorithm. However, the order effects appear to be small, especially when associations between variables are weak. Conclusions Since chained equations is typically used in medical research for datasets with different types of variables, researchers must be aware that order effects are likely to be ubiquitous, but our results suggest they may be small enough to be negligible. PMID:24559129
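    The chained-equations idea, cycling through one conditional model per incomplete variable, can be sketched minimally for two continuous variables. This is a deterministic toy loop under simulated data: proper MICE additionally draws imputations with random noise from the residual distribution, which is what makes the order effects discussed above a question of distributions rather than point fits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two correlated continuous variables with ~20% of values missing at random.
n = 400
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=0.8, size=n)
miss_x = rng.random(n) < 0.2
miss_y = rng.random(n) < 0.2

def impute_step(current, other, missing):
    # Fit current ~ other on the cases where `current` is observed,
    # then overwrite the missing entries with fitted values.
    ok = ~missing
    slope, intercept = np.polyfit(other[ok], current[ok], 1)
    out = current.copy()
    out[missing] = intercept + slope * other[missing]
    return out

# Start from mean fill, then iterate the two conditional models in turn.
x_imp = np.where(miss_x, x[~miss_x].mean(), x)
y_imp = np.where(miss_y, y[~miss_y].mean(), y)
for _ in range(10):
    x_imp = impute_step(x_imp, y_imp, miss_x)
    y_imp = impute_step(y_imp, x_imp, miss_y)

print(np.isnan(x_imp).sum(), np.isnan(y_imp).sum())
```

    Reversing the update order (y before x) is the kind of permutation under which the paper's order effects can arise when the compatibility or non-informative-margins conditions fail.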

  13. Quantum cryptography approaching the classical limit.

    PubMed

    Weedbrook, Christian; Pirandola, Stefano; Lloyd, Seth; Ralph, Timothy C

    2010-09-10

    We consider the security of continuous-variable quantum cryptography as we approach the classical limit, i.e., when the unknown preparation noise at the sender's station becomes significantly noisy or thermal (even by as much as 10⁴ times greater than the variance of the vacuum mode). We show that, provided the channel transmission losses do not exceed 50%, the security of quantum cryptography is not dependent on the channel transmission, and is therefore incredibly robust against significant amounts of excess preparation noise. We extend these results to consider for the first time quantum cryptography at wavelengths considerably longer than optical and find that regions of security still exist all the way down to the microwave.

  14. Spontaneous density fluctuations in granular flow and traffic

    NASA Astrophysics Data System (ADS)

    Herrmann, Hans J.

    It is known that spontaneous density waves appear in granular material flowing through pipes or hoppers. A similar phenomenon is known from traffic jams on highways. Using numerical simulations we show that several types of waves exist and find that the density fluctuations follow a power law spectrum. We also investigate one-dimensional traffic models. If positions and velocities are continuous variables the model shows self-organized criticality driven by the slowest car. Lattice gas and lattice Boltzmann models reproduce the experimentally observed effects. Density waves are spontaneously generated when the viscosity has a non-linear dependence on density or shear rate as it is the case in traffic or granular flow.

  15. Factorial Comparison of Working Memory Models

    PubMed Central

    van den Berg, Ronald; Awh, Edward; Ma, Wei Ji

    2014-01-01

    Three questions have been prominent in the study of visual working memory limitations: (a) What is the nature of mnemonic precision (e.g., quantized or continuous)? (b) How many items are remembered? (c) To what extent do spatial binding errors account for working memory failures? Modeling studies have typically focused on comparing possible answers to a single one of these questions, even though the result of such a comparison might depend on the assumed answers to the other two questions. Here, we consider every possible combination of previously proposed answers to the individual questions. Each model is then a point in a 3-factor model space containing a total of 32 models, of which only 6 have been tested previously. We compare all models on data from 10 delayed-estimation experiments from 6 laboratories (for a total of 164 subjects and 131,452 trials). Consistently across experiments, we find that (a) mnemonic precision is not quantized but continuous and not equal but variable across items and trials; (b) the number of remembered items is likely to be variable across trials, with a mean of 6.4 in the best model (median across subjects); (c) spatial binding errors occur but explain only a small fraction of responses (16.5% at set size 8 in the best model). We find strong evidence against all 6 documented models. Our results demonstrate the value of factorial model comparison in working memory. PMID:24490791

  16. The relationship between happiness and intelligent quotient: the contribution of socio-economic and clinical factors.

    PubMed

    Ali, A; Ambler, G; Strydom, A; Rai, D; Cooper, C; McManus, S; Weich, S; Meltzer, H; Dein, S; Hassiotis, A

    2013-06-01

    Happiness and higher intelligent quotient (IQ) are independently related to positive health outcomes. However, there are inconsistent reports about the relationship between IQ and happiness. The aim was to examine the association between IQ and happiness and whether it is mediated by social and clinical factors. Method The authors analysed data from the 2007 Adult Psychiatric Morbidity Survey in England. The participants were adults aged 16 years or over, living in private households in 2007. Data from 6870 participants were included in the study. Happiness was measured using a validated question on a three-point scale. Verbal IQ was estimated using the National Adult Reading Test and both categorical and continuous IQ was analysed. Happiness is significantly associated with IQ. Those in the lowest IQ range (70-99) reported the lowest levels of happiness compared with the highest IQ group (120-129). Mediation analysis using the continuous IQ variable found dependency in activities of daily living, income, health and neurotic symptoms were strong mediators of the relationship, as they reduced the association between happiness and IQ by 50%. Those with lower IQ are less happy than those with higher IQ. Interventions that target modifiable variables such as income (e.g. through enhancing education and employment opportunities) and neurotic symptoms (e.g. through better detection of mental health problems) may improve levels of happiness in the lower IQ groups.
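    The mediation logic described, comparing the IQ coefficient before and after adjusting for mediators such as income, can be sketched on simulated data. All values below are hypothetical stand-ins, not data from the survey; the 50% reduction is built into the simulation to mirror the reported effect size.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: income mediates part of the IQ-happiness association.
n = 5000
iq = rng.normal(100.0, 15.0, size=n)
income = 0.5 * iq + rng.normal(scale=10.0, size=n)   # mediator depends on IQ
happiness = 0.02 * iq + 0.04 * income + rng.normal(size=n)

def first_coef(y, *predictors):
    # Ordinary least squares; returns the coefficient on the first predictor.
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(b[1])

total = first_coef(happiness, iq)            # total association
direct = first_coef(happiness, iq, income)   # adjusted for the mediator
print(f"reduction: {1 - direct / total:.0%}")
```

    A reduction of the IQ coefficient of roughly half after adding the mediator corresponds to the 50% attenuation reported in the abstract.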

  17. Estimating mutual information using B-spline functions – an improved similarity measure for analysing gene expression data

    PubMed Central

    Daub, Carsten O; Steuer, Ralf; Selbig, Joachim; Kloska, Sebastian

    2004-01-01

    Background The information theoretic concept of mutual information provides a general framework to evaluate dependencies between variables. In the context of the clustering of genes with similar patterns of expression it has been suggested as a general quantity of similarity to extend commonly used linear measures. Since mutual information is defined in terms of discrete variables, its application to continuous data requires the use of binning procedures, which can lead to significant numerical errors for datasets of small or moderate size. Results In this work, we propose a method for the numerical estimation of mutual information from continuous data. We investigate the characteristic properties arising from the application of our algorithm and show that our approach outperforms commonly used algorithms: The significance, as a measure of the power of distinction from random correlation, is significantly increased. This concept is subsequently illustrated on two large-scale gene expression datasets and the results are compared to those obtained using other similarity measures. A C++ source code of our algorithm is available for non-commercial use from kloska@scienion.de upon request. Conclusion The utilisation of mutual information as similarity measure enables the detection of non-linear correlations in gene expression datasets. Frequently applied linear correlation measures, which are often used on an ad-hoc basis without further justification, are thereby extended. PMID:15339346
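    The plain histogram-binning estimator that the authors improve upon can be sketched as follows; the B-spline (fuzzy-binning) generalisation itself is not reproduced, and the bin count and sample sizes here are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def mutual_information(x, y, bins=10):
    """Naive histogram-based mutual information estimate (in nats).

    This is the simple binning estimator for continuous data; it carries
    a positive bias for small samples, which motivates B-spline binning.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of x
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of y
    nz = p_xy > 0
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())

n = 20000
x = rng.normal(size=n)
y_dep = x**2 + 0.1 * rng.normal(size=n)     # non-linear dependence on x
y_ind = rng.normal(size=n)                  # independent of x

print(mutual_information(x, y_dep) > mutual_information(x, y_ind))
```

    The non-linear pair illustrates the point made in the conclusion: a linear correlation measure would score y = x² near zero, while mutual information detects the dependency.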

  18. Quantum communication with coherent states of light

    NASA Astrophysics Data System (ADS)

    Khan, Imran; Elser, Dominique; Dirmeier, Thomas; Marquardt, Christoph; Leuchs, Gerd

    2017-06-01

    Quantum communication offers long-term security especially, but not only, relevant to government and industrial users. It is worth noting that, for the first time in the history of cryptographic encoding, we are currently in the situation that secure communication can be based on the fundamental laws of physics (information theoretical security) rather than on algorithmic security relying on the complexity of algorithms, which is periodically endangered as standard computer technology advances. On a fundamental level, the security of quantum key distribution (QKD) relies on the non-orthogonality of the quantum states used. So even coherent states are well suited for this task, the quantum states that largely describe the light generated by laser systems. Depending on whether one uses detectors resolving single or multiple photon states or detectors measuring the field quadratures, one speaks of, respectively, a discrete- or a continuous-variable description. Continuous-variable QKD with coherent states uses a technology that is very similar to the one employed in classical coherent communication systems, the backbone of today's Internet connections. Here, we review recent developments in this field in two connected regimes: (i) improving QKD equipment by implementing front-end telecom devices and (ii) research into satellite QKD for bridging long distances by building upon existing optical satellite links. This article is part of the themed issue 'Quantum technology for the 21st century'.

  19. Quantum communication with coherent states of light.

    PubMed

    Khan, Imran; Elser, Dominique; Dirmeier, Thomas; Marquardt, Christoph; Leuchs, Gerd

    2017-08-06

    Quantum communication offers long-term security especially, but not only, relevant to government and industrial users. It is worth noting that, for the first time in the history of cryptographic encoding, we are currently in the situation that secure communication can be based on the fundamental laws of physics (information theoretical security) rather than on algorithmic security relying on the complexity of algorithms, which is periodically endangered as standard computer technology advances. On a fundamental level, the security of quantum key distribution (QKD) relies on the non-orthogonality of the quantum states used. So even coherent states are well suited for this task, the quantum states that largely describe the light generated by laser systems. Depending on whether one uses detectors resolving single or multiple photon states or detectors measuring the field quadratures, one speaks of, respectively, a discrete- or a continuous-variable description. Continuous-variable QKD with coherent states uses a technology that is very similar to the one employed in classical coherent communication systems, the backbone of today's Internet connections. Here, we review recent developments in this field in two connected regimes: (i) improving QKD equipment by implementing front-end telecom devices and (ii) research into satellite QKD for bridging long distances by building upon existing optical satellite links. This article is part of the themed issue 'Quantum technology for the 21st century'. © 2017 The Author(s).

  20. Cortical Components of Reaction-Time during Perceptual Decisions in Humans.

    PubMed

    Dmochowski, Jacek P; Norcia, Anthony M

    2015-01-01

    The mechanisms of perceptual decision-making are frequently studied through measurements of reaction time (RT). Classical sequential-sampling models (SSMs) of decision-making posit RT as the sum of non-overlapping sensory, evidence accumulation, and motor delays. In contrast, recent empirical evidence hints at a continuous-flow paradigm in which multiple motor plans evolve concurrently with the accumulation of sensory evidence. Here we employ a trial-to-trial reliability-based component analysis of encephalographic data acquired during a random-dot motion task to directly image continuous flow in the human brain. We identify three topographically distinct neural sources whose dynamics exhibit contemporaneous ramping to time-of-response, with the rate and duration of ramping discriminating fast and slow responses. Only one of these sources, a parietal component, exhibits dependence on strength-of-evidence. The remaining two components possess topographies consistent with origins in the motor system, and their covariation with RT overlaps in time with the evidence accumulation process. After fitting the behavioral data to a popular SSM, we find that the model decision variable is more closely matched to the combined activity of the three components than to their individual activity. Our results emphasize the role of motor variability in shaping RT distributions on perceptual decision tasks, suggesting that physiologically plausible computational accounts of perceptual decision-making must model the concurrent nature of evidence accumulation and motor planning.

  1. Resilient Brain Aging: Characterization of Discordance between Alzheimer’s Disease Pathology and Cognition

    PubMed Central

    Negash, Selam; Wilson, Robert S.; Leurgans, Sue E.; Wolk, David A.; Schneider, Julie A.; Buchman, Aron S.; Bennett, David A.; Arnold, Steven E.

    2014-01-01

    Background Although it is now evident that normal cognition can occur despite significant AD pathology, few studies have attempted to characterize this discordance, or examine factors that may contribute to resilient brain aging in the setting of AD pathology. Methods More than 2,000 older persons underwent annual evaluation as part of participation in the Religious Orders Study or the Rush Memory and Aging Project. A total of 966 subjects who had brain autopsy and comprehensive cognitive testing proximate to death were analyzed. Resilience was quantified as a continuous measure using linear regression modeling, where global cognition was entered as a dependent variable and global pathology was an independent variable. Studentized residuals generated from the model represented the discordance between cognition and pathology, and served as the measure of resilience. The relation of the resilience index to known risk factors for AD and related variables was examined. Results Multivariate regression models that adjusted for demographic variables revealed significant associations for early life socioeconomic status, reading ability, APOE-ε4 status, and past cognitive activity. A stepwise regression model retained reading level (estimate = 0.10, SE = 0.02; p < 0.0001) and past cognitive activity (estimate = 0.27, SE = 0.09; p = 0.002), suggesting the potential mediating role of these variables for resilience. Conclusions The construct of resilient brain aging can provide a framework for quantifying the discordance between cognition and pathology, and help identify factors that may mediate this relationship. PMID:23919768
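    The resilience index described, a studentized residual from the regression of global cognition on global pathology, can be sketched on simulated data. Variable scales and coefficients below are hypothetical, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-ins: global cognition declines with global AD pathology.
n = 200
pathology = rng.uniform(0.0, 3.0, size=n)
cognition = -0.5 * pathology + rng.normal(scale=0.4, size=n)

# Ordinary least squares fit of cognition ~ pathology.
X = np.column_stack([np.ones(n), pathology])
beta, *_ = np.linalg.lstsq(X, cognition, rcond=None)
resid = cognition - X @ beta

# Internally studentized residuals: residual / (s * sqrt(1 - leverage)).
leverage = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)
s = np.sqrt(resid @ resid / (n - X.shape[1]))
resilience = resid / (s * np.sqrt(1.0 - leverage))

# Positive values flag subjects whose cognition is better than their
# pathology burden alone predicts, i.e. the resilient cases.
print(resilience.shape)
```

    The studentization step puts the residuals on a common scale, so the index can be compared across subjects and regressed on candidate protective factors.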

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Mark S.; Son Wonmin; Heaney, Libby

    Recently, it was demonstrated by Son et al., Phys. Rev. Lett. 102, 110404 (2009), that a separable bipartite continuous-variable quantum system can violate the Clauser-Horne-Shimony-Holt (CHSH) inequality via operationally local transformations. Operationally local transformations are parametrized only by local variables; however, in order to allow violation of the CHSH inequality, a maximally entangled ancilla was necessary. The use of the entangled ancilla in this scheme caused the state under test to become dependent on the measurement choice one uses to calculate the CHSH inequality, thus violating one of the assumptions used in deriving a Bell inequality, namely, the free will or statistical independence assumption. The novelty in this scheme however is that the measurement settings can be external free parameters. In this paper, we generalize these operationally local transformations for multipartite Bell inequalities (with dichotomic observables) and provide necessary and sufficient conditions for violation within this scheme. Namely, a violation of a multipartite Bell inequality in this setting is contingent on whether an ancillary system admits any realistic local hidden variable model (i.e., whether the ancilla violates the given Bell inequality). These results indicate that violation of a Bell inequality performed on a system does not necessarily imply that the system is nonlocal. In fact, the system under test may be completely classical. However, nonlocality must have resided somewhere; this may have been in the environment, the physical variables used to manipulate the system, or the detectors themselves, provided the measurement settings are external free variables.

  3. A formal method for identifying distinct states of variability in time-varying sources: SGR A* as an example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, L.; Witzel, G.; Ghez, A. M.

    2014-08-10

    Continuously time-variable sources are often characterized by their power spectral density and flux distribution. These quantities can undergo dramatic changes over time if the underlying physical processes change. However, some changes can be subtle and not distinguishable using standard statistical approaches. Here, we report a methodology that aims to identify distinct but similar states of time variability. We apply this method to the Galactic supermassive black hole, where 2.2 μm flux is observed from a source associated with Sgr A* and where two distinct states have recently been suggested. Our approach is taken from mathematical finance and works with conditional flux density distributions that depend on the previous flux value. The discrete, unobserved (hidden) state variable is modeled as a stochastic process and the transition probabilities are inferred from the flux density time series. Using the most comprehensive data set to date, in which all Keck and a majority of the publicly available Very Large Telescope data have been merged, we show that Sgr A* is sufficiently described by a single intrinsic state. However, the observed flux densities exhibit two states: noise dominated and source dominated. The methodology reported here will prove extremely useful to assess the effects of the putative gas cloud G2 that is on its way toward the black hole and might create a new state of variability.

  4. Copula Models for Sociology: Measures of Dependence and Probabilities for Joint Distributions

    ERIC Educational Resources Information Center

    Vuolo, Mike

    2017-01-01

    Often in sociology, researchers are confronted with nonnormal variables whose joint distribution they wish to explore. Yet, assumptions of common measures of dependence can fail or estimating such dependence is computationally intensive. This article presents the copula method for modeling the joint distribution of two random variables, including…
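
As a concrete illustration of the rank-based dependence that copula methods build on (a sketch under the no-ties assumption, not code from the article): Kendall's tau can be computed from concordant and discordant pairs, and for a Gaussian copula the correlation parameter relates to tau by the known identity ρ = sin(πτ/2).

```python
import math
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: rank-based dependence measure (assumes no ties)."""
    n = len(x)
    # +1 for each concordant pair, -1 for each discordant pair.
    score = sum(
        1 if (xi - xj) * (yi - yj) > 0 else -1
        for (xi, yi), (xj, yj) in combinations(zip(x, y), 2)
    )
    return score / (n * (n - 1) / 2)

def gaussian_copula_rho(tau):
    """Map Kendall's tau to the Gaussian-copula correlation: rho = sin(pi*tau/2)."""
    return math.sin(math.pi * tau / 2)
```

Because tau depends only on ranks, it is well defined for the nonnormal marginals the abstract mentions; the mapping to ρ then parametrizes the joint distribution through the copula rather than through the margins.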

  5. Mating tactics determine patterns of condition dependence in a dimorphic horned beetle.

    PubMed

    Knell, Robert J; Simmons, Leigh W

    2010-08-07

    The persistence of genetic variability in performance traits such as strength is surprising given the directional selection that such traits experience, which should cause the fixation of the best genetic variants. One possible explanation is 'genic capture', which is usually considered as a candidate mechanism for the maintenance of high genetic variability in sexual signalling traits. This states that if a trait is 'condition dependent', with expression being strongly influenced by the bearer's overall viability, then genetic variability can be maintained via mutation-selection balance. Using a species of dimorphic beetle with males that gain matings either by fighting or by 'sneaking', we tested the prediction of strong condition dependence for strength, walking speed and testes mass. Strength was strongly condition dependent only in those beetles that fight for access to females. Walking speed, with less of an obvious selective advantage, showed no condition dependence, and testes mass was more condition dependent in sneaks, which engage in higher levels of sperm competition. Within a species, therefore, condition-dependent expression varies between morphs and corresponds to the specific selection pressures experienced by that morph. These results support genic capture as a general explanation for the maintenance of genetic variability in traits under directional selection.

  6. Occupancy in continuous habitat

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.

    2012-01-01

    The probability that a site has at least one individual of a species ('occupancy') has come to be widely used as a state variable for animal population monitoring. The available statistical theory for estimation when detection is imperfect applies particularly to habitat patches or islands, although it is also used for arbitrary plots in continuous habitat. The probability that such a plot is occupied depends on plot size and home-range characteristics (size, shape and dispersion) as well as population density. Plot size is critical to the definition of occupancy as a state variable, but clear advice on plot size is missing from the literature on the design of occupancy studies. We describe models for the effects of varying plot size and home-range size on expected occupancy. Temporal, spatial, and species variation in average home-range size is to be expected, but information on home ranges is difficult to retrieve from species presence/absence data collected in occupancy studies. The effect of variable home-range size is negligible when plots are very large (>100 x area of home range), but large plots pose practical problems. At the other extreme, sampling of 'point' plots with cameras or other passive detectors allows the true 'proportion of area occupied' to be estimated. However, this measure equally reflects home-range size and density, and is of doubtful value for population monitoring or cross-species comparisons. Plot size is ill-defined and variable in occupancy studies that detect animals at unknown distances, the commonest example being unlimited-radius point counts of song birds. We also find that plot size is ill-defined in recent treatments of "multi-scale" occupancy; the respective scales are better interpreted as temporal (instantaneous and asymptotic) rather than spatial. Occupancy is an inadequate metric for population monitoring when it is confounded with home-range size or detection distance.
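
The dependence of occupancy on plot size and home-range size can be illustrated with a simple model that is an assumption of this sketch, not the authors' exact formulation: home-range centres follow a Poisson process with density D, home ranges are circles of radius R, and a circular plot of radius r is occupied whenever at least one centre falls within r + R of the plot centre.

```python
import math

def expected_occupancy(density, plot_radius, hr_radius):
    """P(plot occupied) = 1 - exp(-D * A_eff).

    A_eff is the plot buffered by the home-range radius: any home-range
    centre within (plot_radius + hr_radius) of the plot centre overlaps
    the plot, and centres are assumed Poisson with the given density.
    """
    a_eff = math.pi * (plot_radius + hr_radius) ** 2
    return 1.0 - math.exp(-density * a_eff)
```

For a 'point' plot (r = 0) this reduces to 1 − exp(−DπR²), which mixes density with home-range size, echoing the abstract's point that the resulting measure confounds the two.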

  7. Permissive Attitude Towards Drug Use, Life Satisfaction, and Continuous Drug Use Among Psychoactive Drug Users in Hong Kong.

    PubMed

    Cheung, N Wt; Cheung, Y W; Chen, X

    2016-06-01

    To examine the effects of a permissive attitude towards regular and occasional drug use, life satisfaction, self-esteem, depression, and other psychosocial variables on the drug use of psychoactive drug users. Psychosocial factors that might affect a permissive attitude towards regular/occasional drug use and life satisfaction were further explored. We analysed data on a sample of psychoactive drug users from a longitudinal survey of psychoactive drug abusers in Hong Kong who were interviewed at 6 time points at 6-month intervals between January 2009 and December 2011. Data from the second to the sixth time points were stacked into an individual time-point structure. Random-effects probit regression analysis was performed to estimate the relative contribution of the independent variables to the binary dependent variable of drug use in the last 30 days. A permissive attitude towards drug use, life satisfaction, and depression at the concurrent time point, and self-esteem at the previous time point, had direct effects on drug use in the last 30 days. Interestingly, permissiveness towards occasional drug use was a stronger predictor of drug use than permissiveness towards regular drug use. These 2 permissive attitude variables were affected by the belief that doing extreme things shows the vitality of young people (at the concurrent time point), life satisfaction (at the concurrent time point), and self-esteem (at the concurrent and previous time points). Life satisfaction was affected by a sense of uncertainty about the future (at the concurrent time point), self-esteem (at the concurrent time point), depression (at both the concurrent and previous time points), and being stricken by stressful events (at the previous time point). A number of psychosocial factors could affect the continuation or discontinuation of drug use, as well as the permissive attitude towards regular and occasional drug use, and life satisfaction. Implications of the findings for prevention and intervention work targeted at psychoactive drug users are discussed.

  8. Comparing Public and Private Institutions That Have and Have Not Implemented Enterprise Resource Planning (ERP) Systems: A Resource Dependence Perspective

    ERIC Educational Resources Information Center

    Sendhil, Geetha R.

    2012-01-01

    The purpose of this national study was to utilize quantitative methods to examine institutional characteristics, financial resource variables, personnel variables, and customer variables of public and private institutions that have and have not implemented enterprise resource planning (ERP) systems, from a resource dependence perspective.…

  9. Correlation spectrometer for filtering of (quasi) elastic neutron scattering with variable resolution

    NASA Astrophysics Data System (ADS)

    Magazù, Salvatore; Mezei, Ferenc; Migliardo, Federica

    2018-05-01

    In a variety of applications of inelastic neutron scattering spectroscopy, the goal is to single out the elastic scattering contribution from the total scattered spectrum as a function of momentum transfer and sample environment parameters. The elastic part of the spectrum is defined in such a case by the energy resolution of the spectrometer. Variable elastic energy resolution offers a way to distinguish between elastic and quasi-elastic intensities. Correlation spectroscopy lends itself as an efficient, high-intensity approach for accomplishing this at both continuous and pulsed neutron sources. On the one hand, in beam modulation methods the Liouville-theorem coupling between intensity and resolution is relaxed, and time-of-flight analysis of the neutron velocity distribution can be performed with 50% duty-factor exposure for all available resolutions. On the other hand, the (quasi)elastic part of the spectrum generally contains the major part of the integrated intensity at a given detector, and thus correlation spectroscopy can be applied with the most favorable ratio of signal to statistical noise. The novel spectrometer CORELLI at SNS is an example of this type of application of the correlation technique at a pulsed source. On a continuous neutron source, a statistical chopper can be used for quasi-random time-dependent beam modulation, and the total time-of-flight of the neutron from the statistical chopper to detection is determined by analysing the correlation between the temporal fluctuation of the neutron detection rate and the statistical chopper beam modulation pattern. The correlation analysis can be used to determine either the incoming or the scattered neutron velocity, depending on the position of the statistical chopper along the neutron trajectory.
    These two options are considered together with an evaluation of spectrometer performance compared to conventional spectroscopy, in particular for variable resolution elastic neutron scattering (RENS) studies of relaxation processes and the evolution of mean square displacements. A particular focus of our analysis is the unique feature of correlation spectroscopy of delivering high, resolution-independent beam intensity: the same statistical-chopper scan contains both high-intensity and high-resolution information and can be evaluated both ways. This flexibility in data handling represents an additional asset for correlation spectroscopy in variable resolution work. Changing the beam width for the same statistical chopper additionally allows resolution to be traded for intensity in two different experimental runs, as in conventional single-slit chopper spectroscopy. The combination of these two approaches is a capability of particular value in neutron spectroscopy studies requiring variable energy resolution, such as the systematic study of quasi-elastic scattering and mean square displacement. Furthermore, the statistical chopper approach is particularly advantageous for studying samples with low scattering intensity in the presence of a high, sample-independent background.

  10. QKD Via a Quantum Wavelength Router Using Spatial Soliton

    NASA Astrophysics Data System (ADS)

    Kouhnavard, M.; Amiri, I. S.; Afroozeh, A.; Jalil, M. A.; Ali, J.; Yupapin, P. P.

    2011-05-01

    A system for continuous-variable quantum key distribution via a wavelength router is proposed. The Kerr-type light in the nonlinear microring resonator (NMRR) induces chaotic behavior. In the proposed system, chaotic signals are generated by an optical soliton or Gaussian pulse within an NMRR system. Parameters such as input power, MRR radii, and coupling coefficients can be varied and play an important role in determining the results, in which continuous signals are generated that spread over the spectrum. Large-bandwidth optical soliton signals are generated by the input pulse propagating within the MRRs, which is allowed to form the continuous wavelength or frequency with large tunable channel capacity. The continuous-variable QKD is formed by using the localized spatial soliton pulses via a quantum router and networks. The selected optical spatial pulse can be used to perform secure communication over the network. Here the entangled photons generated by chaotic signals have been analyzed. The continuous entangled photon is generated by using the polarization control unit incorporated into the MRRs, required to provide the continuous-variable QKD. The results obtained show that such a system for simultaneous continuous-variable quantum cryptography can be used in mobile telephone handsets and networks. In this study, frequency bands of 500 MHz and 2.0 GHz and wavelengths of 775 nm, 2,325 nm, and 1.55 μm can be obtained for QKD use with input optical soliton and Gaussian beams, respectively.

  11. Complementary system for long term measurements of radon exhalation rate from soil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazur, J.; Kozak, K., E-mail: Krzysztof.Kozak@ifj.edu.pl

    A special set-up for continuous measurements of radon exhalation rate from soil is presented. It was constructed at the Laboratory of Radiometric Expertise, Institute of Nuclear Physics Polish Academy of Sciences (IFJ PAN), Krakow, Poland. Radon exhalation rate was determined using the AlphaGUARD PQ2000 PRO (Genitron) radon monitor together with a special accumulation container which was put on the soil surface during the measurement. A special automatic device was built and used to raise the accumulation container and lower it back onto the ground. The time of raising and putting down the container was controlled by an electronic timer. This set-up made it possible to perform 4–6 automatic measurements a day. In addition, some soil and meteorological parameters were continuously monitored. In this way, the diurnal and seasonal variability of radon exhalation rate from soil can be studied, as well as its dependence on soil properties and meteorological conditions.

  12. A Polychoric Instrumental Variable (PIV) Estimator for Structural Equation Models with Categorical Variables

    ERIC Educational Resources Information Center

    Bollen, Kenneth A.; Maydeu-Olivares, Albert

    2007-01-01

    This paper presents a new polychoric instrumental variable (PIV) estimator to use in structural equation models (SEMs) with categorical observed variables. The PIV estimator is a generalization of Bollen's (Psychometrika 61:109-121, 1996) 2SLS/IV estimator for continuous variables to categorical endogenous variables. We derive the PIV estimator…

  13. A hazard rate analysis of fertility using duration data from Malaysia.

    PubMed

    Chang, C

    1988-01-01

    Data from the Malaysia Fertility and Family Planning Survey (MFLS) of 1974 were used to investigate the effects of biological and socioeconomic variables on fertility based on the hazard rate model. Another study objective was to investigate the robustness of the findings of Trussell et al. (1985) by comparing the findings of this study with theirs. The hazard rate of conception for the jth fecundable spell of the ith woman, hij, is determined by duration dependence, tij, measured by the waiting time to conception; unmeasured heterogeneity, HETi; the time-invariant variables, Yi (race, cohort, education, age at marriage); and the time-varying variables, Xij (age, parity, opportunity cost, income, child mortality, child sex composition). In this study, all the time-varying variables were constant over a spell. An asymptotic χ² test for the equality of constant hazard rates across birth orders, allowing for time-invariant variables and heterogeneity, showed the importance of time-varying variables and duration dependence. Under the assumption of fixed-effects heterogeneity and a Weibull distribution for the duration of waiting time to conception, the empirical results revealed a negative parity effect, a negative impact of male children, and a positive effect of child mortality on the hazard rate of conception. The estimates of step functions for the hazard rate of conception showed parity-dependent fertility control, evidence of heterogeneity, and the possibility of nonmonotonic duration dependence. In a hazard rate model with piecewise-linear-segment duration dependence, socioeconomic variables such as cohort, child mortality, income, and race had significant effects, after controlling for the length of the preceding birth interval. The duration dependence was consistent with the common finding, i.e., first increasing and then decreasing at a slow rate. The effects of education and opportunity cost on fertility were insignificant.
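
The Weibull hazard with covariates that the abstract assumes can be written h(t) = (p/λ)(t/λ)^(p−1)·exp(Xβ). The sketch below is illustrative only; the paper's full model also includes heterogeneity terms and piecewise-linear duration dependence, which are omitted here.

```python
import math

def weibull_hazard(t, shape, scale, beta, x):
    """Proportional-hazards Weibull rate:
    h(t) = (shape/scale) * (t/scale)**(shape - 1) * exp(beta . x).

    shape > 1 gives a rising hazard with duration, shape < 1 a falling one;
    beta and x are matched lists of coefficients and covariate values.
    """
    linear_predictor = sum(b * xi for b, xi in zip(beta, x))
    return (shape / scale) * (t / scale) ** (shape - 1) * math.exp(linear_predictor)
```

With shape = 1 the baseline hazard is constant, so the covariates simply scale a constant conception rate; the χ² test described above is essentially a test against that constant-rate special case.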

  14. Spatio-temporal variability of the North Sea cod recruitment in relation to temperature and zooplankton.

    PubMed

    Nicolas, Delphine; Rochette, Sébastien; Llope, Marcos; Licandro, Priscilla

    2014-01-01

    The North Sea cod (Gadus morhua, L.) stock has continuously declined over the past four decades, linked with overfishing and climate change. Changes in stock structure due to overfishing have made the stock largely dependent on its recruitment success, which greatly relies on environmental conditions. Here we focus on the spatio-temporal variability of cod recruitment in an effort to detect changes during the critical early life stages. Using International Bottom Trawl Survey (IBTS) data from 1974 to 2011, a major spatio-temporal change in the distribution of cod recruits was identified in the late 1990s, characterized by a pronounced decrease in the central and southeastern North Sea stock. Other minor spatial changes were also recorded in the mid-1980s and early 1990s. We tested whether the observed changes in recruit distribution could be related to direct (i.e. temperature) and/or indirect (i.e. changes in the quantity and quality of zooplankton prey) effects of climate variability. The analyses were based on spatially resolved time series, i.e. sea surface temperature (SST) from the Hadley Centre and zooplankton records from the Continuous Plankton Recorder Survey. We showed that the spring SST increase was the main driver of the most recent decrease in cod recruitment. The late 1990s were also characterized by relatively low total zooplankton biomass, particularly of energy-rich zooplankton such as the copepod Calanus finmarchicus, which have further contributed to the decline of North Sea cod recruitment. Long-term spatially resolved observations were used to produce regional distribution models that could further be used to predict the abundance of North Sea cod recruits based on temperature and zooplankton food availability.

  15. Modeling of the effect of freezer conditions on the principal constituent parameters of ice cream by using response surface methodology.

    PubMed

    Inoue, K; Ochi, H; Taketsuka, M; Saito, H; Sakurai, K; Ichihashi, N; Iwatsuki, K; Kokubo, S

    2008-05-01

    A systematic analysis was carried out by using response surface methodology to create a quantitative model of the synergistic effects of conditions in a continuous freezer [mix flow rate (L/h), overrun (%), cylinder pressure (kPa), drawing temperature (°C), and dasher speed (rpm)] on the principal constituent parameters of ice cream [rate of fat destabilization (%), mean air cell diameter (μm), and mean ice crystal diameter (μm)]. A central composite face-centered design was used for this study. Thirty-one combinations of the 5 above-mentioned freezer conditions were designed (including replicates at the center point), and ice cream samples were manufactured and examined in a continuous freezer under the selected conditions. The responses were the 3 variables given above. A quadratic model was constructed, with the freezer conditions as the independent variables and the ice cream characteristics as the dependent variables. The coefficients of determination (R²) were greater than 0.9 for all 3 responses, but Q², the index used here for the capability of the model for predicting future observed values of the responses, was negative for both the mean ice crystal diameter and the mean air cell diameter. Therefore, pruned models were constructed by removing terms that had contributed little to the prediction in the original model and by refitting the regression model. It was demonstrated that these pruned models provided good fits to the data in terms of R², Q², and ANOVA. The effects of freezer conditions were expressed quantitatively in terms of the 3 responses. The drawing temperature (°C) was found to have a greater effect on ice cream characteristics than any of the other factors.
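
The quadratic (second-order) response-surface model fitted in such studies has the generic form y = b0 + Σ bi·xi + Σ bii·xi² + Σ bij·xi·xj. The sketch below just evaluates a model of this form; all coefficient values in the test are placeholders, not the ones estimated in the study.

```python
from itertools import combinations

def quadratic_response(x, b0, lin, quad, cross):
    """Evaluate a second-order response-surface model:
    y = b0 + sum(b_i * x_i) + sum(b_ii * x_i**2) + sum(b_ij * x_i * x_j).

    lin and quad hold one coefficient per factor; cross holds one
    coefficient per unordered factor pair, in combinations() order.
    """
    y = b0
    y += sum(b * xi for b, xi in zip(lin, x))              # linear terms
    y += sum(b * xi ** 2 for b, xi in zip(quad, x))        # pure quadratic terms
    y += sum(b * x[i] * x[j]                               # two-factor interactions
             for b, (i, j) in zip(cross, combinations(range(len(x)), 2)))
    return y
```

Pruning, as described in the abstract, amounts to setting the coefficients of uninformative terms to zero and refitting the remainder.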

  16. Spatio-Temporal Variability of the North Sea Cod Recruitment in Relation to Temperature and Zooplankton

    PubMed Central

    Nicolas, Delphine; Rochette, Sébastien; Llope, Marcos; Licandro, Priscilla

    2014-01-01

    The North Sea cod (Gadus morhua, L.) stock has continuously declined over the past four decades, linked with overfishing and climate change. Changes in stock structure due to overfishing have made the stock largely dependent on its recruitment success, which greatly relies on environmental conditions. Here we focus on the spatio-temporal variability of cod recruitment in an effort to detect changes during the critical early life stages. Using International Bottom Trawl Survey (IBTS) data from 1974 to 2011, a major spatio-temporal change in the distribution of cod recruits was identified in the late 1990s, characterized by a pronounced decrease in the central and southeastern North Sea stock. Other minor spatial changes were also recorded in the mid-1980s and early 1990s. We tested whether the observed changes in recruit distribution could be related to direct (i.e. temperature) and/or indirect (i.e. changes in the quantity and quality of zooplankton prey) effects of climate variability. The analyses were based on spatially resolved time series, i.e. sea surface temperature (SST) from the Hadley Centre and zooplankton records from the Continuous Plankton Recorder Survey. We showed that the spring SST increase was the main driver of the most recent decrease in cod recruitment. The late 1990s were also characterized by relatively low total zooplankton biomass, particularly of energy-rich zooplankton such as the copepod Calanus finmarchicus, which have further contributed to the decline of North Sea cod recruitment. Long-term spatially resolved observations were used to produce regional distribution models that could further be used to predict the abundance of North Sea cod recruits based on temperature and zooplankton food availability. PMID:24551103

  17. The nature and use of prediction skills in a biological computer simulation

    NASA Astrophysics Data System (ADS)

    Lavoie, Derrick R.; Good, Ron

    The primary goal of this study was to examine the science process skill of prediction using qualitative research methodology. The think-aloud interview, modeled after Ericsson and Simon (1984), led to the identification of 63 program exploration and prediction behaviors. The performance of seven formal and seven concrete operational high-school biology students was videotaped during a three-phase learning sequence on water pollution. Subjects explored the effects of five independent variables on two dependent variables over time using a computer-simulation program. Predictions were made concerning the effect of the independent variables upon the dependent variables through time. Subjects were identified according to initial knowledge of the subject matter and success at solving three selected prediction problems. Successful predictors generally had high initial knowledge of the subject matter and were formal operational. Unsuccessful predictors generally had low initial knowledge and were concrete operational. High initial knowledge seemed to be more important to predictive success than stage of Piagetian cognitive development. Successful prediction behaviors involved systematic manipulation of the independent variables, note taking, identification and use of appropriate independent-dependent variable relationships, high interest and motivation, and, in general, higher-level thinking skills. Behaviors characteristic of unsuccessful predictors were nonsystematic manipulation of independent variables, lack of motivation and persistence, misconceptions, and the identification and use of inappropriate independent-dependent variable relationships.

  18. Kendall-Theil Robust Line (KTRLine--version 1.0)-A Visual Basic Program for Calculating and Graphing Robust Nonparametric Estimates of Linear-Regression Coefficients Between Two Continuous Variables

    USGS Publications Warehouse

    Granato, Gregory E.

    2006-01-01

    The Kendall-Theil Robust Line software (KTRLine-version 1.0) is a Visual Basic program that may be used with the Microsoft Windows operating system to calculate parameters for robust, nonparametric estimates of linear-regression coefficients between two continuous variables. The KTRLine software was developed by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, for use in stochastic data modeling with local, regional, and national hydrologic data sets to develop planning-level estimates of potential effects of highway runoff on the quality of receiving waters. The Kendall-Theil robust line was selected because this robust nonparametric method is resistant to the effects of outliers and nonnormality in residuals that commonly characterize hydrologic data sets. The slope of the line is calculated as the median of all possible pairwise slopes between points. The intercept is calculated so that the line will run through the median of input data. A single-line model or a multisegment model may be specified. The program was developed to provide regression equations with an error component for stochastic data generation because nonparametric multisegment regression tools are not available with the software that is commonly used to develop regression models. The Kendall-Theil robust line is a median line and, therefore, may underestimate total mass, volume, or loads unless the error component or a bias correction factor is incorporated into the estimate. Regression statistics such as the median error, the median absolute deviation, the prediction error sum of squares, the root mean square error, the confidence interval for the slope, and the bias correction factor for median estimates are calculated by use of nonparametric methods. These statistics, however, may be used to formulate estimates of mass, volume, or total loads. 
The program is used to read a two- or three-column tab-delimited input file with variable names in the first row and data in subsequent rows. The user may choose the columns that contain the independent (X) and dependent (Y) variable. A third column, if present, may contain metadata such as the sample-collection location and date. The program screens the input files and plots the data. The KTRLine software is a graphical tool that facilitates development of regression models by use of graphs of the regression line with data, the regression residuals (with X or Y), and percentile plots of the cumulative frequency of the X variable, Y variable, and the regression residuals. The user may individually transform the independent and dependent variables to reduce heteroscedasticity and to linearize data. The program plots the data and the regression line. The program also prints model specifications and regression statistics to the screen. The user may save and print the regression results. The program can accept data sets that contain up to about 15,000 XY data points, but because the program must sort the array of all pairwise slopes, the program may be perceptibly slow with data sets that contain more than about 1,000 points.
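
The slope and intercept rules described above are straightforward to sketch. This is a simplified single-segment version for illustration, not the KTRLine program itself:

```python
import statistics
from itertools import combinations

def kendall_theil_line(x, y):
    """Kendall-Theil robust line.

    Slope = median of all pairwise slopes between points; intercept chosen
    so the line runs through (median(x), median(y)), as described for KTRLine.
    Returns (intercept, slope).
    """
    slopes = [(yj - yi) / (xj - xi)
              for (xi, yi), (xj, yj) in combinations(zip(x, y), 2)
              if xj != xi]
    b = statistics.median(slopes)
    a = statistics.median(y) - b * statistics.median(x)
    return a, b
```

With data lying on y = 2x + 1 plus one gross outlier, the fit still returns intercept 1 and slope 2, illustrating the resistance to outliers that motivated the method's selection; note also the O(n²) pairwise-slope enumeration behind the slowness reported for large data sets.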

  19. Choice of Variables and Preconditioning for Time Dependent Problems

    NASA Technical Reports Server (NTRS)

    Turkel, Eli; Vatsa, Verr N.

    2003-01-01

    We consider the use of low speed preconditioning for time dependent problems. These are solved using a dual time step approach. We consider the effect of this dual time step on the parameter of the low speed preconditioning. In addition, we compare the use of two sets of variables, conservation and primitive variables, to solve the system. We show the effect of these choices on both the convergence to a steady state and the accuracy of the numerical solutions for low Mach number steady state and time dependent flows.

  20. Beyond total treatment effects in randomised controlled trials: Baseline measurement of intermediate outcomes needed to reduce confounding in mediation investigations.

    PubMed

    Landau, Sabine; Emsley, Richard; Dunn, Graham

    2018-06-01

    Random allocation avoids confounding bias when estimating the average treatment effect. For continuous outcomes measured at post-treatment as well as prior to randomisation (baseline), analyses based on (A) post-treatment outcome alone, (B) change scores over the treatment phase or (C) conditioning on baseline values (analysis of covariance) provide unbiased estimators of the average treatment effect. The decision to include baseline values of the clinical outcome in the analysis is based on precision arguments, with analysis of covariance known to be most precise. Investigators increasingly carry out explanatory analyses to decompose total treatment effects into components that are mediated by an intermediate continuous outcome and a non-mediated part. Traditional mediation analysis might be performed based on (A) post-treatment values of the intermediate and clinical outcomes alone, (B) respective change scores or (C) conditioning on baseline measures of both intermediate and clinical outcomes. Using causal diagrams and Monte Carlo simulation, we investigated the performance of the three competing mediation approaches. We considered a data generating model that included three possible confounding processes involving baseline variables: the first two processes modelled baseline measures of the clinical variable or the intermediate variable as common causes of post-treatment measures of these two variables, while the third process allowed the two baseline variables themselves to be correlated due to past common causes. We compared the analysis models implied by the competing mediation approaches with this data generating model to hypothesise likely biases in estimators, and tested these in a simulation study. We applied the methods to a randomised trial of pragmatic rehabilitation in patients with chronic fatigue syndrome, which examined the role of limiting activities as a mediator.
Estimates of causal mediation effects derived by approach (A) will be biased if one of the three processes involving baseline measures of intermediate or clinical outcomes is operating. Necessary assumptions for the change score approach (B) to provide unbiased estimates under either process include the independence of baseline measures and change scores of the intermediate variable. Finally, estimates provided by the analysis of covariance approach (C) were found to be unbiased under all the three processes considered here. When applied to the example, there was evidence of mediation under all methods but the estimate of the indirect effect depended on the approach used with the proportion mediated varying from 57% to 86%. Trialists planning mediation analyses should measure baseline values of putative mediators as well as of continuous clinical outcomes. An analysis of covariance approach is recommended to avoid potential biases due to confounding processes involving baseline measures of intermediate or clinical outcomes, and not simply for increased precision.

  1. Malnutrition among vaccinated children aged 0-5 years in Batouri, Republic of Cameroon.

    PubMed

    Nagahori, Chikako; Kinjo, Yoshihide; Tchuani, Jean Paul; Yamauchi, Taro

    2017-12-01

    Malnutrition continues to contribute to a high infant mortality rate. This study aimed to determine the prevalence of malnutrition and its potential association with the time at which complementary feeding was introduced among children aged 0-5 years in Batouri, Republic of Cameroon. Mothers (n=212) were interviewed using a structured questionnaire. Child height or length and weight were measured, and the appropriate Z-scores calculated. Multiple regression analysis was performed with the values of all nutritional status indicators as dependent variables and the time of commencing complementary feeding, and the child's age and sex, as independent variables. The prevalence of stunting (height/length for age <-2 standard deviations [SD]), underweight (weight for age <-2SD), and wasting (weight for height/length <-2SD) was 45.8%, 30.2%, and 11.3%, respectively. Even after taking the biological variables into consideration, the time of starting complementary foods had a significant effect on the nutritional status indicators. Furthermore, adding socio-economic variables did not raise the adjusted R² values for any of the age group models concerned. Approximately 30% of the children in the study region were underweight, and approximately half exhibited stunting, indicating chronic malnutrition. Commencing complementary feeding at an appropriate time had a positive effect on nutritional status from approximately 2 years of age.

  2. Prediction and Stability of Mathematics Skill and Difficulty

    PubMed Central

    Martin, Rebecca B.; Cirino, Paul T.; Barnes, Marcia A.; Ewing-Cobbs, Linda; Fuchs, Lynn S.; Stuebing, Karla K.; Fletcher, Jack M.

    2016-01-01

    The present study evaluated the stability of math learning difficulties over a 2-year period and investigated several factors that might influence this stability (categorical vs. continuous change, liberal vs. conservative cut point, broad vs. specific math assessment); the prediction of math performance over time and by performance level was also evaluated. Participants were 144 students initially identified as having a math difficulty (MD) or no learning difficulty according to low achievement criteria in the spring of Grade 3 or Grade 4. Students were reassessed 2 years later. For both measure types, a similar proportion of students changed whether assessed categorically or continuously. However, categorical change was heavily dependent on distance from the cut point and so more common for MD, who started closer to the cut point; reliable change index change was more similar across groups. There were few differences with regard to severity level of MD on continuous metrics or in terms of prediction. Final math performance on a broad computation measure was predicted by behavioral inattention and working memory while considering initial performance; for a specific fluency measure, working memory was not uniquely related, and behavioral inattention more variably related to final performance, again while considering initial performance. PMID:22392890

  3. Effects of auditory radio interference on a fine, continuous, open motor skill.

    PubMed

    Lazar, J M; Koceja, D M; Morris, H H

    1995-06-01

    The effects of human speech on a fine, continuous, and open motor skill were examined. A tape of human radio traffic was injected into a tank gunnery simulator during each training session over 4 wk. of training at 3 hr. a week. The dependent variables were identification time, fire time, kill time, systems errors, and acquisition errors, as measured by the Unit Conduct Of Fire Trainer (UCOFT). The interference was also injected into the UCOFT Tank Table VIII gunnery test. A Solomon four-group design was used, and a 2 x 2 analysis of variance assessed whether training under interference improved interference posttest scores. During the first three weeks of training, the interference group committed 106% more systems errors and 75% more acquisition errors than the standard group. The interference training condition was associated with a significant improvement of 44% in over-all UCOFT scores from pre- to posttest; standard training did not improve performance significantly over the same period. It was concluded that auditory radio interference degrades performance of this fine, continuous, open motor skill, and that interference training appears to abate the effects of this degradation.

  4. Prediction and stability of mathematics skill and difficulty.

    PubMed

    Martin, Rebecca B; Cirino, Paul T; Barnes, Marcia A; Ewing-Cobbs, Linda; Fuchs, Lynn S; Stuebing, Karla K; Fletcher, Jack M

    2013-01-01

    The present study evaluated the stability of math learning difficulties over a 2-year period and investigated several factors that might influence this stability (categorical vs. continuous change, liberal vs. conservative cut point, broad vs. specific math assessment); the prediction of math performance over time and by performance level was also evaluated. Participants were 144 students initially identified as having a math difficulty (MD) or no learning difficulty according to low achievement criteria in the spring of Grade 3 or Grade 4. Students were reassessed 2 years later. For both measure types, a similar proportion of students changed whether assessed categorically or continuously. However, categorical change was heavily dependent on distance from the cut point and so more common for MD, who started closer to the cut point; reliable change index change was more similar across groups. There were few differences with regard to severity level of MD on continuous metrics or in terms of prediction. Final math performance on a broad computation measure was predicted by behavioral inattention and working memory while considering initial performance; for a specific fluency measure, working memory was not uniquely related, and behavioral inattention more variably related to final performance, again while considering initial performance.

  5. Teleportation of Two-Mode Quantum State of Continuous Variables

    NASA Astrophysics Data System (ADS)

    Song, Tong-Qiang

    2004-03-01

    Using two Einstein-Podolsky-Rosen pair eigenstates |η> as quantum channels, we study the teleportation of a two-mode quantum state of continuous variables. The project was supported by the Natural Science Foundation of Zhejiang Province of China and the Open Foundation of the Laboratory of High-Intensity Optics, Shanghai Institute of Optics and Fine Mechanics.

  6. Determination of continuous variable entanglement by purity measurements.

    PubMed

    Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio

    2004-02-27

    We classify the entanglement of two-mode Gaussian states according to their degree of total and partial mixedness. We derive exact bounds that determine maximally and minimally entangled states for fixed global and marginal purities. This characterization allows for an experimentally reliable estimate of continuous variable entanglement based on measurements of purity.

  7. Unconditional security proof of long-distance continuous-variable quantum key distribution with discrete modulation.

    PubMed

    Leverrier, Anthony; Grangier, Philippe

    2009-05-08

    We present a continuous-variable quantum key distribution protocol combining a discrete modulation and reverse reconciliation. This protocol is proven unconditionally secure and allows the distribution of secret keys over long distances, thanks to a reverse reconciliation scheme efficient at very low signal-to-noise ratio.

  8. Quantum error correction of continuous-variable states against Gaussian noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ralph, T. C.

    2011-08-15

    We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.

  9. Common pitfalls in statistical analysis: Linear regression analysis

    PubMed Central

    Aggarwal, Rakesh; Ranganathan, Priya

    2017-01-01

    In a previous article in this series, we explained correlation analysis which describes the strength of relationship between two continuous variables. In this article, we deal with linear regression analysis which predicts the value of one continuous variable from another. We also discuss the assumptions and pitfalls associated with this analysis. PMID:28447022
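A minimal sketch of the analysis the article covers, using simulated data (the variable names and coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: predict weight (kg) from height (cm).
height = rng.normal(170, 10, 200)
weight = 0.9 * height - 80 + rng.normal(0, 5, 200)

# Least-squares line: weight = intercept + slope * height
slope, intercept = np.polyfit(height, weight, 1)
pred = intercept + slope * height
residuals = weight - pred

# R^2: share of the outcome variance explained by the line
r2 = 1 - residuals.var() / weight.var()
print(round(slope, 2), round(r2, 2))
```

Checking the residuals for non-normality, non-linearity and outliers, in line with the article's pitfalls discussion, would be the next step before trusting the fit.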

  10. Modelling Spatial Dependence Structures Between Climate Variables by Combining Mixture Models with Copula Models

    NASA Astrophysics Data System (ADS)

    Khan, F.; Pilz, J.; Spöck, G.

    2017-12-01

    Spatio-temporal dependence structures play a pivotal role in understanding the meteorological characteristics of a basin or sub-basin. These structures also affect the hydrological conditions, and analyses that do not take them into account properly can give misleading results. In this study we modeled the spatial dependence structure between climate variables, including maximum and minimum temperature and precipitation, in the Monsoon dominated region of Pakistan. Six meteorological stations were considered for temperature and four for precipitation. For modelling the dependence structure between temperature and precipitation at multiple sites, we utilized C-Vine, D-Vine and Student t-copula models. Under the copula models, multivariate mixture normal distributions were used as marginals for temperature and gamma distributions for precipitation. To choose an appropriate model for the climate data, the C-Vine, D-Vine and Student t-copula were compared on observed and simulated spatial dependence structures. The results show that all copula models performed well; however, there are subtle differences in their performance. The copula models captured the patterns of spatial dependence structure between climate variables at multiple meteorological sites, but the t-copula showed poor performance in reproducing the magnitude of the dependence structure. Important statistics of the observed data were closely approximated, except for maximum values of temperature and minimum values of minimum temperature. Probability density functions of simulated data closely follow those of the observed data for all variables. C- and D-Vines are the better tools for modelling the dependence between variables, although Student t-copulas compete closely for precipitation.
Keywords: Copula model, C-Vine, D-Vine, Spatial dependence structure, Monsoon dominated region of Pakistan, Mixture models, EM algorithm.
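The core copula idea, separating the dependence structure from the marginal distributions, can be sketched with the simplest member of the family. The study itself uses vine and Student t-copulas with mixture-normal and gamma marginals; the Gaussian copula and exponential marginals below are simplifications chosen only because both transforms are analytic.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(2)
n = 20000

# Step 1: dependence structure from a latent bivariate normal (rho = 0.7)
rho = 0.7
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], n)

# Step 2: map to correlated uniforms via the standard normal CDF
phi = np.vectorize(lambda v: 0.5 * (1.0 + erf(v / np.sqrt(2.0))))
u = phi(z)

# Step 3: impose the marginals via inverse CDFs; the exponential
# (a gamma with shape 1) keeps this analytic. Two precipitation-like
# sites with means 3 mm and 5 mm.
x = -3.0 * np.log(1.0 - u[:, 0])
y = -5.0 * np.log(1.0 - u[:, 1])

# The rank (Spearman-type) correlation survives the marginal transform
rx = np.argsort(np.argsort(x))
ry = np.argsort(np.argsort(y))
rank_corr = np.corrcoef(rx, ry)[0, 1]
print(round(x.mean(), 2), round(y.mean(), 2), round(rank_corr, 2))
```

Swapping in gamma marginals or a t-copula changes only steps 1 and 3; the separation of dependence from marginals is what the vine construction generalises to more than two sites.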

  11. A constitutive law for continuous fiber reinforced brittle matrix composites with fiber fragmentation and stress recovery

    NASA Astrophysics Data System (ADS)

    Neumeister, Jonas M.

    1993-08-01

    The tensile behavior of a brittle matrix composite is studied for post matrix crack saturation conditions. Scatter of fiber strength following the Weibull distribution, as well as the influence of the major microstructural variables, is considered. The stress in a fiber is assumed to recover linearly around a failure due to a fiber-matrix interface behavior mainly ruled by friction. The constitutive behavior of such a composite is analysed. Results are given for a simplified and a refined approximate description and compared with an analysis resulting from the exact analytical theory of fiber fragmentation. It is shown that the stress-strain relation for the refined model follows the exact solution excellently and gives the location of the maximum to within 1% in both stress and strain; for most materials the agreement is even better. It is also shown that all relations can be normalized to depend on only two variables: a stress reference and the Weibull exponent. For systems with low scatter in fiber strength, the simplified model is sufficient to determine the stress maximum but not the postcritical behavior. In addition, the simplified model gives explicit analytical expressions for the maximum stress and corresponding strain. None of the models contains any volume dependence or statistical scatter, but the maximum stress given by the stress-strain relation constitutes an upper bound for the ultimate tensile strength of the composite.

  12. Secondary outcome analysis for data from an outcome-dependent sampling design.

    PubMed

    Pan, Yinghao; Cai, Jianwen; Longnecker, Matthew P; Zhou, Haibo

    2018-04-22

    An outcome-dependent sampling (ODS) scheme is a cost-effective way to conduct a study. For a study with a continuous primary outcome, an ODS scheme can be implemented in which the expensive exposure is measured only on a simple random sample plus supplemental samples selected from the 2 tails of the primary outcome variable. Given the tremendous cost invested in collecting the primary exposure information, investigators often would like to use the available data to study the relationship between a secondary outcome and the obtained exposure variable. This is referred to as secondary analysis. Secondary analysis in ODS designs can be tricky, as the ODS sample is not a random sample from the general population. In this article, we use inverse probability weighted and augmented inverse probability weighted estimating equations to analyze the secondary outcome for data obtained from the ODS design. We do not make any parametric assumptions on the primary and secondary outcomes and only specify the form of the regression mean models, thus allowing an arbitrary error distribution. Our approach is robust to second- and higher-order moment misspecification. It also leads to more precise estimates of the parameters by effectively using all the available participants. Through simulation studies, we show that the proposed estimator is consistent and asymptotically normal. Data from the Collaborative Perinatal Project are analyzed to illustrate our method. Copyright © 2018 John Wiley & Sons, Ltd.
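The weighting idea behind the estimating equations can be sketched in a toy version (invented coefficients and selection probabilities; the authors' augmented estimator is not reproduced here): each sampled subject is weighted by the inverse of its known selection probability, so the weighted equation is unbiased for the population regression.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100000

# Hypothetical population: primary outcome y1, secondary outcome y2,
# expensive exposure x. The outcome errors are correlated, so selecting
# on y1 distorts the distribution of y2 given x.
x = rng.normal(0, 1, N)
e = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], N)
y1 = 1.0 * x + e[:, 0]
y2 = 0.5 * x + e[:, 1]                      # true secondary slope = 0.5

# ODS design: keep everyone in the 2 tails of y1, plus a 10% simple
# random sample from the rest; selection probabilities are known.
cut = np.quantile(np.abs(y1), 0.95)
pi = np.where(np.abs(y1) > cut, 1.0, 0.1)
keep = rng.uniform(size=N) < pi
xs, ys, ws = x[keep], y2[keep], 1.0 / pi[keep]

def wls_slope(y, x, w):
    """Slope from the weighted least-squares estimating equation."""
    X = np.column_stack([np.ones(len(x)), x])
    A = X.T @ (w[:, None] * X)
    return np.linalg.solve(A, X.T @ (w * y))[1]

naive = wls_slope(ys, xs, np.ones(keep.sum()))  # ignores the design
ipw = wls_slope(ys, xs, ws)                     # inverse probability weighted
print(round(naive, 2), round(ipw, 2))
```

The naive fit is biased upward because the secondary outcome's error correlates with the primary outcome used for selection; weighting by 1/pi restores the population slope.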

  13. Clinical investigation of set-shifting subtypes in anorexia nervosa.

    PubMed

    Abbate-Daga, Giovanni; Buzzichelli, Sara; Marzola, Enrica; Amianto, Federico; Fassino, Secondo

    2014-11-30

    While evidence continues to accumulate on the relevance of cognitive inflexibility in anorexia nervosa (AN), its clinical correlates remain unclear. We aimed to examine the relationship between set-shifting and clinical variables (i.e., eating psychopathology, depression, and personality) in AN. Ninety-four individuals affected by AN and 59 healthy controls (HC) were recruited. All participants were assessed using: Eating Disorders Inventory-2 (EDI-2), Temperament and Character Inventory (TCI), Beck Depression Inventory (BDI), and Wisconsin Card Sorting Test (WCST). The AN group scored worse than HCs on set-shifting. According to their neuropsychological performances, AN patients were split into two groups corresponding to poor (N=30) and intact (N=64) set-shifting subtypes. Interoceptive awareness, impulse regulation, and maturity fears on the EDI-2 and depression on the BDI differed across all groups (HC, intact, and poor set-shifting subtypes). Self-directedness on the TCI differed significantly among all groups. Cooperativeness and reward dependence differed only between HC and the AN poor set-shifting subtype. After controlling for depression, only interoceptive awareness remained significant, with reward dependence showing a trend towards statistical significance. These findings suggest that multiple clinical variables may be correlated with set-shifting performance in AN. The factors contributing to impaired cognitive flexibility could be more complex than heretofore generally considered. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. [Salivary flow and psychoactive drug consumption in elderly people].

    PubMed

    Cabrera, Marcos Aparecido Sarria; Mesas, Arthur Eumann; Rossato, Luiz Angelo; Andrade, Selma Maffei de

    2007-01-01

    To analyze the association between low saliva flow rates and the use of psychoactive drugs among the elderly. A cross-sectional study was carried out with 267 elderly people, 60 to 74 years of age, living in a borough of the city of Londrina, Paraná State, Brazil. Individuals with high functional dependence or restricted to bed were excluded. Saliva flow rate was the dependent variable, with values under the first tercile (less than 0.44 ml/min) considered low flow rates. The continuous use of psychoactive drugs (antidepressant, antiepileptic, sedative, antipsychotic, hypnotic or sedative-hypnotic drugs) was the independent variable. Multivariate analysis was performed taking into account gender, age and smoking status. The majority of the elderly were women (80.5%), with a mean age of 66.5 years. Use of psychoactive drugs was observed among 31 elderly people (11.6%). Mean saliva flow rate was 0.76 ml/min, and it was lower among users of psychoactive drugs (0.67 ml/min). In the multivariate analysis, use of psychoactive drugs was associated with low saliva flow rates (<0.44 ml/min), independent of gender, age or smoking. The results show an association between use of psychoactive drugs and low saliva flow rates in this group of independent, non-institutionalized elderly people. These findings stress the need for rational use of these drugs, particularly among the elderly.

  15. Genetic-evolution-based optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
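A minimal continuous-variable version of such an algorithm can be sketched as follows; the quadratic objective, tournament selection, blend crossover and Gaussian mutation are illustrative choices, not the structural problems or operators of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy continuous design problem: maximise f(x) = -(x1-3)^2 - (x2+1)^2,
# whose optimum is at (3, -1).
def fitness(pop):
    return -(pop[:, 0] - 3.0) ** 2 - (pop[:, 1] + 1.0) ** 2

pop = rng.uniform(-10, 10, (50, 2))       # initial random population
for generation in range(200):
    f = fitness(pop)
    # reproduction: binary tournament selection of parents
    i, j = rng.integers(0, 50, (2, 50))
    parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
    # crossover: blend each parent with a randomly matched mate
    mates = parents[rng.permutation(50)]
    alpha = rng.uniform(0, 1, (50, 1))
    pop = alpha * parents + (1 - alpha) * mates
    # mutation: small Gaussian perturbation to keep diversity
    pop += rng.normal(0, 0.1, pop.shape)

best = pop[np.argmax(fitness(pop))]
print(np.round(best, 1))
```

Discrete-variable problems, such as the actuator-placement case in the paper, replace the blend crossover and Gaussian mutation with operators that preserve integer or binary encodings.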

  16. Timing in the Absence of Supraspinal Input I: Variable, but not Fixed, Spaced Stimulation of the Sciatic Nerve Undermines Spinally-Mediated Instrumental Learning

    PubMed Central

    Baumbauer, Kyle M.; Hoy, Kevin C.; Huie, John R.; Hughes, Abbey J.; Woller, Sarah A.; Puga, Denise A.; Setlow, Barry; Grau, James W.

    2008-01-01

    Rats with complete spinal transections are capable of acquiring a simple instrumentally trained response. If rats receive shock to one hindlimb when the limb is extended (controllable shock), the spinal cord will learn to hold the leg in a flexed position that minimizes shock exposure. If shock is delivered irrespective of leg position, subjects do not exhibit an increase in flexion duration and subsequently fail to learn when tested with controllable shock (learning deficit). Just 6 min of variable intermittent shock produces a learning deficit that lasts 24 hrs. Evidence suggests that the neural mechanisms underlying the learning deficit may be related to those involved in other instances of spinal plasticity (e.g., wind-up, long-term potentiation). The present paper begins to explore these relations by demonstrating that direct stimulation of the sciatic nerve also impairs instrumental learning. Six minutes of electrical stimulation (mono- or biphasic direct current [DC]) of the sciatic nerve in spinally transected rats produced a voltage-dependent learning deficit that persisted for 24 hr (Experiments 1–2) and was dependent on C-fiber activation (Experiment 7). Exposure to continuous stimulation did not produce a deficit, but intermittent burst or single pulse (as short as 0.1 ms) stimulation (delivered at a frequency of 0.5 Hz) did, irrespective of the pattern (fixed or variable) of stimulus delivery (Experiments 3–6, 8). When the duration of stimulation was extended from 6 to 30 min, a surprising result emerged; shocks applied in a random (variable) fashion impaired subsequent learning whereas shocks given in a regular pattern (fixed spacing) did not (Experiments 9–10). The results imply that spinal neurons are sensitive to temporal relations and that stimulation at regular intervals can have a restorative effect. PMID:18674601

  17. Regression Analysis with Dummy Variables: Use and Interpretation.

    ERIC Educational Resources Information Center

    Hinkle, Dennis E.; Oliver, J. Dale

    1986-01-01

    Multiple regression analysis (MRA) may be used when both continuous and categorical variables are included as independent research variables. The use of MRA with categorical variables involves dummy coding, that is, assigning zeros and ones to levels of categorical variables. Caution is urged in results interpretation. (Author/CH)
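The dummy-coding mechanics can be sketched with invented data: a 3-level categorical variable enters the regression as two 0/1 indicator columns, with the omitted level acting as the reference group whose mean is absorbed by the intercept.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300

# Hypothetical data: continuous regressor (hours) plus a 3-level
# categorical variable, groups A/B/C, with A as the reference level.
group = rng.integers(0, 3, n)            # 0=A, 1=B, 2=C
hours = rng.uniform(0, 10, n)
score = (50 + 2.0 * hours
         + np.where(group == 1, 5.0, 0.0)    # B sits 5 above A
         + np.where(group == 2, -3.0, 0.0)   # C sits 3 below A
         + rng.normal(0, 2, n))

d_b = (group == 1).astype(float)         # dummy: 1 if B, else 0
d_c = (group == 2).astype(float)         # dummy: 1 if C, else 0
X = np.column_stack([np.ones(n), hours, d_b, d_c])
coef = np.linalg.lstsq(X, score, rcond=None)[0]

# coef[2] and coef[3] estimate mean differences from group A at fixed hours
print(np.round(coef, 1))
```

The interpretation caution the abstract urges applies here: coef[2] compares B with the reference level A only, not with C, and changing the reference level changes every dummy coefficient.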

  18. Nutrient movement in a 104-year old soil fertility experiment

    USDA-ARS?s Scientific Manuscript database

    Alabama’s “Cullars Rotation” experiment (circa 1911) is the oldest, continuous soil fertility experiment in the southern U.S. Treatments include 5 K variables, P variables, S variables, soil pH variables and micronutrient variables in 14 treatments involving a 3-yr rotation of (1) cotton-winter legu...

  19. Insurance coverage and financial burden for families of children with special health care needs.

    PubMed

    Chen, Alex Y; Newacheck, Paul W

    2006-01-01

    To examine the role of insurance coverage in protecting families of children with special health care needs (CSHCN) from the financial burden associated with care. Data from the 2001 National Survey of Children with Special Health Care Needs were analyzed. We built 2 multivariate regression models by using "work loss/cut back" and "experiencing financial problems" as the dependent variables, and insurance status as the primary independent variable of interest while adjusting for income, race/ethnicity, functional limitation/severity, and other sociodemographic predictors. Approximately 29.9% of CSHCN live in families where their condition led parents to report cutting back on work or stopping work completely. Families of 20.9% of CSHCN reported experiencing financial difficulties due to the child's condition. Insurance coverage significantly reduced the likelihood of financial problems for families at every income level. The proportion of families experiencing financial problems was reduced from 35.7% to 23.0% for the poor and 44.9% to 24.5% for low-income families with continuous insurance coverage (P < .01 for both comparisons). Similarly, the proportion of parents having to cut back or stop work was reduced from 42.8% to 35.9% for the poor (P < .05) and 43.5% to 33.9% for low-income families (P < .01). Continuous health insurance coverage provides protection from financial burden and hardship for families of CSHCN in all income groups. This evidence is supportive of policies designed to promote universal coverage for CSHCN. However, many poor and low-income families continue to experience work loss and financial problems despite insurance coverage. Hence, health insurance should not be viewed as a solution in itself, but instead as one element of a comprehensive strategy to provide financial safety for families with CSHCN.

  20. 12-step affiliation and attendance following treatment for comorbid substance dependence and depression: a latent growth curve mediation model.

    PubMed

    Worley, Matthew J; Tate, Susan R; McQuaid, John R; Granholm, Eric L; Brown, Sandra A

    2013-01-01

    Among substance-dependent individuals, comorbid major depressive disorder (MDD) is associated with greater severity and poorer treatment outcomes, but little research has examined mediators of posttreatment substance use outcomes within this population. Using latent growth curve models, the authors tested relationships between individual rates of change in 12-step involvement and substance use, utilizing posttreatment follow-up data from a trial of group Twelve-Step Facilitation (TSF) and integrated cognitive-behavioral therapy (ICBT) for veterans with substance dependence and MDD. Although TSF patients were higher on 12-step affiliation and meeting attendance at end-of-treatment as compared with ICBT, they also experienced significantly greater reductions in these variables during the year following treatment, ending at similar levels as ICBT. Veterans in TSF also had significantly greater increases in drinking frequency during follow-up, and this group difference was mediated by their greater reductions in 12-step affiliation and meeting attendance. Patients with comorbid depression appear to have difficulty sustaining high levels of 12-step involvement after the conclusion of formal 12-step interventions, which predicts poorer drinking outcomes over time. Modifications to TSF and other formal 12-step protocols or continued therapeutic contact may be necessary to sustain 12-step involvement and reduced drinking for patients with substance dependence and MDD.

  1. Long-term success of oral health intervention among care-dependent institutionalized seniors: Findings from a controlled clinical trial.

    PubMed

    Schwindling, Franz Sebastian; Krisam, Johannes; Hassel, Alexander J; Rammelsberg, Peter; Zenthöfer, Andreas

    2018-04-01

    The purpose of this work was to investigate the long-term effectiveness of oral health education of caregivers in nursing homes with care-dependent and cognitively impaired residents. Fourteen nursing homes with a total of 269 residents were allocated to a control group, with continued normal care, or to an intervention group. Allocation was performed at the nursing home level. In the intervention group, caregivers were given oral health education, and ultrasonic cleaning devices were provided to clean removable prostheses. Oral health was assessed at baseline and after 6 and 12 months by use of the Plaque Control Record (PCR), Gingival Bleeding Index (GBI), Community Periodontal Index of Treatment Needs (CPITN) and Denture Hygiene Index (DHI). Mixed models for repeated measures were fitted for each target variable, with possible confounding factors (intervention/control group, age, sex, residence location and care-dependence). In the control group, no changes in the target variables were observed between baseline and the 6- and 12-month follow-ups. After 6 and 12 months, PCR and DHI were significantly improved in the intervention group. For PCR, the intergroup difference of improvements was -14.4 (95% CI: -21.8; -6.9) after 6 months. After 12 months, the difference was -16.2 (95% CI: -27.7; -4.7). For DHI, the intergroup difference compared to baseline was -15 (95% CI: -23.6; -6.5) after 6 months and -13.3 (95% CI: -24.9; -1.8) after 12 months. There was no statistically significant effect on either GBI or CPITN. Care-dependency showed a substantial trend toward smaller improvements in PCR (P = .074), while an inverse effect was apparent for DHI (P < .001). Education of caregivers improves and maintains the oral health of care-dependent nursing home residents over longer periods. Use of ultrasonic devices is a promising means of improving denture hygiene among the severely care-dependent. 
Such interventions can be easily and cheaply implemented in routine daily care. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Unadjusted Bivariate Two-Group Comparisons: When Simpler is Better.

    PubMed

    Vetter, Thomas R; Mascha, Edward J

    2018-01-01

    Hypothesis testing involves posing both a null hypothesis and an alternative hypothesis. This basic statistical tutorial discusses the appropriate use, including the so-called assumptions, of the common unadjusted bivariate tests for hypothesis testing and thus for comparing study sample data for a difference or association. The appropriate choice of a statistical test is predicated on the type of data being analyzed and compared. The unpaired or independent samples t test is used to test the null hypothesis that the 2 population means are equal against the alternative hypothesis that they are not equal. The unpaired t test is intended for comparing independent continuous (interval or ratio) data from 2 study groups. A common mistake is to apply several unpaired t tests when comparing data from 3 or more study groups. In this situation, an analysis of variance with post hoc (posttest) intragroup comparisons should instead be applied. Another common mistake is to apply a series of unpaired t tests when comparing sequentially collected data from 2 study groups. In this situation, a repeated-measures analysis of variance, with tests for group-by-time interaction, and post hoc comparisons, as appropriate, should instead be applied in analyzing data from sequential collection points. The paired t test is used to assess the difference in the means of 2 study groups when the sample observations have been obtained in pairs, often before and after an intervention in each study subject. The Pearson chi-square test is widely used to test the null hypothesis that 2 unpaired categorical variables, each with 2 or more nominal levels (values), are independent of each other. When the null hypothesis is rejected, one concludes that there is a probable association between the 2 unpaired categorical variables. When comparing 2 groups on an ordinal or nonnormally distributed continuous outcome variable, the 2-sample t test is usually not appropriate; 
the Wilcoxon-Mann-Whitney test is instead preferred. When making paired comparisons on data that are ordinal, or continuous but nonnormally distributed, the Wilcoxon signed-rank test can be used. In analyzing their data, researchers should consider the continued merits of these simple yet equally valid unadjusted bivariate statistical tests. However, the appropriate use of an unadjusted bivariate test still requires a solid understanding of its utility, assumptions (requirements), and limitations. This understanding will mitigate the risk of misleading findings, interpretations, and conclusions.
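The rank-based test recommended for nonnormal outcomes can be sketched by hand on simulated skewed data (group sizes and effect sizes are invented; the large-sample normal approximation to the rank-sum null distribution is used):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(6)
n1 = n2 = 100

# Hypothetical skewed outcome (e.g. length of stay) in two study groups,
# where a 2-sample t test would be inappropriate.
a = rng.lognormal(1.0, 0.8, n1)
b = rng.lognormal(1.6, 0.8, n2)

# Wilcoxon-Mann-Whitney via the rank-sum statistic
pooled = np.concatenate([a, b])
ranks = pooled.argsort().argsort() + 1.0   # 1-based ranks (no ties here)
w = ranks[:n1].sum()                       # rank sum of group a

# Normal approximation under the null of identical distributions
mu = n1 * (n1 + n2 + 1) / 2.0
sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
z = (w - mu) / sigma
p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))  # two-sided p
print(round(z, 1), p < 0.05)
```

Because only ranks enter the statistic, the same calculation applies unchanged to ordinal data; a tie correction to sigma would be needed if the outcome had repeated values.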

  3. Data compression: The end-to-end information systems perspective for NASA space science missions

    NASA Technical Reports Server (NTRS)

    Tai, Wallace

    1991-01-01

    The unique characteristics of compressed data have important implications for the design of space science data systems, science applications, and data compression techniques. The sequential nature of, or data dependence between, each of the sample values within a block of compressed data introduces an error multiplication or propagation factor which compounds the effects of communication errors. The data communication characteristics of the onboard data acquisition, storage, and telecommunication channels may influence the size of the compressed blocks and the frequency of included re-initialization points. The organization of the compressed data is continually changing depending on the entropy of the input data. This also results in a variable output rate from the instrument, which may require buffering to interface with the spacecraft data system. On the ground, there exist key tradeoff issues associated with the distribution and management of the science data products when data compression techniques are applied in order to alleviate the constraints imposed by ground communication bandwidth and data storage capacity.

  4. Toward a Network Model of MHC Class II-Restricted Antigen Processing

    PubMed Central

    Miller, Michael A.; Ganesan, Asha Purnima V.; Eisenlohr, Laurence C.

    2013-01-01

    The standard model of Major Histocompatibility Complex class II (MHCII)-restricted antigen processing depicts a straightforward, linear pathway: internalized antigens are converted into peptides that load in a chaperone-dependent manner onto nascent MHCII in the late endosome, the complexes subsequently trafficking to the cell surface for recognition by CD4+ T cells (TCD4+). Several variations on this theme, both moderate and radical, have come to light, but these alternatives have remained peripheral, the conventional pathway generally presumed to be the primary driver of TCD4+ responses. Here we continue to press for the conceptual repositioning of these alternatives toward the center while proposing that MHCII processing be thought of less in terms of discrete pathways and more in terms of a network whose major and minor conduits are variable depending upon many factors, including the epitope, the nature of the antigen, the source of the antigen, and the identity of the antigen-presenting cell. PMID:24379819

  5. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, −β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 − α, α − β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
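The starting object of the sampling procedure, N i.i.d. Pareto(α) variables normalized by their sum, is easy to sketch numerically (a minimal illustration assuming standard Pareto variables with minimum 1; the α regimes contrasted are those quoted in the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)

def sampling_weights(N, alpha):
    """N i.i.d. Pareto(alpha) variables normalized by their sum,
    yielding a random probability vector over the N individuals."""
    x = rng.pareto(alpha, size=N) + 1.0  # shift Lomax draws to Pareto on [1, inf)
    return x / x.sum()

# alpha >= 2: finite variance, no single weight dominates (Kingman regime).
w_light = sampling_weights(100_000, alpha=3.0)

# alpha in [1, 2): heavy tails let a single individual capture a
# macroscopic share (Beta-coalescent regime).
w_heavy = sampling_weights(100_000, alpha=1.2)
```

Comparing `w_light.max()` with `w_heavy.max()` shows the qualitative difference driving the distinct large-N coalescent limits: heavier tails concentrate reproductive success in few individuals.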

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, Rupam; Huang, Zhi-Feng; Nadgorny, Boris

    Multiple percolation transitions are observed in a binary system of RuO₂-CaCu₃Ti₄O₁₂ metal-semiconductor nanoparticle composites near percolation thresholds. Apart from a classical percolation transition, associated with the appearance of a continuous conductance path through RuO₂ metal oxide nanoparticles, at least two additional tunneling percolation transitions are detected in this composite system. Such behavior is consistent with the recently emerged picture of a quantum conductivity staircase, which predicts several percolation tunneling thresholds in a system with a hierarchy of local tunneling conductance, due to various degrees of proximity of adjacent conducting particles distributed in an insulating matrix. Here, we investigate a different type of percolation tunneling staircase, associated with a more complex conductive and insulating particle microstructure of two types of non-spherical constituents. As tunneling is strongly temperature dependent, we use variable temperature measurements to emphasize the hierarchical nature of consecutive tunneling transitions. The critical exponents corresponding to specific tunneling percolation thresholds are found to be nonuniversal and temperature dependent.

  7. Solution of magnetohydrodynamic flow and heat transfer of radiative viscoelastic fluid with temperature dependent viscosity in wire coating analysis

    PubMed Central

    Khan, Muhammad Altaf; Siddiqui, Nasir; Ullah, Murad; Shah, Qayyum

    2018-01-01

    The wire coating process is a continuous extrusion process for primary insulation of conducting wires with molten polymers for mechanical strength and protection in aggressive environments. In the present study, a radiative polymer melt satisfying the third-grade fluid model is used for the wire coating process. The effects of the magnetic parameter, the thermal radiation parameter, and temperature-dependent viscosity on wire coating have been investigated. Reynolds' model and Vogel's model have been incorporated for variable viscosity. The governing equations characterizing the flow and heat transfer phenomena are solved analytically by utilizing the homotopy analysis method (HAM). The computed results are also verified by the ND-Solve method (a numerical technique) and the Adomian decomposition method (ADM). The effect of pertinent parameters is shown graphically. In addition, the instability of the flow near the wall of the extrusion die is well marked in the case of the Vogel model, as pointed out by Nhan Phan-Thien. PMID:29596448

  8. Multivariate normal maximum likelihood with both ordinal and continuous variables, and data missing at random.

    PubMed

    Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C

    2018-04-01

    A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.

  9. On the continuous differentiability of inter-spike intervals of synaptically connected cortical spiking neurons in a neuronal network.

    PubMed

    Kumar, Gautam; Kothare, Mayuresh V

    2013-12-01

    We derive conditions for continuous differentiability of inter-spike intervals (ISIs) of spiking neurons with respect to parameters (decision variables) of an external stimulating input current that drives a recurrent network of synaptically connected neurons. The dynamical behavior of individual neurons is represented by a class of discontinuous single-neuron models. We report here that ISIs of neurons in the network are continuously differentiable with respect to decision variables if (1) a continuously differentiable trajectory of the membrane potential exists between consecutive action potentials with respect to time and decision variables and (2) the partial derivative of the membrane potential of spiking neurons with respect to time is not equal to the partial derivative of their firing threshold with respect to time at the time of action potentials. Our theoretical results are supported by showing fulfillment of these conditions for a class of known bidimensional spiking neuron models.

  10. A 24 km fiber-based discretely signaled continuous variable quantum key distribution system.

    PubMed

    Dinh Xuan, Quyen; Zhang, Zheshen; Voss, Paul L

    2009-12-21

    We report a continuous variable key distribution system that achieves a final secure key rate of 3.45 kilobits/s over a distance of 24.2 km of optical fiber. The protocol uses discrete signaling and post-selection to improve reconciliation speed and quantifies security by means of quantum state tomography. Polarization multiplexing and a frequency translation scheme permit transmission of a continuous wave local oscillator and suppression of noise from guided acoustic wave Brillouin scattering by more than 27 dB.

  11. The association between genetic variants of RUNX2, ADIPOQ and vertebral fracture in Korean postmenopausal women.

    PubMed

    Kim, Kyong-Chol; Chun, Hyejin; Lai, ChaoQiang; Parnell, Laurence D; Jang, Yangsoo; Lee, Jongho; Ordovas, Jose M

    2015-03-01

    Contrary to the traditional belief that obesity acts as a protective factor for bone, recent epidemiologic studies have shown that body fat might be a risk factor for osteoporosis and bone fracture. Accordingly, we evaluated the association between the phenotypes of osteoporosis or vertebral fracture and variants of obesity-related genes: peroxisome proliferator-activated receptor-gamma (PPARG), runt-related transcription factor 2 (RUNX2), leptin receptor (LEPR), and adiponectin (ADIPOQ). In total, 907 postmenopausal healthy women, aged 60-79 years, were included in this study. BMD and biomarkers of bone health and adiposity were measured. We genotyped four single nucleotide polymorphisms (SNPs) from the four genes (PPARG, RUNX2, LEPR, ADIPOQ). A general linear model for continuous dependent variables and a logistic regression model for categorical dependent variables were used to analyze the statistical differences among genotype groups. Compared with TT subjects at rs7771980 in RUNX2, C-carrier (TC + CC) subjects had a lower vertebral fracture risk after adjusting for age, smoking, alcohol, total calorie intake, total energy expenditure, total calcium intake, total fat intake, weight, and body fat. The odds ratio (OR) for vertebral fracture risk was 0.55 (95% confidence interval [CI] 0.32-0.94). After adjusting for multiple variables, the prevalence of vertebral fracture was highest in GG subjects at rs1501299 in ADIPOQ (p = 0.0473). A high calcium intake (>1000 mg/day) contributed to a high bone mineral density (BMD) in GT + TT subjects at rs1501299 in ADIPOQ (p for interaction = 0.0295). Although the mechanisms linking obesity-related genes and bone health are not fully established, the results of our study revealed the association of certain SNPs from obesity-related genes with BMD or vertebral fracture risk in postmenopausal Korean women.

  12. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    PubMed

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios, and often approaching 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.

  13. The Effect of 2 Forms of Talocrural Joint Traction on Dorsiflexion Range of Motion and Postural Control in Those With Chronic Ankle Instability.

    PubMed

    Powden, Cameron J; Hogan, Kathleen K; Wikstrom, Erik A; Hoch, Matthew C

    2017-05-01

    Talocrural joint mobilizations are commonly used to address deficits associated with chronic ankle instability (CAI). Examine the immediate effects of talocrural joint traction in those with CAI. Blinded, crossover. Laboratory. Twenty adults (14 females; age = 23.80 ± 4.02 y; height = 169.55 ± 12.38 cm; weight = 78.34 ± 16.32 kg) with self-reported CAI participated. Inclusion criteria consisted of a history of ≥1 ankle sprain, ≥2 episodes of giving way in the previous 3 mo, answering "yes" to ≥4 questions on the Ankle Instability Instrument, and ≤24 on the Cumberland Ankle Instability Tool. Subjects participated in 3 sessions in which they received a single treatment session of sustained traction (ST), oscillatory traction (OT), or a sham condition in a randomized order. Interventions consisted of four 30-s sets of traction with 1 min of rest between sets. During ST and OT, the talus was distracted distally from the ankle mortise to the end-range of accessory motion. ST consisted of continuous distraction and OT involved 1-s oscillations between the mid and end-range of accessory motion. The sham condition consisted of physical contact without force application. Preintervention and postintervention measurements of weight-bearing dorsiflexion, dynamic balance, and static single-limb balance were collected. The independent variable was treatment (ST, OT, sham). The dependent variables included pre-to-posttreatment change scores for the WBLT (cm), normalized SEBTAR (%), and time-to-boundary (TTB) variables (s). Separate 1-way ANOVAs examined differences between treatments for each dependent variable. Alpha was set a priori at P < .05. No significant treatment effects were identified for any variables. A single intervention of ST or OT did not produce significant changes in weight-bearing dorsiflexion range of motion or postural control in individuals with CAI.
Future research should investigate the effects of repeated talocrural traction treatments and the effects of this technique when combined with other manual therapies.

  14. A comparison of multiple imputation methods for handling missing values in longitudinal data in the presence of a time-varying covariate with a non-linear association with time: a simulation study.

    PubMed

    De Silva, Anurika Priyanjali; Moreno-Betancur, Margarita; De Livera, Alysha Madhu; Lee, Katherine Jane; Simpson, Julie Anne

    2017-07-25

    Missing data is a common problem in epidemiological studies, and is particularly prominent in longitudinal data, which involve multiple waves of data collection. Traditional multiple imputation (MI) methods (fully conditional specification (FCS) and multivariate normal imputation (MVNI)) treat repeated measurements of the same time-dependent variable as just another 'distinct' variable for imputation and therefore do not make the most of the longitudinal structure of the data. Only a few studies have explored extensions to the standard approaches to account for the temporal structure of longitudinal data. One suggestion is the two-fold fully conditional specification (two-fold FCS) algorithm, which restricts the imputation of a time-dependent variable to time blocks where the imputation model includes measurements taken at the specified and adjacent times. To date, no study has investigated the performance of two-fold FCS and standard MI methods for handling missing data in a time-varying covariate with a non-linear trajectory over time - a commonly encountered scenario in epidemiological studies. We simulated 1000 datasets of 5000 individuals based on the Longitudinal Study of Australian Children (LSAC). Three missing data mechanisms - missing completely at random (MCAR), and weak and strong missing at random (MAR) scenarios - were used to impose missingness on body mass index (BMI)-for-age z-scores, a continuous time-varying exposure variable with a non-linear trajectory over time. We evaluated the performance of FCS, MVNI, and two-fold FCS for handling up to 50% of missing data when assessing the association between childhood obesity and sleep problems. The standard two-fold FCS produced slightly more biased and less precise estimates than FCS and MVNI. We observed slight improvements in bias and precision when using a time window width of two for the two-fold FCS algorithm compared to the standard width of one.
We recommend the use of FCS or MVNI in a similar longitudinal setting, and when encountering convergence issues due to a large number of time points or variables with missing values, the two-fold FCS with exploration of a suitable time window.
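The FCS idea, treating each wave of the time-varying covariate as a distinct variable and imputing each from the others with chained regressions, can be sketched with scikit-learn's IterativeImputer (a hedged stand-in for the MI software used in the study; the data-generating model and the 30% MCAR rate are illustrative assumptions, and a full MI analysis would draw multiple completed datasets rather than one):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(4)
n, waves = 1000, 4

# Wide-format longitudinal data: one column per wave of a BMI-like
# z-score, correlated across waves through a shared individual level.
base = rng.normal(0.0, 1.0, n)
data = np.column_stack([base + 0.1 * t + rng.normal(0.0, 0.3, n)
                        for t in range(waves)])

mask = rng.random(data.shape) < 0.3   # ~30% missing completely at random
observed = data.copy()
observed[mask] = np.nan

# Chained-equations imputation: each wave is regressed on the others,
# cycling until convergence (the FCS principle).
imputer = IterativeImputer(max_iter=10, random_state=0)
completed = imputer.fit_transform(observed)

rmse = np.sqrt(np.mean((completed[mask] - data[mask]) ** 2))
```

Because adjacent waves are strongly informative here, `rmse` lands well below the marginal standard deviation of a wave, which is exactly the longitudinal structure the two-fold FCS algorithm tries to exploit more directly.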

  15. Hybrid Methods in Quantum Information

    NASA Astrophysics Data System (ADS)

    Marshall, Kevin

    Today, the potential power of quantum information processing comes as no surprise to physicists and science-fiction writers alike. However, the grand promises of this field remain unrealized, despite significant strides forward, due to the inherent difficulties of manipulating quantum systems. Simply put, it turns out to be incredibly difficult to interact, in a controllable way, with the quantum realm when we seem to live our day to day lives in a classical world. In an effort to solve this challenge, people are exploring a variety of different physical platforms, each with their strengths and weaknesses, in hopes of developing new experimental methods that one day might allow us to control a quantum system. One path forward rests in combining different quantum systems in novel ways to exploit the benefits of different systems while circumventing their respective weaknesses. In particular, quantum systems come in two different flavours: either discrete-variable systems or continuous-variable ones. The field of hybrid quantum information seeks to combine these systems, in clever ways, to help overcome the challenges blocking the path between what is theoretically possible and what is achievable in a laboratory. In this thesis we explore four topics in the context of hybrid methods in quantum information, in an effort to contribute to the resolution of existing challenges and to stimulate new avenues of research. First, we explore the manipulation of a continuous-variable quantum system consisting of phonons in a linear chain of trapped ions where we use the discretized internal levels to mediate interactions. Using our proposed interaction we are able to implement, for example, the acoustic equivalent of a beam splitter with modest experimental resources.
Next we propose an experimentally feasible implementation of the cubic phase gate, a primitive non-Gaussian gate required for universal continuous-variable quantum computation, based on sequential photon subtraction. We then discuss the notion of embedding a finite dimensional state into a continuous-variable system, and propose a method of performing quantum computations on encrypted continuous-variable states. This protocol allows a client of limited quantum ability to outsource a computation while hiding their information. Next, we discuss the possibility of performing universal quantum computation on discrete-variable logical states encoded in mixed continuous-variable quantum states. Finally, we present an account of open problems related to our results, and possible future avenues of research.

  16. Metabolomics variable selection and classification in the presence of observations below the detection limit using an extension of ERp.

    PubMed

    van Reenen, Mari; Westerhuis, Johan A; Reinecke, Carolus J; Venter, J Hendrik

    2017-02-02

    ERp is a variable selection and classification method for metabolomics data. ERp uses minimized classification error rates, based on data from a control and experimental group, to test the null hypothesis of no difference between the distributions of variables over the two groups. If the associated p-values are significant they indicate discriminatory variables (i.e. informative metabolites). The p-values are calculated assuming a common continuous strictly increasing cumulative distribution under the null hypothesis. This assumption is violated when zero-valued observations can occur with positive probability, a characteristic of GC-MS metabolomics data, disqualifying ERp in this context. This paper extends ERp to address two sources of zero-valued observations: (i) zeros reflecting the complete absence of a metabolite from a sample (true zeros); and (ii) zeros reflecting a measurement below the detection limit. This is achieved by allowing the null cumulative distribution function to take the form of a mixture between a jump at zero and a continuous strictly increasing function. The extended ERp approach is referred to as XERp. XERp is no longer non-parametric, but its null distributions depend only on one parameter, the true proportion of zeros. Under the null hypothesis this parameter can be estimated by the proportion of zeros in the available data. XERp is shown to perform well with regard to bias and power. To demonstrate the utility of XERp, it is applied to GC-MS data from a metabolomics study on tuberculosis meningitis in infants and children. We find that XERp is able to provide an informative shortlist of discriminatory variables, while attaining satisfactory classification accuracy for new subjects in a leave-one-out cross-validation context. XERp takes into account the distributional structure of data with a probability mass at zero without requiring any knowledge of the detection limit of the metabolomics platform. 
XERp is able to identify variables that discriminate between two groups by simultaneously extracting information from the difference in the proportion of zeros and shifts in the distributions of the non-zero observations. XERp uses simple rules to classify new subjects and a weight pair to adjust for unequal sample sizes or sensitivity and specificity requirements.
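The null model XERp rests on, a cumulative distribution mixing a jump at zero with a continuous strictly increasing part, can be written down in a few lines (an illustrative sketch: the lognormal non-zero part and the 20% zero proportion are assumptions, and XERp's error-rate-based p-value machinery is not shown):

```python
import numpy as np

rng = np.random.default_rng(6)

# Pooled data with a probability mass at zero: 40 zeros (true zeros or
# below-detection-limit values) plus 160 positive intensities.
pooled = np.concatenate([np.zeros(40), rng.lognormal(0.0, 1.0, 160)])

# Under the null, the zero proportion is estimated from the data,
# as in the paper.
p_hat = np.mean(pooled == 0)

def mixture_cdf(t, p, nonzero):
    """F(t) = p * 1[t >= 0] + (1 - p) * G(t), where G is taken here as
    the empirical CDF of the non-zero observations."""
    jump = p * (t >= 0)
    cont = (1 - p) * np.mean(nonzero <= t)
    return jump + cont

nonzero = pooled[pooled > 0]
# The CDF jumps to p_hat at zero, then rises continuously toward 1.
```

Note that no detection limit needs to be known: the jump height alone carries the information from the zeros, matching the abstract's claim.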

  17. Model for the techno-economic analysis of common work of wind power and CCGT power plant to offer constant level of power in the electricity market

    NASA Astrophysics Data System (ADS)

    Tomsic, Z.; Rajsl, I.; Filipovic, M.

    2017-11-01

    Wind power varies over time, mainly under the influence of meteorological fluctuations. The variations occur on all time scales. Understanding these variations and their predictability is of key importance for the integration and optimal utilization of wind in the power system. There are two major attributes of variable generation that notably impact participation in power exchanges: variability (the output of variable generation changes, resulting in fluctuations in plant output on all time scales) and uncertainty (the magnitude and timing of variable generation output are less predictable; wind power output has low levels of predictability). Because of this variability and uncertainty, wind plants cannot participate in the electricity market, especially in power exchanges. For this purpose, the paper presents a techno-economic analysis of the operation of wind plants together with a combined cycle gas turbine (CCGT) plant as support for offering continuous power to the electricity market. A model of wind farms and a CCGT plant was developed in the program PLEXOS based on real hourly input data and all characteristics of the CCGT, with special analysis of the techno-economic characteristics of different types of starts and stops of the plant. The model analyzes the following: costs of different start-stop characteristics (hot, warm, and cold start-ups and shutdowns) and part-load performance of the CCGT. Besides the costs, technical restrictions were considered, such as start-up time depending on outage duration, minimum operation time, and minimum load or peaking capability.
For calculation purposes, the following parameters must be known in order to economically evaluate changes in the start-up process: ramp-up and ramp-down rates, start-time reduction, fuel mass flow during start, electricity production during start, variable cost of the start-up process, cost and charges for lifetime consumption for each start and start type, remuneration during start-up time for expected or unexpected starts, the cost and revenues for balancing energy (important when participating in the electricity market), and the cost or revenues for CO2 certificates. The main motivation for this analysis is to investigate possibilities to participate in power exchanges by offering continuous guaranteed power from wind plants, backing them up with a CCGT power plant.

  18. Mood, mood regulation expectancies and frontal systems functioning in current smokers versus never-smokers in China and Australia.

    PubMed

    Lyvers, Michael; Carlopio, Cassandra; Bothma, Vicole; Edwards, Mark S

    2013-11-01

    Indices of mood, mood regulation expectancies and everyday executive functioning were examined in adult current smokers and never-smokers of both genders in Australia (N = 97), where anti-smoking campaigns have dramatically reduced smoking prevalence and acceptability, and in China (N = 222), where smoking prevalence and public acceptance of smoking remain high. Dependent measures included the Depression Anxiety Stress Scales (DASS-21), the Negative Mood Regulation (NMR) expectancies scale, the Frontal Systems Behavior Scale (FrSBe), the Fagerström Test for Nicotine Dependence (FTND) and the Alcohol Use Disorders Identification Test (AUDIT). Multivariate analyses of covariance (MANCOVAs) controlling for demographic and recruitment related variables revealed highly significant differences between current smokers and never-smokers in both countries such that smokers indicated worse moods and poorer functioning than never-smokers on all dependent measures. Chinese smokers scored significantly worse on all dependent measures than Australian smokers whereas Chinese and Australian never-smokers did not differ on any of the same measures. Although nicotine dependence level as measured by FTND was significantly higher in Chinese than Australian smokers and was significantly correlated with all other dependent measures, inclusion of FTND scores as another covariate in MANCOVA did not eliminate the highly significant differences between Chinese and Australian smokers. Results are interpreted in light of the relative ease of taking up and continuing smoking in China compared to Australia today. © 2013 Elsevier Ltd. All rights reserved.

  19. Do nurses wish to continue working for the UK National Health Service? A comparative study of three generations of nurses.

    PubMed

    Robson, Andrew; Robson, Fiona

    2015-01-01

    To identify the combination of variables that explain nurses' continuation intention in the UK National Health Service. This alternative arena has permitted the replication of a private sector Australian study. This study provides understanding about the issues that affect nurse retention in a sector where employee attrition is a key challenge, further exacerbated by an ageing workforce. A quantitative study based on a self-completion survey questionnaire completed in 2010. Nurses employed in two UK National Health Service Foundation Trusts were surveyed and assessed using seven work-related constructs and various demographics including age generation. Through correlation, multiple regression and stepwise regression analysis, the potential combined effect of various explanatory variables on continuation intention was assessed, across the entire nursing cohort and in three age-generation groups. Three variables act in combination to explain continuation intention: work-family conflict, work attachment and importance of work to the individual. This combination of significant explanatory variables was consistent across the three generations of nursing employee. Work attachment was identified as the strongest marginal predictor of continuation intention. Work orientation has a greater impact on continuation intention compared with employer-directed interventions such as leader-member exchange, teamwork and autonomy. UK nurses are homogeneous across the three age-generations regarding explanation of continuation intention, with the significant explanatory measures being recognizably narrower in their focus and more greatly concentrated on the individual. This suggests that differentiated approaches to retention should perhaps not be pursued in this sectoral context. © 2014 John Wiley & Sons Ltd.

  20. 26 CFR 1.467-5 - Section 467 rental agreements with variable interest.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 6 2010-04-01 2010-04-01 false Section 467 rental agreements with variable interest. 1.467-5 Section 1.467-5 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Taxable Year for Which Deductions Taken § 1.467-5 Section 467...

  1. Multivariate dynamic Tobit models with lagged observed dependent variables: An effectiveness analysis of highway safety laws.

    PubMed

    Dong, Chunjiao; Xie, Kun; Zeng, Jin; Li, Xia

    2018-04-01

    Highway safety laws aim to influence driver behaviors so as to reduce the frequency and severity of crashes, and their outcomes. A given highway safety law may have different effects on crashes across severities. Understanding such effects can help policy makers upgrade current laws and hence improve traffic safety. To investigate the effects of highway safety laws on crashes across severities, multivariate models are needed to account for interdependency in crash counts across severities. Based on the characteristics of the dependent variables, multivariate dynamic Tobit (MVDT) models are proposed to analyze crash counts that are aggregated at the state level. Lagged observed dependent variables are incorporated into the MVDT models to account for potential temporal correlation in the crash data. State highway safety law-related factors are used as the explanatory variables, and socio-demographic and traffic factors are used as the control variables. Three models, a MVDT model with lagged observed dependent variables, a MVDT model with unobserved random variables, and a multivariate static Tobit (MVST) model, are developed and compared. The results show that among the investigated models, the MVDT models with lagged observed dependent variables have the best goodness-of-fit. The findings indicate that, compared to the MVST, the MVDT models have better explanatory power and prediction accuracy. The MVDT model with lagged observed variables can better handle the stochasticity and dependency in the temporal evolution of the crash counts, and the estimated values from the model are closer to the observed values. The results show that more lives could be saved if law enforcement agencies make a sustained effort to educate the public about the importance of motorcyclists wearing helmets.
Motor vehicle crash-related deaths, injuries, and property damages could be reduced if states enact laws for stricter text messaging rules, higher speeding fines, older licensing age, and stronger graduated licensing provisions. Injury and PDO crashes would be significantly reduced with stricter laws prohibiting the use of hand-held communication devices and higher fines for drunk driving. Copyright © 2018 Elsevier Ltd. All rights reserved.
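The censored-regression core of a Tobit model can be sketched as below. This is a hedged, univariate illustration of the likelihood idea only; the paper's actual models are multivariate and dynamic with lagged dependent variables, which this sketch omits:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
n = 2000

# Latent outcome y* = b0 + b1*x + e; we observe y = max(y*, 0),
# mimicking non-negative aggregated counts censored at zero.
x = rng.normal(size=n)
y_star = 0.5 + 1.0 * x + rng.normal(0.0, 1.0, n)
y = np.maximum(y_star, 0.0)

def negloglik(theta):
    b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)          # parameterize sigma > 0
    mu = b0 + b1 * x
    cens = y == 0
    ll = np.where(
        cens,
        stats.norm.logcdf((0.0 - mu) / sigma),           # P(y* <= 0)
        stats.norm.logpdf((y - mu) / sigma) - log_sigma,  # density of y*
    )
    return -ll.sum()

res = optimize.minimize(negloglik, x0=[0.0, 0.0, 0.0], method="BFGS")
b0_hat, b1_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
```

An ordinary least-squares fit to the censored `y` would be biased toward zero; the Tobit likelihood recovers the latent parameters by treating censored and uncensored observations with their respective probability contributions.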

  2. Satisfying the Einstein-Podolsky-Rosen criterion with massive particles

    NASA Astrophysics Data System (ADS)

    Peise, J.; Kruse, I.; Lange, K.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Hammerer, K.; Santos, L.; Smerzi, A.; Klempt, C.

    2016-03-01

    In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics by devising a quantum state of two massive particles with maximally correlated space and momentum coordinates. The EPR criterion qualifies such continuous-variable entangled states, as shown successfully with light fields. Here, we report on the production of massive particles which meet the EPR criterion for continuous phase/amplitude variables. The created quantum state of ultracold atoms shows an EPR parameter of 0.18(3), which is 2.4 standard deviations below the threshold of 1/4. Our state presents a resource for tests of quantum nonlocality with massive particles and a wide variety of applications in the field of continuous-variable quantum information and metrology.

  3. Testing for entanglement with periodic coarse graining

    NASA Astrophysics Data System (ADS)

    Tasca, D. S.; Rudnicki, Łukasz; Aspden, R. S.; Padgett, M. J.; Souto Ribeiro, P. H.; Walborn, S. P.

    2018-04-01

    Continuous-variable systems find valuable applications in quantum information processing. To deal with an infinite-dimensional Hilbert space, one in general has to handle large numbers of discretized measurements in tasks such as entanglement detection. Here we employ the continuous transverse spatial variables of photon pairs to experimentally demonstrate entanglement criteria based on a periodic structure of coarse-grained measurements. The periodization of the measurements allows an efficient evaluation of entanglement using spatial masks acting as mode analyzers over the entire transverse field distribution of the photons and without the need to reconstruct the probability densities of the conjugate continuous variables. Our experimental results demonstrate the utility of the derived criteria with a success rate in entanglement detection of ~60% relative to the 7344 studied cases.

  4. Composable security proof for continuous-variable quantum key distribution with coherent states.

    PubMed

    Leverrier, Anthony

    2015-02-20

    We give the first composable security proof for continuous-variable quantum key distribution with coherent states against collective attacks. Crucially, in the limit of large blocks the secret key rate converges to the usual value computed from the Holevo bound. Combining our proof with either the de Finetti theorem or the postselection technique then shows the security of the protocol against general attacks, thereby confirming the long-standing conjecture that Gaussian attacks are optimal asymptotically in the composable security framework. We expect that our parameter estimation procedure, which does not rely on any assumption about the quantum state being measured, will find applications elsewhere, for instance, for the reliable quantification of continuous-variable entanglement in finite-size settings.

  5. Finite-size analysis of a continuous-variable quantum key distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leverrier, Anthony; Grosshans, Frederic; Grangier, Philippe

    2010-06-15

    The goal of this paper is to extend the framework of finite-size analysis recently developed for quantum key distribution to continuous-variable protocols. We do not solve this problem completely here, and we mainly consider the finite-size effects on the parameter estimation procedure. Despite the fact that some questions are left open, we are able to give an estimation of the secret key rate for protocols which do not contain a postselection procedure. As expected, these results are significantly more pessimistic than those obtained in the asymptotic regime. However, we show that recent continuous-variable protocols are able to provide fully secure secret keys in the finite-size scenario, over distances larger than 50 km.

  6. A novel approach for analyzing data on recurrent events with duration to estimate the combined cumulative rate of both variables over time.

    PubMed

    Bhattacharya, Sudipta

    2018-06-01

    Recurrent adverse events in clinical trials, once they occur, often continue for some duration of time; the number of events along with their durations is clinically considered a measure of the severity of the disease under study. While methods are available for analyzing recurrent events, durations, or both side by side, no effort has so far been made to combine them into a single measure. Such a single-valued combined measure may help clinicians assess the overall effect of recurrence, comprising both events and durations. A non-parametric approach is adopted here to develop an estimator of the combined rate of both the recurrence of events and the event continuation, that is, the duration per event. The proposed estimator produces a single numerical value, whose interpretation and meaningfulness are discussed through the analysis of a real-life clinical dataset. The algebraic expression of the variance is derived, the asymptotic normality of the estimator is noted, and a demonstration is provided of how the estimator can be used in testing statistical hypotheses. Further possible developments of the estimator are also noted: adjusting for the dependence of event occurrences on the history of the process generating recurrent events through covariates, and handling dependent censoring.
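    As a toy illustration of combining event counts and durations into one number, the sketch below averages, across subjects, the number of events started by time t plus the event-time accrued by t. The data and the estimator are hypothetical simplifications, not the author's exact formulation:

```python
# Hypothetical data: each tuple is (subject_id, event_start_time, event_duration).
events = [
    (0, 1.0, 0.5), (0, 3.0, 1.0),
    (1, 2.0, 0.2),
    (2, 0.5, 0.8), (2, 4.0, 0.4),
]

def combined_cumulative_rate(events, n_subjects, t):
    """Average over subjects of (number of events started by t
    plus total event duration accrued by t) -- a deliberately
    simplified stand-in for the combined event/duration estimator."""
    total = 0.0
    for subj, start, dur in events:
        if start <= t:
            total += 1.0                  # event-count contribution
            total += min(dur, t - start)  # duration accrued up to time t
    return total / n_subjects
```

    At t = 5.0 all five hypothetical events have started, so the combined value is (5 events + 2.9 time-units of duration) / 3 subjects.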

  7. Continuing evaluation of bipolar linear devices for total dose bias dependency and ELDRS effects

    NASA Technical Reports Server (NTRS)

    McClure, Steven S.; Gorelick, Jerry L.; Yui, Candice; Rax, Bernard G.; Wiedeman, Michael D.

    2003-01-01

    We present results of continuing efforts to evaluate total dose bias dependency and ELDRS effects in bipolar linear microcircuits. Several devices were evaluated, each exhibiting moderate to significant bias and/or dose rate dependency.

  8. A computer graphics display and data compression technique

    NASA Technical Reports Server (NTRS)

    Teague, M. J.; Meyer, H. G.; Levenson, L. (Editor)

    1974-01-01

    The computer program discussed is intended for the graphical presentation of a general dependent variable X that is a function of two independent variables, U and V. The required input to the program is the variation of the dependent variable with one of the independent variables for various fixed values of the other. The computer program is named CRP, and the output is provided by the SD 4060 plotter. Program CRP is an extremely flexible program that offers the user a wide variety of options. The dependent variable may be presented in either a linear or a logarithmic manner. Automatic centering of the plot is provided in the ordinate direction, and the abscissa is scaled automatically for a logarithmic plot. A description of the carpet plot technique is given along with the coordinate system used in the program. Various aspects of the program logic are discussed and detailed documentation of the data card format is presented.
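    Although CRP itself targeted the SD 4060 plotter, the carpet-plot idea — offsetting each fixed-V curve horizontally by an amount proportional to V, so the family of curves reads as a surface with no single meaningful abscissa — can be sketched with matplotlib. The curve shapes and offsets below are made up for illustration:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script needs no display
import matplotlib.pyplot as plt

# Hypothetical dependent variable X(U, V), plotted carpet-style.
U = np.linspace(0.0, 1.0, 20)
V_values = [0.0, 0.5, 1.0]
shift = 0.6  # horizontal offset per unit V -- the carpet-plot trick

fig, ax = plt.subplots()
for V in V_values:
    X = (1 + V) * np.sin(np.pi * U)      # example X(U, V)
    ax.plot(U + shift * V, X, label=f"V = {V}")
ax.set_ylabel("X")
ax.set_xticks([])  # carpet plots have no meaningful single abscissa
ax.legend()
fig.savefig("carpet.png")
```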

  9. 26 CFR 1.801-7 - Variable annuities.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...) INCOME TAXES (CONTINUED) Life Insurance Companies § 1.801-7 Variable annuities. (a) In general. (1... variable annuity contract vary with the insurance company's investment experience with respect to such.... Accordingly, a company issuing variable annuity contracts shall qualify as a life insurance company for...

  10. 26 CFR 1.801-7 - Variable annuities.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...) INCOME TAXES (CONTINUED) Life Insurance Companies § 1.801-7 Variable annuities. (a) In general. (1... variable annuity contract vary with the insurance company's investment experience with respect to such.... Accordingly, a company issuing variable annuity contracts shall qualify as a life insurance company for...

  11. 26 CFR 1.801-7 - Variable annuities.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...) INCOME TAXES (CONTINUED) Life Insurance Companies § 1.801-7 Variable annuities. (a) In general. (1... variable annuity contract vary with the insurance company's investment experience with respect to such.... Accordingly, a company issuing variable annuity contracts shall qualify as a life insurance company for...

  12. 26 CFR 1.801-7 - Variable annuities.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) INCOME TAXES (CONTINUED) Life Insurance Companies § 1.801-7 Variable annuities. (a) In general. (1... variable annuity contract vary with the insurance company's investment experience with respect to such.... Accordingly, a company issuing variable annuity contracts shall qualify as a life insurance company for...

  13. Simple and Double Alfven Waves: Hamiltonian Aspects

    NASA Astrophysics Data System (ADS)

    Webb, G. M.; Zank, G. P.; Hu, Q.; le Roux, J. A.; Dasgupta, B.

    2011-12-01

    We discuss the nature of simple and double Alfvén waves. Simple waves depend on a single phase variable φ, but double waves depend on two independent phase variables φ1 and φ2. The phase variables depend on the space and time coordinates x and t. Simple and double Alfvén waves have the same integrals, namely, the entropy, density, magnetic pressure, and group velocity (the sum of the Alfvén and fluid velocities) are constant throughout the flow. We present examples of both simple and double Alfvén waves, and discuss Hamiltonian formulations of the waves.

  14. MULTIVARIATE ANALYSIS OF DRINKING BEHAVIOUR IN A RURAL POPULATION

    PubMed Central

    Mathrubootham, N.; Bashyam, V.S.P.; Shahjahan

    1997-01-01

    This study was carried out to find out the drinking pattern in a rural population, using multivariate techniques. 386 current users identified in a community were assessed with regard to their drinking behaviours using a structured interview. For the purposes of the study the questions were condensed into 46 meaningful variables. In bivariate analysis, 14 variables, including dependent variables such as dependence, MAST & CAGE (measuring alcoholic status), Q.F. Index, and troubled drinking, were found to be significant. Using these variables, multivariate techniques such as ANOVA, correlation, regression analysis, and factor analysis were applied, using both SPSS PC+ and an HCL Magnum mainframe computer with the FOCUS package under UNIX. Results revealed that a number of factors such as drinking style, duration of drinking, pattern of abuse, Q.F. Index, and various problems influenced drinking, and some of them set up a vicious circle. Factor analysis revealed mainly three factors: abuse, dependence, and social drinking. Dependence could be divided into low/moderate dependence. The implications and practical applications of these tests are also discussed. PMID:21584077

  15. Spatial Variability of Snowpack Properties On Small Slopes

    NASA Astrophysics Data System (ADS)

    Pielmeier, C.; Kronholm, K.; Schneebeli, M.; Schweizer, J.

    The spatial variability of alpine snowpacks is created by a variety of processes such as deposition, wind erosion, sublimation, melting, temperature, radiation, and metamorphism of the snow. Spatial variability is thought to strongly control the avalanche initiation and failure propagation processes. Local snowpack measurements are currently the basis for avalanche warning services, and there are contradicting hypotheses about the spatial continuity of avalanche-active snow layers and interfaces. Very little is known so far about the spatial variability of the snowpack; we have therefore developed a systematic and objective method to measure the spatial variability of snowpack properties and layering and its relation to stability. For complete coverage, the analysis of spatial variability has to entail all scales from millimeters to kilometers. In this study the small- to medium-scale spatial variability is investigated, i.e., the range from centimeters to tens of meters. During the winter 2000/2001 we took systematic measurements in lines and grids on a flat snow test field with grid distances from 5 cm to 0.5 m. Furthermore, we measured systematic grids with grid distances between 0.5 m and 2 m in undisturbed flat fields and on small slopes above the tree line at the Choerbschhorn, in the region of Davos, Switzerland. On 13 days we measured the spatial pattern of the snowpack stratigraphy with more than 110 snow micro-penetrometer measurements on slopes and flat fields. Within this measuring grid we placed 1 rutschblock and 12 stuffblock tests to measure the stability of the snowpack. With the large number of measurements we are able to use geostatistical methods to analyse the spatial variability of the snowpack. Typical correlation lengths are calculated from semivariograms. Discerning the systematic trends from random spatial variability is analysed using statistical models. Scale dependencies are shown and recurring scaling patterns are outlined. The importance of the small- and medium-scale spatial variability for the larger (kilometer) scale spatial variability, as well as for avalanche formation, is discussed. Finally, an outlook on spatial models for snowpack variability is given.
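    The semivariogram step mentioned above has a standard empirical form: γ(h) is half the mean squared difference between measurements whose separation distance falls in a bin around lag h, and the correlation length is read off where γ levels out. A minimal NumPy sketch (not the authors' code):

```python
import numpy as np

def empirical_semivariogram(coords, values, bins):
    """gamma(h) = 0.5 * mean squared difference over pairs whose
    separation distance falls in each bin [bins[i], bins[i+1])."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)   # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = np.empty(len(bins) - 1)
    for i in range(len(bins) - 1):
        mask = (d >= bins[i]) & (d < bins[i + 1])
        gamma[i] = sq[mask].mean() if mask.any() else np.nan
    return gamma
```

    For three collinear points with values 0, 1, 2 at unit spacing, the lag-1 bin gives γ = 0.5 and the lag-2 bin γ = 2.0.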

  16. 5 CFR 843.405 - Dependency.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Dependency. 843.405 Section 843.405 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-DEATH BENEFITS AND EMPLOYEE REFUNDS Child Annuities § 843.405 Dependency. To be...

  17. 5 CFR 843.405 - Dependency.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Dependency. 843.405 Section 843.405 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-DEATH BENEFITS AND EMPLOYEE REFUNDS Child Annuities § 843.405 Dependency. To be...

  18. 5 CFR 843.405 - Dependency.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Dependency. 843.405 Section 843.405 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-DEATH BENEFITS AND EMPLOYEE REFUNDS Child Annuities § 843.405 Dependency. To be...

  19. 5 CFR 843.405 - Dependency.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Dependency. 843.405 Section 843.405 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-DEATH BENEFITS AND EMPLOYEE REFUNDS Child Annuities § 843.405 Dependency. To be...

  20. 5 CFR 843.405 - Dependency.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Dependency. 843.405 Section 843.405 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-DEATH BENEFITS AND EMPLOYEE REFUNDS Child Annuities § 843.405 Dependency. To be...

  1. On accommodating spatial interactions in a Generalized Heterogeneous Data Model (GHDM) of mixed types of dependent variables.

    DOT National Transportation Integrated Search

    2015-12-01

    We develop an econometric framework for incorporating spatial dependence in integrated model systems of latent variables and multidimensional mixed data outcomes. The framework combines Bhat's Generalized Heterogeneous Data Model (GHDM) with a spat...

  2. Symbol-and-Arrow Diagrams in Teaching Pharmacokinetics.

    ERIC Educational Resources Information Center

    Hayton, William L.

    1990-01-01

    Symbol-and-arrow diagrams are helpful adjuncts to equations derived from pharmacokinetic models. Both show relationships among dependent and independent variables. Diagrams show only qualitative relationships, but clearly show which variables are dependent and which are independent, helping students understand complex but important functional…

  3. Variable selection in discrete survival models including heterogeneity.

    PubMed

    Groll, Andreas; Tutz, Gerhard

    2017-04-01

    Several variable selection procedures are available for continuous time-to-event data. However, if time is measured in a discrete way and therefore many ties occur models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge and lasso type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and an application to the birth of the first child.
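    The ridge-plus-lasso idea can be imitated with a generic elastic-net-penalized discrete-time hazard model: expand each subject into person-period rows and fit a penalized logistic regression for the per-interval event probability. The sketch below uses simulated data and scikit-learn's generic elastic-net penalty as a stand-in for the authors' survival-tailored penalty:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical person-period data: one row per discrete interval survived;
# the binary response marks whether the event occurred in that interval.
n, p, T = 200, 5, 6
X = rng.normal(size=(n, p))
beta = np.array([1.0, -1.0, 0.0, 0.0, 0.0])  # only two informative covariates
rows, resp = [], []
for i in range(n):
    hazard = 1.0 / (1.0 + np.exp(-(-2.0 + X[i] @ beta)))
    for t in range(T):
        event = rng.random() < hazard
        rows.append(X[i])
        resp.append(int(event))
        if event:
            break  # subject leaves the risk set after the event

# Elastic-net penalty (ridge + lasso mix) as a generic stand-in for the
# discrete-survival-tailored penalty proposed in the paper.
model = LogisticRegression(penalty="elasticnet", l1_ratio=0.5,
                           C=0.5, solver="saga", max_iter=5000)
model.fit(np.array(rows), np.array(resp))
coef = model.coef_.ravel()
```

    The penalty should shrink the three uninformative coefficients toward zero while retaining the two informative ones.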

  4. Fractional Programming for Communication Systems—Part II: Uplink Scheduling via Matching

    NASA Astrophysics Data System (ADS)

    Shen, Kaiming; Yu, Wei

    2018-05-01

    This two-part paper develops novel methodologies for using fractional programming (FP) techniques to design and optimize communication systems. Part I of this paper proposes a new quadratic transform for FP and treats its application to continuous optimization problems. In this Part II of the paper, we study discrete problems, such as those involving user scheduling, which are considerably more difficult to solve. Unlike the continuous problems, discrete or mixed discrete-continuous problems normally cannot be recast as convex problems. In contrast to the common heuristic of relaxing the discrete variables, this work reformulates the original problem in an FP form amenable to distributed combinatorial optimization. The paper illustrates this methodology by tackling the important and challenging problem of uplink coordinated multi-cell user scheduling in wireless cellular systems. Uplink scheduling is more challenging than downlink scheduling, because uplink user scheduling decisions significantly affect the interference pattern in nearby cells. Further, the discrete scheduling variable needs to be optimized jointly with continuous variables such as transmit power levels and beamformers. The main idea of the proposed FP approach is to decouple the interaction among the interfering links, thereby permitting a distributed and joint optimization of the discrete and continuous variables with provable convergence. The paper shows that the well-known weighted minimum mean-square-error (WMMSE) algorithm can also be derived from a particular use of FP; but our proposed FP-based method significantly outperforms WMMSE when discrete user scheduling variables are involved, both in terms of run-time efficiency and solution quality.
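    The quadratic transform from Part I can be demonstrated on a toy scalar ratio maximization: fix y = sqrt(A(x))/B(x) at the current iterate, re-maximize the surrogate 2·y·sqrt(A(x)) − y²·B(x) over x, and repeat; the surrogate touches A/B from below at the current point, so the ratio is nondecreasing across iterations. The example below uses arbitrary illustrative functions on a grid, not a communication-system objective:

```python
import numpy as np

# Toy ratio maximization: max_x A(x)/B(x) over a grid, via the
# quadratic transform (illustrative functions only).
x_grid = np.linspace(0.1, 5.0, 500)
A = np.log(1 + x_grid)      # e.g. a rate-like numerator
B = 1.0 + 0.5 * x_grid      # e.g. a cost-like denominator

x = x_grid[0]
for _ in range(20):
    a, b = np.log(1 + x), 1.0 + 0.5 * x
    y = np.sqrt(a) / b                    # optimal auxiliary variable at x
    obj = 2 * y * np.sqrt(A) - y**2 * B   # concave surrogate of A/B
    x = x_grid[np.argmax(obj)]            # re-maximize over the grid
```

    At convergence the iterate sits at (or extremely close to) the grid maximizer of the original ratio A/B.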

  5. Multipartite entanglement in three-mode Gaussian states of continuous-variable systems: Quantification, sharing structure, and decoherence

    NASA Astrophysics Data System (ADS)

    Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio

    2006-03-01

    We present a complete analysis of the multipartite entanglement of three-mode Gaussian states of continuous-variable systems. We derive standard forms which characterize the covariance matrix of pure and mixed three-mode Gaussian states up to local unitary operations, showing that the local entropies of pure Gaussian states are bound to fulfill a relationship which is stricter than the general Araki-Lieb inequality. Quantum correlations can be quantified by a proper convex roof extension of the squared logarithmic negativity, the continuous-variable tangle, or contangle. We review and elucidate in detail the proof that in multimode Gaussian states the contangle satisfies a monogamy inequality constraint [G. Adesso and F. Illuminati, New J. Phys. 8, 15 (2006)]. The residual contangle, emerging from the monogamy inequality, is an entanglement monotone under Gaussian local operations and classical communication and defines a measure of genuine tripartite entanglement. We determine the analytical expression of the residual contangle for arbitrary pure three-mode Gaussian states and study in detail the distribution of quantum correlations in such states. This analysis yields that pure, symmetric states allow for a promiscuous entanglement sharing, having both maximum tripartite entanglement and maximum couplewise entanglement between any pair of modes. We thus name these states GHZ/W states of continuous-variable systems because they are simultaneously continuous-variable counterparts of both the GHZ and W states of three qubits. We finally consider the effect of decoherence on three-mode Gaussian states, studying the decay of the residual contangle. The GHZ/W states are shown to be maximally robust against losses and thermal noise.

  6. Maximum power point tracking algorithm based on sliding mode and fuzzy logic for photovoltaic sources under variable environmental conditions

    NASA Astrophysics Data System (ADS)

    Atik, L.; Petit, P.; Sawicki, J. P.; Ternifi, Z. T.; Bachir, G.; Della, M.; Aillerie, M.

    2017-02-01

    Solar panels have a nonlinear voltage-current characteristic with a distinct maximum power point (MPP), which depends on environmental factors such as temperature and irradiation. In order to continuously harvest maximum power from the solar panels, they have to operate at their MPP despite the inevitable changes in the environment. Various methods for maximum power point tracking (MPPT) have been developed and implemented in solar power electronic controllers to increase the efficiency of electricity production from renewables. In this paper we use the Matlab/Simulink tools to compare two MPP tracking methods, fuzzy logic control (FL) and sliding mode control (SMC), considering their efficiency in solar energy production.
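    Neither fuzzy-logic nor sliding-mode control is easy to show in a few lines, but the MPPT problem itself can be illustrated with the classical perturb-and-observe baseline (a different, simpler technique than either controller in the paper) on a made-up power curve, all numbers hypothetical:

```python
def power(v):
    """Hypothetical PV power curve with a single maximum at v = 17 V."""
    return 100.0 - 0.5 * (v - 17.0) ** 2

def perturb_and_observe(v0=10.0, step=0.2, iters=200):
    """Classic P&O: keep perturbing the operating voltage in the same
    direction while power rises; reverse direction when power drops.
    The operating point ends up oscillating around the MPP."""
    v, p = v0, power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = power(v_new)
        if p_new < p:
            direction = -direction  # power dropped: reverse the perturbation
        v, p = v_new, p_new
    return v
```

    The residual oscillation around the MPP (here within one step of 17 V) is exactly the steady-state ripple that more advanced controllers such as FL and SMC aim to reduce.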

  7. Update on the therapy of Behçet disease

    PubMed Central

    Saleh, Zeinab

    2014-01-01

    Behçet disease is a chronic inflammatory systemic disorder, characterized by a relapsing and remitting course. It manifests with oral and genital ulcerations, skin lesions, uveitis, and vascular, central nervous system and gastrointestinal involvement. The main histopathological finding is a widespread vasculitis of the arteries and veins of any size. The cause of this disease is presumed to be multifactorial involving infectious triggers, genetic predisposition, and dysregulation of the immune system. As the clinical expression of Behçet disease is heterogeneous, pharmacological therapy is variable and depends largely on the severity of the disease and organ involvement. Treatment of Behçet disease continues to be based largely on anecdotal case reports, case series, and a few randomized clinical trials. PMID:24790727

  8. An improved switching converter model. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters in the continuous and discontinuous modes was carried out by averaging and discrete sampling techniques. A model, the discrete average model, was developed by combining these two techniques; it accurately predicts the envelope of the output voltage and is easy to implement in circuit and state-variable forms. The proposed model is shown to depend on the type of duty-cycle control. The proper selection of the power stage model, between average and discrete average, is largely a function of the error processor in the feedback loop. The accuracy of measurement data taken by a conventional technique is affected by the conditions under which the data are collected.

  9. Solution of a cauchy problem for a diffusion equation in a Hilbert space by a Feynman formula

    NASA Astrophysics Data System (ADS)

    Remizov, I. D.

    2012-07-01

    The Cauchy problem for a class of diffusion equations in a Hilbert space is studied. It is proved that the Cauchy problem is well posed in the class of uniform limits of infinitely smooth bounded cylindrical functions on the Hilbert space, and the solution is presented in the form of the so-called Feynman formula, i.e., a limit of multiple integrals against a Gaussian measure as the multiplicity tends to infinity. It is also proved that the solution of the Cauchy problem depends continuously on the diffusion coefficient. A procedure reducing the approximate solution of an infinite-dimensional diffusion equation to the evaluation of a multiple integral of a real function of finitely many real variables is indicated.
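    Schematically, in the finite-dimensional constant-coefficient analogue (notation ours, not the paper's), a Feynman formula for $u_t = a\,\Delta u$, $u(0,\cdot)=u_0$ writes the solution as a limit of iterated Gaussian (heat-kernel) integrals:

```latex
u(t,x)=\lim_{n\to\infty}\int_{\mathbb{R}^d}\!\cdots\!\int_{\mathbb{R}^d}
u_0(x_n)\,\prod_{k=1}^{n}\frac{1}{(4\pi a\,t/n)^{d/2}}
\exp\!\left(-\frac{|x_k-x_{k-1}|^{2}}{4a\,t/n}\right)
dx_1\cdots dx_n,\qquad x_0=x .
```

    In the Hilbert-space setting of the paper, the integrals are taken against Gaussian measures rather than Lebesgue measure.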

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckdahn, Rainer, E-mail: Rainer.Buckdahn@univ-brest.fr; Li, Juan, E-mail: juanli@sdu.edu.cn; Ma, Jin, E-mail: jinma@usc.edu

    In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable, without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second-order variational equations and the corresponding second-order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  11. Non-stationary dynamics in the bouncing ball: A wavelet perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behera, Abhinna K., E-mail: abhinna@iiserkol.ac.in; Panigrahi, Prasanta K., E-mail: pprasanta@iiserkol.ac.in; Sekar Iyengar, A. N., E-mail: ansekar.iyengar@saha.ac.in

    2014-12-01

    The non-stationary dynamics of a bouncing ball, comprising both periodic as well as chaotic behavior, is studied through wavelet transform. The multi-scale characterization of the time series displays clear signatures of self-similarity, complex scaling behavior, and periodicity. Self-similar behavior is quantified by the generalized Hurst exponent, obtained through both wavelet-based multi-fractal detrended fluctuation analysis and Fourier methods. The scale-dependent variable window size of the wavelets aptly captures both the transients and non-stationary periodic behavior, including the phase synchronization of different modes. The optimal time-frequency localization of the continuous Morlet wavelet is found to delineate the scales corresponding to neutral turbulence, viscous dissipation regions, and different time-varying periodic modulations.

  12. A new surface-potential-based compact model for the MoS2 field effect transistors in active matrix display applications

    NASA Astrophysics Data System (ADS)

    Cao, Jingchen; Peng, Songang; Liu, Wei; Wu, Quantan; Li, Ling; Geng, Di; Yang, Guanhua; Ji, Zhouyu; Lu, Nianduan; Liu, Ming

    2018-02-01

    We present a continuous surface-potential-based compact model for molybdenum disulfide (MoS2) field effect transistors based on the multiple trapping release theory and the variable-range hopping theory. We also built contact resistance and velocity saturation models based on the analytical surface potential. This model is verified with experimental data and is able to accurately predict the temperature dependent behavior of the MoS2 field effect transistor. Our compact model is coded in Verilog-A, which can be implemented in a computer-aided design environment. Finally, we carried out an active matrix display simulation, which suggested that the proposed model can be successfully applied to circuit design.

  13. What do we mean by the word “Shock”?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Runnels, Scott Robert

    From one vantage point, a shock is a continuous but drastic change in state variables that occurs over very small time and length scales. These scales and the associated changes in state variables can be measured experimentally. From another vantage point, a shock is a mathematical singularity consisting of instantaneous changes in state variables. This more mathematical view gives rise to analytical solutions to idealized problems. And from a third vantage point, a shock is a structure in a hydrocode prediction, whose width depends on the simulation's grid resolution and artificial viscosity. These three vantage points can be in conflict when ideas from the associated fields are combined, and yet combining them is an important goal of an integrated modeling program. This presentation explores an example of how models for real materials in the presence of real shocks react to a hydrocode's numerical shocks of finite width. The presentation will include an introduction to plasticity for the novice, an historical view of plasticity algorithms, and a demonstration of how pursuing the meaning of "shock" has resulted in hydrocode improvements, and will conclude by answering some of the questions that arise from that pursuit. After the technical part of the presentation, a few slides advertising LANL's Computational Physics Student Summer Workshop will be shown.

  14. Termites promote resistance of decomposition to spatiotemporal variability in rainfall.

    PubMed

    Veldhuis, Michiel P; Laso, Francisco J; Olff, Han; Berg, Matty P

    2017-02-01

    The ecological impact of rapid environmental change will depend on the resistance of key ecosystem processes, which may be promoted by species that exert strong control over local environmental conditions. Recent theoretical work suggests that macrodetritivores increase the resistance of African savanna ecosystems to changing climatic conditions, but experimental evidence is lacking. We examined the effect of large fungus-growing termites and other non-fungus-growing macrodetritivores on decomposition rates empirically, under strong spatiotemporal variability in rainfall and temperature. Non-fungus-growing larger macrodetritivores (earthworms, woodlice, millipedes) promoted decomposition rates relative to microbes and small soil fauna (+34%), but both groups reduced their activities with decreasing rainfall. However, fungus-growing termites increased decomposition rates the most (+123%) under the most water-limited conditions, making overall decomposition rates largely independent of rainfall. We conclude that fungus-growing termites are of special importance in decoupling decomposition rates from spatiotemporal variability in rainfall because of the buffered environment they create within their extended phenotype (mounds), which allows decomposition to continue when abiotic conditions outside are less favorable. This points at a wider class of possibly important ecological processes, where soil-plant-animal interactions decouple ecosystem processes from large-scale climatic gradients. This may strongly alter predictions from current climate change models. © 2016 by the Ecological Society of America.

  15. The influence of the pressure force control signal on selected parameters of the vehicle continuously variable transmission

    NASA Astrophysics Data System (ADS)

    Bieniek, A.; Graba, M.; Prażnowski, K.

    2016-09-01

    The paper presents the results of research on the effect of the frequency of the control signal on the course of selected operating parameters of the continuously variable transmission (CVT). The study used a Fuji Hyper M6 gearbox with an electro-hydraulic control system and proprietary software for control and data acquisition developed in the LabView environment.

  16. Fuel economy effects and incremental cost, weight and leadtime impacts of employing a continuously variable transmission (CVT) in mid-size passenger cars or compact light trucks

    DOT National Transportation Integrated Search

    1999-06-01

    This report is a paper study of the fuel economy benefits on the Environmental Protection Agency (EPA) City and Highway Cycles of using a continuously variable transmission (CVT) in a 3625 lb (1644 kg) car and compact light truck. The baseline vehicl...

  17. Design study of a continuously variable roller cone traction CVT for electric vehicles

    NASA Technical Reports Server (NTRS)

    Mccoin, D. K.; Walker, R. D.

    1980-01-01

    Continuously variable ratio transmissions (CVT) featuring cone and roller traction elements and computerized controls are studied. The CVT meets or exceeds all requirements set forth in the design criteria. Further, a scalability analysis indicates the basic concept is applicable to lower and higher power units, with upward scaling for increased power being more readily accomplished.

  18. A journal bearing with variable geometry for the suppression of vibrations in rotating shafts: Simulation, design, construction and experiment

    NASA Astrophysics Data System (ADS)

    Chasalevris, Athanasios; Dohnal, Fadi

    2015-02-01

    The idea of a journal bearing with variable geometry was previously developed and investigated with respect to its principles of operation, giving very optimistic theoretical results for the vibration quenching of simple and more complicated rotor-bearing systems during passage through the first critical speed. This paper presents the journal bearing with variable geometry in its final form, together with the detailed design procedure. The bearing was constructed for application in a simple real rotor-bearing system that already exists as an experimental facility. The paper gives details on the manufactured prototype bearing as an experimental continuation of previous works that simulated the operating principle of this journal bearing. The design parameters are discussed thoroughly through numerical simulation of the fluid film pressure as a function of the variable fluid film thickness under operating conditions. The implementation of the variable-geometry bearing in an experimental rotor-bearing system is outlined. Various measurements highlight the efficiency of the proposed bearing element in vibration quenching during passage through resonance. The inspiration for the idea is the fact that altering the fluid film stiffness and damping characteristics during passage through resonance results in vibration quenching; this alteration of the bearing characteristics is achieved by introducing an additional fluid film thickness through the passive displacement of the lower half-bearing part. • The contribution of the current journal bearing to vibration quenching. • Experimental evidence for the VGJB's contribution.

  19. A land-use regression model for estimating microenvironmental diesel exposure given multiple addresses from birth through childhood.

    PubMed

    Ryan, Patrick H; Lemasters, Grace K; Levin, Linda; Burkle, Jeff; Biswas, Pratim; Hu, Shaohua; Grinshpun, Sergey; Reponen, Tiina

    2008-10-01

    The Cincinnati Childhood Allergy and Air Pollution Study (CCAAPS) is a prospective birth cohort whose purpose is to determine whether exposure to high levels of diesel exhaust particles (DEP) during early childhood increases the risk of developing allergic diseases. In order to estimate exposure to DEP, a land-use regression (LUR) model was developed using geographic data as independent variables and sampled levels of a marker of DEP as the dependent variable. A continuous wind direction variable was also created. The LUR model predicted 74% of the variability in sampled values with four variables: wind direction, length of bus routes within 300 m of the sampling site, a measure of truck intensity within 300 m of the sampling site, and elevation. The LUR model was subsequently applied to all locations where the child had spent more than eight hours per week from birth through age three. A time-weighted average (TWA) microenvironmental exposure estimate was derived for four time periods: 0-6 months, 7-12 months, 13-24 months, and 25-36 months. By age two, one third of the children were spending significant time at locations other than home, and by 36 months, 39% of the children had changed their residential addresses. The mean cumulative DEP exposure estimate increased from 70 microg/m3-days at age 6 months to 414 microg/m3-days at 36 months. Findings indicate that using birth addresses alone to estimate a child's exposure may result in exposure misclassification for children who spend a significant amount of time at a location with high exposure to DEP.
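    The time-weighted averaging step described above can be sketched as follows. This is a minimal illustration of the TWA idea, not the study's implementation; the function name and the example hours and concentrations are hypothetical:

    ```python
    def time_weighted_average(hours, concentrations):
        """Time-weighted average exposure across microenvironments.

        hours: time spent at each location (e.g., weekly hours)
        concentrations: estimated DEP-marker level at each location,
        such as the prediction of a land-use regression model.
        """
        if len(hours) != len(concentrations):
            raise ValueError("each location needs both a time and a concentration")
        total_time = sum(hours)
        # Weight each location's concentration by the fraction of time spent there.
        return sum(h * c for h, c in zip(hours, concentrations)) / total_time

    # Hypothetical child: 120 h/week at home (0.5 units), 40 h/week at daycare (1.3 units).
    twa = time_weighted_average([120, 40], [0.5, 1.3])  # = (120*0.5 + 40*1.3) / 160 = 0.7
    ```

    With equal times at every location the TWA reduces to the plain arithmetic mean of the concentrations.
    
    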

  20. Extremal entanglement and mixedness in continuous variable systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio

    2004-08-01

    We investigate the relationship between mixedness and entanglement for Gaussian states of continuous variable systems. We introduce generalized entropies based on Schatten p-norms to quantify the mixedness of a state and derive their explicit expressions in terms of symplectic spectra. We compare the hierarchies of mixedness provided by such measures with the one provided by the purity (defined as tr ρ² for the state ρ) for generic n-mode states. We then review the analysis proving the existence of both maximally and minimally entangled states at given global and marginal purities, with the entanglement quantified by the logarithmic negativity. Based on these results, we extend such an analysis to generalized entropies, introducing and fully characterizing maximally and minimally entangled states for given global and local generalized entropies. We compare the different roles played by the purity and by the generalized p-entropies in quantifying the entanglement and the mixedness of continuous variable systems. We introduce the concept of average logarithmic negativity, showing that it allows a reliable quantitative estimate of continuous variable entanglement by direct measurements of global and marginal generalized p-entropies.
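    For Gaussian states, the Schatten-p mixedness measures mentioned above depend only on the symplectic eigenvalues ν_k of the covariance matrix. A minimal sketch, assuming the standard Gaussian-state identity tr ρ^p = ∏_k 2^p / ((ν_k+1)^p − (ν_k−1)^p) with ν_k ≥ 1 (the function names are illustrative):

    ```python
    from math import prod

    def tr_rho_p(nu, p):
        """tr(rho^p) of a Gaussian state with symplectic eigenvalues nu (each >= 1)."""
        return prod(2**p / ((v + 1)**p - (v - 1)**p) for v in nu)

    def purity(nu):
        """Purity tr(rho^2); for Gaussian states this reduces to 1 / prod(nu)."""
        return tr_rho_p(nu, 2)

    # A pure single-mode state (nu = 1) has purity 1;
    # a single-mode thermal state with nu = 3 has purity 1/3.
    assert abs(purity([1.0]) - 1.0) < 1e-12
    assert abs(purity([3.0]) - 1.0 / 3.0) < 1e-12
    ```

    Note the consistency check built into the p = 2 case: 2² / ((ν+1)² − (ν−1)²) = 1/ν per mode, recovering the familiar Gaussian purity formula.
    
    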
