Sample records for linear model repeated

  1. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models and written under the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.
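
    A minimal Python sketch of the idea behind this abstract: fitting a Cox model to repeated, uncensored duration data while acknowledging within-subject correlation. The paper's method is a Cox mixed (frailty) model in R; here lifelines' cluster-robust standard errors stand in for that subject-level dependence, all column names and values are hypothetical, and a recent lifelines version with `formula` and `cluster_col` support is assumed.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical repeated duration data: two conditions per speaker, no censoring.
df = pd.DataFrame({
    "duration":  [0.42, 0.55, 0.38, 0.35, 0.47, 0.52, 0.60, 0.58, 0.45, 0.50],
    "observed":  [1] * 10,                  # every duration fully observed
    "condition": [0, 1] * 5,                # within-subject experimental condition
    "subject":   [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
})

cph = CoxPHFitter()
# cluster_col requests a robust (sandwich) covariance grouped by subject -- an
# approximation to the dependence a Cox mixed model captures via random effects.
cph.fit(df, duration_col="duration", event_col="observed",
        formula="condition", cluster_col="subject")
cph.print_summary()
```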

  2. Linearization improves the repeatability of quantitative dynamic contrast-enhanced MRI.

    PubMed

    Jones, Kyle M; Pagel, Mark D; Cárdenas-Rodríguez, Julio

    2018-04-01

    The purpose of this study was to compare the repeatabilities of the linear and nonlinear Tofts and reference region models (RRM) for dynamic contrast-enhanced MRI (DCE-MRI). Simulated and experimental DCE-MRI data from 12 rats with a flank tumor of C6 glioma acquired over three consecutive days were analyzed using four quantitative and four semi-quantitative DCE-MRI metrics. The quantitative methods used were: 1) linear Tofts model (LTM), 2) non-linear Tofts model (NTM), 3) linear RRM (LRRM), and 4) non-linear RRM (NRRM). The following semi-quantitative metrics were used: 1) maximum enhancement ratio (MER), 2) time to peak (TTP), 3) initial area under the curve (iauc64), and 4) slope. LTM and NTM were used to estimate Ktrans, while LRRM and NRRM were used to estimate Ktrans relative to muscle (RKtrans). Repeatability was assessed by calculating the within-subject coefficient of variation (wSCV) and the percent intra-subject variation (iSV) determined with the Gage R&R analysis. The iSV for RKtrans using LRRM was two-fold lower compared with NRRM under all simulated and experimental conditions. A similar trend was observed for the Tofts model, where LTM was at least 50% more repeatable than NTM under all experimental and simulated conditions. The semi-quantitative metrics iauc64 and MER were as repeatable as Ktrans and RKtrans estimated by LTM and LRRM, respectively. The iSV for iauc64 and MER were significantly lower than the iSV for slope and TTP. In simulations and experimental results, linearization improves the repeatability of quantitative DCE-MRI by at least 30%, making it as repeatable as the semi-quantitative metrics. Copyright © 2017 Elsevier Inc. All rights reserved.
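
    A minimal sketch of the within-subject coefficient of variation (wSCV) used as the repeatability metric above, under one common definition (root mean within-subject variance divided by the grand mean); the per-day Ktrans values are hypothetical.

```python
import numpy as np

# Hypothetical Ktrans estimates from three consecutive scan days (rows = rats).
ktrans = np.array([
    [0.21, 0.24, 0.22],   # rat 1: day 1, day 2, day 3
    [0.35, 0.31, 0.33],   # rat 2
    [0.18, 0.20, 0.19],   # rat 3
])

within_var = ktrans.var(axis=1, ddof=1)          # per-subject variance across days
wscv = np.sqrt(within_var.mean()) / ktrans.mean()
print(f"wSCV = {100 * wscv:.1f}%")
```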

  3. Power of Models in Longitudinal Study: Findings from a Full-Crossed Simulation Design

    ERIC Educational Resources Information Center

    Fang, Hua; Brooks, Gordon P.; Rizzo, Maria L.; Espy, Kimberly Andrews; Barcikowski, Robert S.

    2009-01-01

    Because the power properties of traditional repeated measures and hierarchical multivariate linear models have not been clearly determined in the balanced design for longitudinal studies in the literature, the authors present a power comparison study of traditional repeated measures and hierarchical multivariate linear models under 3…

  4. Genetic parameters for racing records in trotters using linear and generalized linear models.

    PubMed

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

    Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.
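
    A minimal sketch of estimating repeatability as the ratio of between-animal variance to total variance from a random-intercept mixed model. This is not the authors' animal model (which also uses pedigree information); the data below are simulated and the column names hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_horses, n_races = 50, 6
horse = np.repeat(np.arange(n_horses), n_races)
horse_effect = rng.normal(0, 0.05, n_horses)[horse]       # permanent animal effect
log_time = 2.0 + horse_effect + rng.normal(0, 0.08, n_horses * n_races)
df = pd.DataFrame({"horse": horse, "log_time": log_time})

m = smf.mixedlm("log_time ~ 1", df, groups=df["horse"]).fit()
var_between = float(m.cov_re.iloc[0, 0])   # between-horse variance component
var_within = m.scale                       # residual (within-horse) variance
repeatability = var_between / (var_between + var_within)
print(f"repeatability = {repeatability:.2f}")
```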

  5. [Analysis of binary classification repeated measurement data with GEE and GLMMs using SPSS software].

    PubMed

    An, Shengli; Zhang, Yanhong; Chen, Zheng

    2012-12-01

    To analyze binary classification repeated measurement data with generalized estimating equations (GEE) and generalized linear mixed models (GLMMs) using SPSS 19.0. GEE and GLMM models were tested on a binary classification repeated measurement data sample using SPSS 19.0. Compared with SAS, SPSS 19.0 allowed convenient analysis of categorical repeated measurement data using GEE and GLMMs.
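
    A minimal Python analogue (statsmodels rather than SPSS or SAS) of the GEE analysis described here: a logistic marginal model for a repeated binary outcome with an exchangeable working correlation. Variable names and data are hypothetical; the GLMM side would be fitted with a generalized linear mixed model instead (for example statsmodels' BinomialBayesMixedGLM or R's lme4).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, t = 40, 4                                         # subjects, repeated occasions
subj = np.repeat(np.arange(n), t)
time = np.tile(np.arange(t), n)
group = np.repeat(rng.integers(0, 2, n), t)
y = rng.binomial(1, 0.3 + 0.1 * group)               # binary repeated outcome
df = pd.DataFrame({"y": y, "subject": subj, "time": time, "group": group})

gee = smf.gee("y ~ time + group", groups="subject", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```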

  6. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies

    ERIC Educational Resources Information Center

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-01-01

    Purpose: Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. Method: We propose a…

  7. An R2 statistic for fixed effects in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R2 statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R2 statistic for the linear mixed model by using only a single model. The proposed R2 statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R2 statistic arises as a 1-1 function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R2 statistic leads immediately to a natural definition of a partial R2 statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R2, a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
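
    A small sketch of the 1-1 mapping from the overall F statistic to a model R2 of the kind described above. The form used here, R2 = qF/(qF + v) with q numerator and v denominator degrees of freedom, should be treated as an assumption and checked against the paper's exact definition.

```python
def r2_from_f(f_stat: float, df_num: float, df_den: float) -> float:
    """Map an F statistic with (df_num, df_den) degrees of freedom to an R2."""
    return df_num * f_stat / (df_num * f_stat + df_den)

# Example: an overall test of 3 fixed effects with 120 denominator df.
print(r2_from_f(f_stat=4.2, df_num=3, df_den=120))   # ~0.095
```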

  8. Hypothesis testing in functional linear regression models with Neyman's truncation and wavelet thresholding for longitudinal data.

    PubMed

    Yang, Xiaowei; Nie, Kun

    2008-03-15

    Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
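
    A minimal sketch of the three-step procedure for a simple two-group comparison, with a real cosine transform standing in for the paper's Fourier/wavelet transforms and simulated curves: transform each subject's repeated measures, form a per-coefficient contrast, and combine the standardized statistics with Fan's adaptive Neyman statistic. The cutoff K and all names are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(2)
n_sub, n_time, K = 30, 64, 16
group = np.repeat([0, 1], n_sub // 2)                 # 0 = control, 1 = treated
curves = rng.normal(0, 1, (n_sub, n_time))
curves[group == 1] += 0.5 * np.sin(np.linspace(0, 2 * np.pi, n_time))

# Step 1: orthonormal cosine transform of each curve; keep the first K coefficients.
coef = dct(curves, axis=1, norm="ortho")[:, :K]

# Step 2: per-coefficient two-sample contrast (a one-predictor linear model).
g1, g0 = coef[group == 1], coef[group == 0]
diff = g1.mean(axis=0) - g0.mean(axis=0)
se = np.sqrt(g1.var(axis=0, ddof=1) / len(g1) + g0.var(axis=0, ddof=1) / len(g0))
z = diff / se

# Step 3: adaptive Neyman statistic, T = max_m sum_{j<=m}(z_j^2 - 1) / sqrt(2m).
cum = np.cumsum(z ** 2 - 1)
t_an = np.max(cum / np.sqrt(2 * np.arange(1, K + 1)))
print(f"adaptive Neyman statistic = {t_an:.2f}")      # compare with null critical values
```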

  9. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Treesearch

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  10. [Analysis of variance of repeated data measured by water maze with SPSS].

    PubMed

    Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang

    2007-01-01

    To introduce the method of analyzing repeated data measured by water maze with SPSS 11.0, and to offer a reference statistical method to clinical and basic medicine researchers who use repeated measures designs. The repeated measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among groups and among measurement times. Firstly, Mauchly's test of sphericity should be used to judge whether there were relations among the repeatedly measured data. If any (P
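
    A minimal Python analogue (statsmodels rather than SPSS) of the repeated measures ANOVA described above, with hypothetical water-maze latencies measured on three days per animal. Note that statsmodels' AnovaRM does not report Mauchly's sphericity test.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical escape latencies (s) for four rats on three consecutive days.
df = pd.DataFrame({
    "rat":     [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "day":     [1, 2, 3] * 4,
    "latency": [55, 40, 28, 60, 45, 30, 50, 38, 25, 58, 44, 31],
})

res = AnovaRM(df, depvar="latency", subject="rat", within=["day"]).fit()
print(res)
```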

  11. Joint modelling of repeated measurement and time-to-event data: an introductory tutorial.

    PubMed

    Asar, Özgür; Ritchie, James; Kalra, Philip A; Diggle, Peter J

    2015-02-01

    The term 'joint modelling' is used in the statistical literature to refer to methods for simultaneously analysing longitudinal measurement outcomes, also called repeated measurement data, and time-to-event outcomes, also called survival data. A typical example from nephrology is a study in which the data from each participant consist of repeated estimated glomerular filtration rate (eGFR) measurements and time to initiation of renal replacement therapy (RRT). Joint models typically combine linear mixed effects models for repeated measurements and Cox models for censored survival outcomes. Our aim in this paper is to present an introductory tutorial on joint modelling methods, with a case study in nephrology. We describe the development of the joint modelling framework and compare the results with those obtained by the more widely used approaches of conducting separate analyses of the repeated measurements and survival times based on a linear mixed effects model and a Cox model, respectively. Our case study concerns a data set from the Chronic Renal Insufficiency Standards Implementation Study (CRISIS). We also provide details of our open-source software implementation to allow others to replicate and/or modify our analysis. The results for the conventional linear mixed effects model and the longitudinal component of the joint models were found to be similar. However, there were considerable differences between the results for the Cox model with time-varying covariate and the time-to-event component of the joint model. For example, the relationship between kidney function as measured by eGFR and the hazard for initiation of RRT was significantly underestimated by the Cox model that treats eGFR as a time-varying covariate, because the Cox model does not take measurement error in eGFR into account. Joint models should be preferred for simultaneous analyses of repeated measurement and survival data, especially when the former is measured with error and the association between the underlying error-free measurement process and the hazard for survival is of scientific interest. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
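
    A minimal sketch of the two separate analyses that the tutorial compares against joint modelling: a linear mixed model for the repeated eGFR values and a Cox model with eGFR as a time-varying covariate. The data and column names are hypothetical; fitting the joint model itself needs dedicated software (for example the authors' open-source implementation or R packages such as JM or joineR).

```python
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxTimeVaryingFitter

# Repeated eGFR measurements in long format (one row per visit).
long = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6],
    "years": [0, 1, 2] * 6,
    "egfr":  [60, 55, 50, 45, 40, 33, 70, 68, 66, 50, 42, 35, 65, 60, 57, 55, 48, 41],
})
# (1) Linear mixed effects model for the eGFR trajectories (random intercept).
lmm = smf.mixedlm("egfr ~ years", long, groups=long["id"]).fit()
print(lmm.summary())

# The same subjects in start/stop format with an RRT indicator per interval.
tv = pd.DataFrame({
    "id":    [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "start": [0, 1] * 6,
    "stop":  [1, 2] * 6,
    "egfr":  [60, 55, 45, 40, 70, 68, 50, 42, 65, 60, 55, 48],
    "event": [0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0],
})
# (2) Cox model treating eGFR as a time-varying covariate.
ctv = CoxTimeVaryingFitter()
ctv.fit(tv, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```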

  12. A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.

    2012-01-01

    A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…

  13. Multivariate mixed linear model analysis of longitudinal data: an information-rich statistical technique for analyzing disease resistance data

    USDA-ARS?s Scientific Manuscript database

    The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...

  14. On the repeated measures designs and sample sizes for randomized controlled trials.

    PubMed

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis of covariance type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Impact of depressive symptoms, self-esteem and neuroticism on trajectories of overgeneral autobiographical memory over repeated trials.

    PubMed

    Kashdan, Todd B; Roberts, John E; Carlos, Erica L

    2006-04-01

    The present study examined trajectories of change in the frequency of overgeneral autobiographical memory (OGM) over the course of repeated trials, and tested whether particular dimensions of depressive symptomatology (somatic and cognitive-affective distress), self-esteem, and neuroticism account for individual differences in these trajectories. Given that depression is associated with impairments in effortful processing, we predicted that over repeated trials depression would be associated with increasing OGM. Generalised Linear Mixed Models with Penalised Quasi-Likelihood demonstrated significant linear and quadratic trends in OGM over repeated trials, and somatic distress and self-esteem moderated these slopes. The form of these interactions suggested that somatic distress and low self-esteem primarily contribute to OGM during the second half of the trial sequence. The present findings demonstrate the value of a novel analytical approach to OGM that estimates individual trajectories of change over repeated trials.

  16. The Development of Web-based Graphical User Interface for Unified Modeling Data with Multi (Correlated) Responses

    NASA Astrophysics Data System (ADS)

    Made Tirta, I.; Anggraeni, Dian

    2018-04-01

    Statistical models have been developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated. Therefore, statistical models for independent responses, such as the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed effect models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open-source software R, but they can only be accessed through a command line interface (using scripts). On the other hand, most practical researchers rely heavily on menu-based or Graphical User Interfaces (GUI). Using the Shiny framework, we develop a standard pull-down menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features. It enables users to run and compare various models for repeated measures data (GEE, GLMM, HGLM, GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general, we find that GEE, GLMM and HGLM gave very similar results.

  17. Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties

    PubMed Central

    Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon

    2014-01-01

    Purpose: The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency during anterior-posterior stretching. Method: Three materially linear and three materially nonlinear models were created and stretched up to 10 mm in 1 mm increments. Phonation onset pressure (Pon) and fundamental frequency (F0) at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1 mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Results: Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Conclusions: Nonlinear synthetic models appear to more accurately represent the human vocal folds than linear models, especially with respect to F0 response. PMID:22271874

  18. Frequency response of synthetic vocal fold models with linear and nonlinear material properties.

    PubMed

    Shaw, Stephanie M; Thomson, Scott L; Dromey, Christopher; Smith, Simeon

    2012-10-01

    The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F0) during anterior-posterior stretching. Three materially linear and 3 materially nonlinear models were created and stretched up to 10 mm in 1-mm increments. Phonation onset pressure (Pon) and F0 at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1-mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Nonlinear synthetic models appear to more accurately represent the human vocal folds than do linear models, especially with respect to F0 response.

  19. Multivariate and repeated measures (MRM): A new toolbox for dependent and multimodal group-level neuroimaging data

    PubMed Central

    McFarquhar, Martyn; McKie, Shane; Emsley, Richard; Suckling, John; Elliott, Rebecca; Williams, Stephen

    2016-01-01

    Repeated measurements and multimodal data are common in neuroimaging research. Despite this, conventional approaches to group level analysis ignore these repeated measurements in favour of multiple between-subject models using contrasts of interest. This approach has a number of drawbacks as certain designs and comparisons of interest are either not possible or complex to implement. Unfortunately, even when attempting to analyse group level data within a repeated-measures framework, the methods implemented in popular software packages make potentially unrealistic assumptions about the covariance structure across the brain. In this paper, we describe how this issue can be addressed in a simple and efficient manner using the multivariate form of the familiar general linear model (GLM), as implemented in a new MATLAB toolbox. This multivariate framework is discussed, paying particular attention to methods of inference by permutation. Comparisons with existing approaches and software packages for dependent group-level neuroimaging data are made. We also demonstrate how this method is easily adapted for dependency at the group level when multiple modalities of imaging are collected from the same individuals. Follow-up of these multimodal models using linear discriminant functions (LDA) is also discussed, with applications to future studies wishing to integrate multiple scanning techniques into investigating populations of interest. PMID:26921716
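
    A minimal sketch of the multivariate form of the GLM that the toolbox builds on, using statsmodels' MANOVA for a two-group design with three dependent measures per subject (for example, three sessions or three modalities). The toolbox itself is a MATLAB implementation with permutation inference; this parametric example and its data are only illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(3)
n = 20
group = np.repeat(["patient", "control"], n // 2)
y = rng.normal(0, 1, (n, 3))
y[group == "patient"] += 0.8                      # shift all three measures
df = pd.DataFrame(y, columns=["y1", "y2", "y3"])
df["group"] = group

# Multivariate GLM: three dependent variables modelled jointly by group.
mv = MANOVA.from_formula("y1 + y2 + y3 ~ group", data=df)
print(mv.mv_test())                               # Wilks, Pillai, Hotelling-Lawley, Roy
```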

  20. An approximate generalized linear model with random effects for informative missing data.

    PubMed

    Follmann, D; Wu, M

    1995-03-01

    This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.

  1. Optimizing working space in laparoscopy: CT measurement of the effect of pre-stretching of the abdominal wall in a porcine model.

    PubMed

    Vlot, John; Wijnen, René; Stolker, Robert Jan; Bax, Klaas N

    2014-03-01

    Determinants of working space in minimal access surgery have not been well studied. Using computed tomography (CT) to measure volumes and linear dimensions, we are studying the effect of a number of determinants of CO2 working space in a porcine laparoscopy model. Here we report the effects of pre-stretching of the abdominal wall. Earlier we had noted an increase in CO2 pneumoperitoneum volume at repeat insufflation with an intra-abdominal pressure (IAP) of 5 mmHg after previous stepwise insufflation up to an IAP of 15 mmHg. We reviewed the data of this serendipity group; data of 16 pigs were available. In a new group of eight pigs, we also explored this effect at repeat IAPs of 10 and 15 mmHg. Volumes and linear dimensions of the CO2 pneumoperitoneum were measured on reconstructed CT images and compared between the initial and repeat insufflation runs. Previous stepwise insufflation of the abdomen with CO2 up to 15 mmHg significantly (p < 0.01) increased subsequent working-space volume at a repeat IAP of 5 mmHg by 21 %, 7 % at a repeat IAP of 10 mmHg and 3 % at a repeat IAP of 15 mmHg. The external anteroposterior diameter significantly (p < 0.01) increased by 0.5 cm (14 %) at repeat 5 mmHg. Other linear dimensions showed a much smaller change. There was no statistically significant correlation between the duration of the insufflation run and the volume increase after pre-stretching at all IAP levels. Pre-stretching of the abdominal wall allows for the same surgical-field exposure at lower IAPs, reducing the negative effects of prolonged high-pressure CO2 pneumoperitoneum on the cardiorespiratory system and microcirculation. Pre-stretching has important scientific consequences in studies addressing ways of increasing working space in that its effect may confound the possible effects of other interventions aimed at increasing working space.

  2. Linear modeling of steady-state behavioral dynamics.

    PubMed Central

    Palya, William L; Walter, Donald; Kessel, Robert; Lucke, Robert

    2002-01-01

    The observed steady-state behavioral dynamics supported by unsignaled periods of reinforcement within repeating 2,000-s trials were modeled with a linear transfer function. These experiments employed improved schedule forms and analytical methods to improve the precision of the measured transfer function, compared to previous work. The refinements include both the use of multiple reinforcement periods that improve spectral coverage and averaging of independently determined transfer functions. A linear analysis was then used to predict behavior observed for three different test schedules. The fidelity of these predictions was determined. PMID:11831782
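
    A minimal sketch of estimating a linear transfer function empirically, in the spirit of this analysis: the frequency response is taken as the cross-spectral density between input and output divided by the input power spectral density. The signals below are synthetic, and the "unknown system" is an arbitrary filter chosen for illustration.

```python
import numpy as np
from scipy.signal import csd, welch, lfilter

rng = np.random.default_rng(4)
fs = 1.0                                    # one sample per second
u = rng.normal(0, 1, 4096)                  # input signal (e.g. reinforcement schedule)
y = lfilter([0.5], [1.0, -0.7], u) + rng.normal(0, 0.1, 4096)   # linear system + noise

f, Puy = csd(u, y, fs=fs, nperseg=512)      # cross-spectral density
f, Puu = welch(u, fs=fs, nperseg=512)       # input power spectral density
H = Puy / Puu                               # estimated frequency response
print(np.abs(H[:5]))                        # gain at the lowest frequencies
```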

  3. Regression analysis using dependent Polya trees.

    PubMed

    Schörgendorfer, Angela; Branscum, Adam J

    2013-11-30

    Many commonly used models for linear regression analysis force overly simplistic shape and scale constraints on the residual structure of data. We propose a semiparametric Bayesian model for regression analysis that produces data-driven inference by using a new type of dependent Polya tree prior to model arbitrary residual distributions that are allowed to evolve across increasing levels of an ordinal covariate (e.g., time, in repeated measurement studies). By modeling residual distributions at consecutive covariate levels or time points using separate, but dependent Polya tree priors, distributional information is pooled while allowing for broad pliability to accommodate many types of changing residual distributions. We can use the proposed dependent residual structure in a wide range of regression settings, including fixed-effects and mixed-effects linear and nonlinear models for cross-sectional, prospective, and repeated measurement data. A simulation study illustrates the flexibility of our novel semiparametric regression model to accurately capture evolving residual distributions. In an application to immune development data on immunoglobulin G antibodies in children, our new model outperforms several contemporary semiparametric regression models based on a predictive model selection criterion. Copyright © 2013 John Wiley & Sons, Ltd.

  4. Validation of a 3D CT method for measurement of linear wear of acetabular cups.

    PubMed

    Jedenmalm, Anneli; Nilsson, Fritjof; Noz, Marilyn E; Green, Douglas D; Gedde, Ulf W; Clarke, Ian C; Stark, Andreas; Maguire, Gerald Q; Zeleznik, Michael P; Olivecrona, Henrik

    2011-02-01

    We evaluated the accuracy and repeatability of a 3D method for polyethylene acetabular cup wear measurements using computed tomography (CT). We propose that the method be used for clinical in vivo assessment of wear in acetabular cups. Ultra-high molecular weight polyethylene cups with a titanium mesh molded on the outside were subjected to wear using a hip simulator. Before and after wear, they were (1) imaged with a CT scanner using a phantom model device, (2) measured using a coordinate measurement machine (CMM), and (3) weighed. CMM was used as the reference method for measurement of femoral head penetration into the cup and for comparison with CT, and gravimetric measurements were used as a reference for both CT and CMM. Femoral head penetration and wear vector angle were studied. The head diameters were also measured with both CMM and CT. The repeatability of the method proposed was evaluated with two repeated measurements using different positions of the phantom in the CT scanner. The accuracy of the 3D CT method for evaluation of linear wear was 0.51 mm and the repeatability was 0.39 mm. Repeatability for wear vector angle was 17°. This study of metal-meshed hip-simulated acetabular cups shows that CT has the capacity for reliable measurement of linear wear of acetabular cups at a clinically relevant level of accuracy.

  5. Performance analysis of wideband data and television channels. [space shuttle communications

    NASA Technical Reports Server (NTRS)

    Geist, J. M.

    1975-01-01

    Several aspects of space shuttle communications are discussed, including the return link (shuttle-to-ground) relayed through a satellite repeater (TDRS). The repeater exhibits nonlinear amplification and an amplitude-dependent phase shift. Models were developed for various link configurations, and computer simulation programs based on these models are described. Certain analytical results on system performance were also obtained. For the system parameters assumed, the results indicate approximately 1 dB of degradation relative to a link employing a linear repeater. While this degradation is dependent upon the repeater, filter bandwidths, and modulation parameters used, the programs can accommodate changes to any of these quantities. Thus the programs can be applied to determine the performance with any given set of parameters, or used as an aid in link design.

  6. Multivariate and repeated measures (MRM): A new toolbox for dependent and multimodal group-level neuroimaging data.

    PubMed

    McFarquhar, Martyn; McKie, Shane; Emsley, Richard; Suckling, John; Elliott, Rebecca; Williams, Stephen

    2016-05-15

    Repeated measurements and multimodal data are common in neuroimaging research. Despite this, conventional approaches to group level analysis ignore these repeated measurements in favour of multiple between-subject models using contrasts of interest. This approach has a number of drawbacks as certain designs and comparisons of interest are either not possible or complex to implement. Unfortunately, even when attempting to analyse group level data within a repeated-measures framework, the methods implemented in popular software packages make potentially unrealistic assumptions about the covariance structure across the brain. In this paper, we describe how this issue can be addressed in a simple and efficient manner using the multivariate form of the familiar general linear model (GLM), as implemented in a new MATLAB toolbox. This multivariate framework is discussed, paying particular attention to methods of inference by permutation. Comparisons with existing approaches and software packages for dependent group-level neuroimaging data are made. We also demonstrate how this method is easily adapted for dependency at the group level when multiple modalities of imaging are collected from the same individuals. Follow-up of these multimodal models using linear discriminant functions (LDA) is also discussed, with applications to future studies wishing to integrate multiple scanning techniques into investigating populations of interest. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  7. In vivo measurement of spinal column viscoelasticity--an animal model.

    PubMed

    Hult, E; Ekström, L; Kaigle, A; Holm, S; Hansson, T

    1995-01-01

    The goal of this study was to measure the in vivo viscoelastic response of spinal motion segments loaded in compression using a porcine model. Nine pigs were used in the study. The animals were anaesthetized and, using surgical techniques, four intrapedicular screws were inserted into the vertebrae of the L2-L3 motion segment. A miniaturized servohydraulic exciter capable of compressing the motion segment was mounted on to the screws. In six animals, a loading scheme consisting of 50 N and 100 N of compression, each applied for 10 min, was used. Each loading period was followed by 10 min restitution with zero load. The loading scheme was repeated four times. Three animals were examined for stiffening effects by consecutively repeating eight times 50 N loading for 5 min followed by 5 min restitution with zero load. This loading scheme was repeated using a 100 N load level. The creep-recovery behavior of the motion segment was recorded continuously. Using non-linear regression techniques, the experimental data were used for evaluating the parameters of a three-parameter standard linear solid model. Correlation coefficients of the order of 0.85 or higher were obtained for the three independent parameters of the model. A survey of the data shows that the viscous deformation rate was a function of the load level. Also, repeated loading at 100 N seemed to induce long-lasting changes in the viscoelastic properties of the porcine lumbar motion segment.
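
    A minimal sketch of the regression step described above: fitting a three-parameter standard linear solid creep response, here parameterized as x(t) = x0 + x1(1 - exp(-t/tau)), to a recorded creep curve by nonlinear least squares. The parameterization and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def creep(t, x0, x1, tau):
    """Three-parameter standard linear solid creep response."""
    return x0 + x1 * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 600, 121)                                  # 10 min of loading, seconds
rng = np.random.default_rng(5)
x = creep(t, 0.30, 0.20, 150.0) + rng.normal(0, 0.01, t.size)  # synthetic displacement, mm

params, _ = curve_fit(creep, t, x, p0=[0.2, 0.1, 100.0])
print(dict(zip(["x0", "x1", "tau"], params.round(3))))
```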

  8. POWERLIB: SAS/IML Software for Computing Power in Multivariate Linear Models

    PubMed Central

    Johnson, Jacqueline L.; Muller, Keith E.; Slaughter, James C.; Gurka, Matthew J.; Gribbin, Matthew J.; Simpson, Sean L.

    2014-01-01

    The POWERLIB SAS/IML software provides convenient power calculations for a wide range of multivariate linear models with Gaussian errors. The software includes the Box, Geisser-Greenhouse, Huynh-Feldt, and uncorrected tests in the “univariate” approach to repeated measures (UNIREP), the Hotelling Lawley Trace, Pillai-Bartlett Trace, and Wilks Lambda tests in “multivariate” approach (MULTIREP), as well as a limited but useful range of mixed models. The familiar univariate linear model with Gaussian errors is an important special case. For estimated covariance, the software provides confidence limits for the resulting estimated power. All power and confidence limits values can be output to a SAS dataset, which can be used to easily produce plots and tables for manuscripts. PMID:25400516
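
    A minimal sketch of the kind of calculation POWERLIB automates: approximate power for the UNIREP repeated measures F test with a Geisser-Greenhouse epsilon adjustment, via the noncentral F distribution. Scaling the noncentrality by epsilon is one simple approximation; POWERLIB implements more refined ones, and all inputs below are hypothetical.

```python
from scipy.stats import f, ncf

n, k = 20, 4                      # subjects, repeated measures
eps = 0.75                        # Geisser-Greenhouse epsilon
lam = 12.0                        # noncentrality parameter for the within-subject effect
alpha = 0.05

df1 = eps * (k - 1)               # adjusted numerator df
df2 = eps * (k - 1) * (n - 1)     # adjusted denominator df
f_crit = f.ppf(1 - alpha, df1, df2)
power = ncf.sf(f_crit, df1, df2, eps * lam)   # simple epsilon-scaled approximation
print(f"approximate power = {power:.2f}")
```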

  9. Validation of a 3D CT method for measurement of linear wear of acetabular cups

    PubMed Central

    2011-01-01

    Background: We evaluated the accuracy and repeatability of a 3D method for polyethylene acetabular cup wear measurements using computed tomography (CT). We propose that the method be used for clinical in vivo assessment of wear in acetabular cups. Material and methods: Ultra-high molecular weight polyethylene cups with a titanium mesh molded on the outside were subjected to wear using a hip simulator. Before and after wear, they were (1) imaged with a CT scanner using a phantom model device, (2) measured using a coordinate measurement machine (CMM), and (3) weighed. CMM was used as the reference method for measurement of femoral head penetration into the cup and for comparison with CT, and gravimetric measurements were used as a reference for both CT and CMM. Femoral head penetration and wear vector angle were studied. The head diameters were also measured with both CMM and CT. The repeatability of the method proposed was evaluated with two repeated measurements using different positions of the phantom in the CT scanner. Results: The accuracy of the 3D CT method for evaluation of linear wear was 0.51 mm and the repeatability was 0.39 mm. Repeatability for wear vector angle was 17°. Interpretation: This study of metal-meshed hip-simulated acetabular cups shows that CT has the capacity for reliable measurement of linear wear of acetabular cups at a clinically relevant level of accuracy. PMID:21281259

  10. On Latent Change Model Choice in Longitudinal Studies

    ERIC Educational Resources Information Center

    Raykov, Tenko; Zajacova, Anna

    2012-01-01

    An interval estimation procedure for proportion of explained observed variance in latent curve analysis is discussed, which can be used as an aid in the process of choosing between linear and nonlinear models. The method allows obtaining confidence intervals for the R[squared] indexes associated with repeatedly followed measures in longitudinal…

  11. Modelling female fertility traits in beef cattle using linear and non-linear models.

    PubMed

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

    Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somehow incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are kept for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models on three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed better fit than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² < 0.08 and r < 0.13 for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.

  12. Alternative Models for Small Samples in Psychological Research: Applying Linear Mixed Effects Models and Generalized Estimating Equations to Repeated Measures Data

    ERIC Educational Resources Information Center

    Muth, Chelsea; Bales, Karen L.; Hinde, Katie; Maninger, Nicole; Mendoza, Sally P.; Ferrer, Emilio

    2016-01-01

    Unavoidable sample size issues beset psychological research that involves scarce populations or costly laboratory procedures. When incorporating longitudinal designs these samples are further reduced by traditional modeling techniques, which perform listwise deletion for any instance of missing data. Moreover, these techniques are limited in their…

  13. A D-vine copula-based model for repeated measurements extending linear mixed models with homogeneous correlation structure.

    PubMed

    Killiches, Matthias; Czado, Claudia

    2018-03-22

    We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum likelihood, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.

  14. A 45-Second Self-Test for Cardiorespiratory Fitness: Heart Rate-Based Estimation in Healthy Individuals

    PubMed Central

    Bonato, Matteo; Papini, Gabriele; Bosio, Andrea; Mohammed, Rahil A.; Bonomi, Alberto G.; Moore, Jonathan P.; Merati, Giampiero; La Torre, Antonio; Kubis, Hans-Peter

    2016-01-01

    Cardio-respiratory fitness (CRF) is a widespread essential indicator in Sports Science as well as in Sports Medicine. This study aimed to develop and validate a prediction model for CRF based on a 45-second self-test, which can be conducted anywhere. A criterion validity, test-retest study was set up to accomplish our objectives. Data from 81 healthy volunteers (age: 29 ± 8 years, BMI: 24.0 ± 2.9), 18 of whom were female, were used to validate this test against the gold standard. Nineteen volunteers repeated this test twice in order to evaluate its repeatability. CRF estimation models were developed using heart rate (HR) features extracted from the resting, exercise, and recovery phases. The most predictive HR feature was the intercept of the linear equation fitting the HR values during the recovery phase, normalized for height² (r² = 0.30). The Ruffier-Dickson Index (RDI), which was originally developed for this squat test, showed a significant negative correlation with CRF (r = -0.40), but explained only 15% of the variability in CRF. A multivariate model based on RDI, sex, age and height increased the explained variability up to 53%, with a cross-validation (CV) error of 0.532 L·min⁻¹ and substantial repeatability (ICC = 0.91). The best predictive multivariate model made use of the linear intercept of HR at the beginning of the recovery normalized for height² and age²; this had an adjusted r² = 0.59, a CV error of 0.495 L·min⁻¹ and substantial repeatability (ICC = 0.93). It also had higher agreement in classifying CRF levels (κ = 0.42) than the RDI-based model (κ = 0.29). In conclusion, this simple 45-s self-test can be used to estimate and classify CRF in healthy individuals with moderate accuracy and large repeatability when HR recovery features are included. PMID:27959935

  15. A 45-Second Self-Test for Cardiorespiratory Fitness: Heart Rate-Based Estimation in Healthy Individuals.

    PubMed

    Sartor, Francesco; Bonato, Matteo; Papini, Gabriele; Bosio, Andrea; Mohammed, Rahil A; Bonomi, Alberto G; Moore, Jonathan P; Merati, Giampiero; La Torre, Antonio; Kubis, Hans-Peter

    2016-01-01

    Cardio-respiratory fitness (CRF) is a widespread essential indicator in Sports Science as well as in Sports Medicine. This study aimed to develop and validate a prediction model for CRF based on a 45-second self-test, which can be conducted anywhere. A criterion validity, test-retest study was set up to accomplish our objectives. Data from 81 healthy volunteers (age: 29 ± 8 years, BMI: 24.0 ± 2.9), 18 of whom were female, were used to validate this test against the gold standard. Nineteen volunteers repeated this test twice in order to evaluate its repeatability. CRF estimation models were developed using heart rate (HR) features extracted from the resting, exercise, and recovery phases. The most predictive HR feature was the intercept of the linear equation fitting the HR values during the recovery phase, normalized for height² (r² = 0.30). The Ruffier-Dickson Index (RDI), which was originally developed for this squat test, showed a significant negative correlation with CRF (r = -0.40), but explained only 15% of the variability in CRF. A multivariate model based on RDI, sex, age and height increased the explained variability up to 53%, with a cross-validation (CV) error of 0.532 L·min⁻¹ and substantial repeatability (ICC = 0.91). The best predictive multivariate model made use of the linear intercept of HR at the beginning of the recovery normalized for height² and age²; this had an adjusted r² = 0.59, a CV error of 0.495 L·min⁻¹ and substantial repeatability (ICC = 0.93). It also had higher agreement in classifying CRF levels (κ = 0.42) than the RDI-based model (κ = 0.29). In conclusion, this simple 45-s self-test can be used to estimate and classify CRF in healthy individuals with moderate accuracy and large repeatability when HR recovery features are included.
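
    A minimal sketch of the most predictive feature reported above: fit a straight line to heart rate during the recovery phase and take its intercept, normalized by height squared. All values below are hypothetical.

```python
import numpy as np

recovery_time = np.arange(0, 30, 5)                      # s after exercise stops
recovery_hr = np.array([165, 158, 150, 144, 139, 135])   # beats per minute
height_m = 1.78

slope, intercept = np.polyfit(recovery_time, recovery_hr, deg=1)
feature = intercept / height_m ** 2                      # HR intercept / height^2
print(f"recovery HR intercept = {intercept:.1f} bpm, feature = {feature:.1f}")
```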

  16. The effectiveness of repeat lumbar transforaminal epidural steroid injections.

    PubMed

    Murthy, Naveen S; Geske, Jennifer R; Shelerud, Randy A; Wald, John T; Diehn, Felix E; Thielen, Kent R; Kaufmann, Timothy J; Morris, Jonathan M; Lehman, Vance T; Amrami, Kimberly K; Carter, Rickey E; Maus, Timothy P

    2014-10-01

    The aim of this study was to determine 1) if repeat lumbar transforaminal epidural steroid injections (TFESIs) resulted in recovery of pain relief, which has waned since an index injection, and 2) if cumulative benefit could be achieved by repeat injections within 3 months of the index injection. Retrospective observational study with statistical modeling of the response to repeat TFESI. Academic radiology practice. Two thousand eighty-seven single-level TFESIs were performed for radicular pain on 933 subjects. Subjects received repeat TFESIs >2 weeks and <1 year from the index injection. Hierarchical linear modeling was performed to evaluate changes in continuous and categorical pain relief outcomes after repeat TFESI. Subgroup analyses were performed on patients with <3 months duration of pain (acute pain), patients receiving repeat injections within 3 months (clustered injections), and in patients with both acute pain and clustered injections. Repeat TFESIs achieved pain relief in both continuous and categorical outcomes. Relative to the index injection, there was a minimal but statistically significant decrease in pain relief in modeled continuous outcome measures with subsequent injections. Acute pain patients recovered all prior benefit with a statistically significant cumulative benefit. Patients receiving clustered injections achieved statistically significant cumulative benefit, of greater magnitude in acute pain patients. Repeat TFESI may be performed for recurrence of radicular pain with the expectation of recovery of most or all previously achieved benefit; acute pain patients will likely recover all prior benefit. Repeat TFESIs within 3 months of the index injection can provide cumulative benefit. Wiley Periodicals, Inc.

  17. Linear spline multilevel models for summarising childhood growth trajectories: A guide to their application using examples from five birth cohorts.

    PubMed

    Howe, Laura D; Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S; Barros, Aluísio Jd; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A

    2016-10-01

    Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. © The Author(s) 2013.

  18. Linear spline multilevel models for summarising childhood growth trajectories: A guide to their application using examples from five birth cohorts

    PubMed Central

    Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S.; Barros, Aluísio JD; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A

    2013-01-01

    Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. PMID:24108269
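
    A minimal Python sketch (the paper itself provides Stata syntax) of a linear spline multilevel growth model: a truncated-line basis with knots at hypothetical ages of 3 and 12 months and a child-level random intercept. The full approach described above would also place random effects on the spline terms; data here are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_children = 60
ages = np.array([0, 1.5, 3, 6, 9, 12, 18, 24])               # months
child = np.repeat(np.arange(n_children), ages.size)
age = np.tile(ages, n_children)
child_int = rng.normal(0, 1.5, n_children)[child]
weight = (3.5 + 0.9 * age
          - 0.4 * np.clip(age - 3, 0, None)
          - 0.3 * np.clip(age - 12, 0, None)
          + child_int + rng.normal(0, 0.4, age.size))

df = pd.DataFrame({"child": child, "age": age, "weight": weight})
df["s3"] = np.clip(df["age"] - 3, 0, None)     # change in slope after 3 months
df["s12"] = np.clip(df["age"] - 12, 0, None)   # change in slope after 12 months

m = smf.mixedlm("weight ~ age + s3 + s12", df, groups=df["child"]).fit()
print(m.summary())
```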

  19. Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Cudeck, Robert

    2009-01-01

    A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…

  20. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
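
    A minimal sketch of the procedure described above for the decay model y = a·exp(-b·t): obtain nominal estimates from a linear fit to log(y), linearize the model by a first-order Taylor expansion around the current estimates, solve for a correction, and repeat until the correction is negligible. The data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 5, 30)
y = 5.0 * np.exp(-0.6 * t) + rng.normal(0, 0.03, t.size)   # synthetic decay data

# Initial nominal estimates from a linear fit to log(y).
slope, log_a = np.polyfit(t, np.log(y), 1)
a, b = np.exp(log_a), -slope

for _ in range(50):
    pred = a * np.exp(-b * t)
    resid = y - pred
    # Jacobian of the model with respect to (a, b) at the current estimates.
    J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])
    delta, *_ = np.linalg.lstsq(J, resid, rcond=None)       # correction
    a, b = a + delta[0], b + delta[1]
    if np.linalg.norm(delta) < 1e-10:                       # convergence criterion
        break
print(f"a = {a:.3f}, b = {b:.3f}")
```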

  1. Algorithms and Complexity Results for Genome Mapping Problems.

    PubMed

    Rajaraman, Ashok; Zanetti, Joao Paulo Pereira; Manuch, Jan; Chauve, Cedric

    2017-01-01

    Genome mapping algorithms aim at computing an ordering of a set of genomic markers based on local ordering information such as adjacencies and intervals of markers. In most genome mapping models, markers are assumed to occur uniquely in the resulting map. We introduce algorithmic questions that consider repeats, i.e., markers that can have several occurrences in the resulting map. We show that, provided with an upper bound on the copy number of repeated markers and with intervals that span full repeat copies, called repeat spanning intervals, the problem of deciding if a set of adjacencies and repeat spanning intervals admits a genome representation is tractable if the target genome can contain linear and/or circular chromosomal fragments. We also show that extracting a maximum cardinality or weight subset of repeat spanning intervals given a set of adjacencies that admits a genome realization is NP-hard but fixed-parameter tractable in the maximum copy number and the number of adjacent repeats, and tractable if intervals contain a single repeated marker.

  2. Modeling workplace bullying using catastrophe theory.

    PubMed

    Escartin, J; Ceja, L; Navarro, J; Zapf, D

    2013-10-01

    Workplace bullying is defined as negative behaviors directed at organizational members or their work context that occur regularly and repeatedly over a period of time. Employees' perceptions of psychosocial safety climate, workplace bullying victimization, and workplace bullying perpetration were assessed within a sample of nearly 5,000 workers. Linear and nonlinear approaches were applied in order to model both continuous and sudden changes in workplace bullying. More specifically, the present study examines whether a nonlinear dynamical systems model (i.e., a cusp catastrophe model) is superior to the linear combination of variables for predicting the effect of psychosocial safety climate and workplace bullying victimization on workplace bullying perpetration. According to the AICc and BIC indices, the linear regression model fits the data better than the cusp catastrophe model. The study concludes that some phenomena, especially unhealthy behaviors at work (like workplace bullying), may be better studied using linear approaches as opposed to nonlinear dynamical systems models. This can be explained through the healthy variability hypothesis, which argues that positive organizational behavior is likely to present nonlinear behavior, while a decrease in such variability may indicate the occurrence of negative behaviors at work.
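
    A minimal sketch of the linear side of the comparison described above: an ordinary linear regression of perpetration on climate and victimization, reporting AIC, AICc and BIC. Variable names and data are hypothetical, the parameter count follows statsmodels' AIC convention, and fitting the cusp catastrophe model itself would require specialized software (for example the R "cusp" package).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 200
df = pd.DataFrame({
    "climate":       rng.normal(0, 1, n),
    "victimization": rng.normal(0, 1, n),
})
df["perpetration"] = (0.5 * df["victimization"] - 0.2 * df["climate"]
                      + rng.normal(0, 1, n))

ols = smf.ols("perpetration ~ climate + victimization", data=df).fit()
k = ols.df_model + 1                            # parameters counted as in statsmodels' AIC
aicc = ols.aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction
print(f"AIC = {ols.aic:.1f}, AICc = {aicc:.1f}, BIC = {ols.bic:.1f}")
```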

  3. Repeatability of circadian behavioural variation revealed in free-ranging marine fish.

    PubMed

    Alós, Josep; Martorell-Barceló, Martina; Campos-Candela, Andrea

    2017-02-01

Repeatable between-individual differences in the behavioural manifestation of underlying circadian rhythms determine chronotypes in humans and terrestrial animals. Here, we have repeatedly measured three circadian behaviours, awakening time, rest onset and rest duration, in the free-ranging pearly razorfish, Xyrichtys novacula, facilitated by acoustic tracking technology and hidden Markov models. In addition, daily travelled distance, a standard measure of daily activity as a fish personality trait, was repeatedly assessed using a State-Space Model. We have decomposed the variance of these four behavioural traits using linear mixed models and estimated repeatability scores (R) while controlling for environmental co-variates: year of experimentation, spatial location of the activity, fish size and gender and their interactions. Between- and within-individual variance decomposition revealed significant Rs in all traits, suggesting high predictability of individual circadian behavioural variation and the existence of chronotypes. The decomposition of the correlations among chronotypes and the personality trait studied here into between- and within-individual correlations did not reveal any significant correlation at the between-individual level. We therefore propose circadian behavioural variation as an independent axis of the fish personality, and the study of chronotypes and their consequences as a novel dimension in understanding within-species fish behavioural diversity.
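
    A minimal sketch of the variance-decomposition idea, assuming a simple random-intercept mixed model and hypothetical data (the study's covariates such as year, location, size, and gender are omitted); repeatability is estimated as the between-individual variance divided by the total variance.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical repeated measures: one circadian trait measured on many days per fish.
rng = np.random.default_rng(1)
n_fish, n_days = 30, 20
fish = np.repeat(np.arange(n_fish), n_days)
individual = rng.normal(0, 1.0, n_fish)[fish]           # between-individual effect
y = 6.0 + individual + rng.normal(0, 0.7, fish.size)    # within-individual noise
df = pd.DataFrame({"fish": fish, "awakening": y})

# Random-intercept linear mixed model (study covariates omitted in this sketch).
m = smf.mixedlm("awakening ~ 1", df, groups=df["fish"]).fit()
var_between = float(m.cov_re.iloc[0, 0])   # between-individual variance
var_within = m.scale                       # residual (within-individual) variance
R = var_between / (var_between + var_within)
print(f"repeatability R = {R:.2f}")
```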

  4. Passenger comfort during terminal-area flight maneuvers. M.S. Thesis.

    NASA Technical Reports Server (NTRS)

    Schoonover, W. E., Jr.

    1976-01-01

    A series of flight experiments was conducted to obtain passenger subjective responses to closely controlled and repeatable flight maneuvers. In 8 test flights, reactions were obtained from 30 passenger subjects to a wide range of terminal-area maneuvers, including descents, turns, decelerations, and combinations thereof. Analysis of the passenger rating variance indicated that the objective of a repeatable flight passenger environment was achieved. Multiple linear regression models developed from the test data were used to define maneuver motion boundaries for specified degrees of passenger acceptance.

  5. Linear Mixed Models: GUM and Beyond

    NASA Astrophysics Data System (ADS)

    Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens

    2014-04-01

In Annex H.5, the Guide to the Expression of Uncertainty in Measurement (GUM) [1] recognizes the necessity to analyze certain types of experiments by applying random effects ANOVA models. These belong to the more general family of linear mixed models that we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as a means of gaining more insight into the measurement process. We also comment on computational issues and, to make the explanations less abstract, we illustrate all the concepts with the help of a measurement campaign conducted to challenge the uncertainty budget in the calibration of accelerometers.
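
    A minimal sketch of the kind of random effects ANOVA analysis the GUM's Annex H.5 points to, on a hypothetical balanced day-by-repeat calibration design: between-day and within-day variance components are recovered from the ANOVA mean squares, and the standard uncertainty of the overall mean follows from the between-day mean square. All numbers are illustrative.

```python
import numpy as np

# Hypothetical calibration data: J days (random effect), K repeats per day.
rng = np.random.default_rng(2)
J, K = 10, 5
day_effect = rng.normal(0, 0.03, J)                      # between-day variability
y = 9.80 + day_effect[:, None] + rng.normal(0, 0.05, (J, K))

day_means = y.mean(axis=1)
grand_mean = y.mean()
ms_between = K * np.sum((day_means - grand_mean) ** 2) / (J - 1)
ms_within = np.sum((y - day_means[:, None]) ** 2) / (J * (K - 1))

var_within = ms_within                                   # repeatability component
var_between = max((ms_between - ms_within) / K, 0.0)     # day-to-day component

# Standard uncertainty of the overall mean under the random effects model.
u_mean = np.sqrt(ms_between / (J * K))
print(var_between, var_within, grand_mean, u_mean)
```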

  6. FRB 121102: A Repeatedly Combed Neutron Star by a Nearby Low-luminosity Accreting Supermassive Black Hole

    NASA Astrophysics Data System (ADS)

    Zhang, Bing

    2018-02-01

    The origin of fast radio bursts (FRBs) remains mysterious. Recently, the only repeating FRB source, FRB 121102, was reported to possess an extremely large and variable rotation measure (RM). The inferred magnetic field strength in the burst environment is comparable to that in the vicinity of the supermassive black hole Sagittarius A* of our Galaxy. Here, we show that all of the observational properties of FRB 121102 (including the high RM and its evolution, the high linear polarization degree, an invariant polarization angle across each burst and other properties previously known) can be interpreted within the “cosmic comb” model, which invokes a neutron star with typical spin and magnetic field parameters whose magnetosphere is repeatedly and marginally combed by a variable outflow from a nearby low-luminosity accreting supermassive black hole in the host galaxy. We propose three falsifiable predictions (periodic “on/off” states, and periodic/correlated variation of RM and polarization angle) of the model and discuss other FRBs within the context of the cosmic comb model as well as the challenges encountered by other repeating FRB models in light of the new observations.

  7. Study on stress-strain response of multi-phase TRIP steel under cyclic loading

    NASA Astrophysics Data System (ADS)

    Dan, W. J.; Hu, Z. G.; Zhang, W. G.; Li, S. H.; Lin, Z. Q.

    2013-12-01

The stress-strain response of multi-phase TRIP590 sheet steel is studied under cyclic loading conditions at room temperature, based on a cyclic phase transformation model and a multi-phase mixed kinematic hardening model. The cyclic martensite transformation model is proposed based on shear-band intersection, where the repeat number, strain amplitude and cyclic frequency are used to control the phase transformation process. The multi-phase mixed kinematic hardening model is developed based on a per-phase non-linear kinematic hardening rule. The parameters of the transformation model are identified from the relationship between the austenite volume fraction and the repeat number. The parameters of the kinematic hardening model are determined from the experimental hysteresis loops under different strain amplitude conditions. The hysteresis loop and stress amplitude responses are evaluated against tension-compression data.
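
    The paper's multi-phase model is not reproduced here; as a minimal single-phase sketch of a generic non-linear (Armstrong-Frederick-type) kinematic hardening rule, the snippet below integrates the backstress evolution over a prescribed cyclic plastic strain path. The material parameters and strain history are hypothetical.

```python
import numpy as np

def backstress_history(eps_p, C=20e3, gamma=150.0):
    """Uniaxial Armstrong-Frederick backstress evolution
    d_alpha = C * d_eps_p - gamma * alpha * |d_eps_p|,
    integrated explicitly over a prescribed plastic strain path."""
    alpha = np.zeros_like(eps_p)
    for i in range(1, eps_p.size):
        d = eps_p[i] - eps_p[i - 1]
        alpha[i] = alpha[i - 1] + C * d - gamma * alpha[i - 1] * abs(d)
    return alpha

# Hypothetical symmetric cyclic plastic strain path (three cycles, 1% amplitude).
cycles = 3
eps_p = 0.01 * np.sin(np.linspace(0, 2 * np.pi * cycles, 600))
alpha = backstress_history(eps_p)
# In a mixed hardening model the flow stress would combine a yield term with this
# backstress, producing the closed hysteresis loops compared against experiment.
print(alpha.min(), alpha.max())
```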

  8. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    PubMed

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network (ANN) models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists, and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias, and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers and may be viable alternative modeling techniques for EE prediction for hip- or thigh-worn accelerometers.
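
    A minimal sketch of the model-comparison idea only, not the study's features or data: fit a linear regression and a small neural network to hypothetical accelerometer-derived features and compare correlation and RMSE on held-out data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical accelerometer features (e.g., per-window summary statistics) and METs.
rng = np.random.default_rng(3)
X = rng.normal(size=(400, 6))
mets = 1.5 + X[:, 0] + 0.5 * np.tanh(X[:, 1]) + rng.normal(0, 0.8, 400)  # mildly non-linear

X_tr, X_te, y_tr, y_te = train_test_split(X, mets, random_state=0)

lin = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

for name, model in [("linear", lin), ("ANN", ann)]:
    pred = model.predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    r = np.corrcoef(y_te, pred)[0, 1]
    print(f"{name}: r = {r:.2f}, RMSE = {rmse:.2f} METs")
```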

  9. An Evaluation of Nutrition Education Program for Low-Income Youth

    ERIC Educational Resources Information Center

    Kemirembe, Olive M. K.; Radhakrishna, Rama B.; Gurgevich, Elise; Yoder, Edgar P.; Ingram, Patreese D.

    2011-01-01

A quasi-experimental design consisting of pretest, posttest, and delayed posttest with a comparison control group was used. Nutrition knowledge and behaviors were measured at pretest (time 1), posttest (time 2), and delayed posttest (time 3). General Linear Model (GLM) repeated measures ANCOVA results showed that youth who received nutrition education…

  10. A Sub-Millimetric 3-DOF Force Sensing Instrument with Integrated Fiber Bragg Grating for Retinal Microsurgery

    PubMed Central

    He, Xingchi; Handa, James; Gehlbach, Peter; Taylor, Russell; Iordachita, Iulian

    2013-01-01

Vitreoretinal surgery requires very fine motor control to perform precise manipulation of the delicate tissue in the interior of the eye. Besides physiological hand tremor, fatigue, poor kinesthetic feedback, and patient movement, the absence of force sensing is one of the main technical challenges. Previous two degrees of freedom (DOF) force sensing instruments have demonstrated robust force measuring performance. The main design challenge is to incorporate high sensitivity axial force sensing. This paper reports the development of a sub-millimetric 3-DOF force sensing pick instrument based on fiber Bragg grating (FBG) sensors. The configuration of the four FBG sensors is arranged to maximize the decoupling between axial and transverse force sensing. A super-elastic nitinol flexure is designed to achieve high axial force sensitivity. An automated calibration system was developed for repeatability testing, calibration, and validation. Experimental results demonstrate an FBG sensor repeatability of 1.3 pm. The linear model for calculating the transverse forces provides an accurate global estimate. While the linear model for axial force is only locally accurate within a conical region with a 30° vertex angle, a second-order polynomial model can provide a useful global estimate for axial force. Combining the linear model for transverse forces and the nonlinear model for axial force, the 3-DOF force sensing instrument can provide sub-millinewton resolution for axial force and a quarter millinewton for transverse forces. Validation with random samples shows that the force sensor can provide consistent and accurate measurement of three-dimensional forces. PMID:24108455
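
    A minimal sketch of the two-stage calibration idea on hypothetical wavelength-shift data: a linear least-squares map from the four FBG readings to the transverse forces, and a second-order polynomial in the common-mode signal for the axial force. The sensitivity values and noise level are made up for illustration.

```python
import numpy as np

# Hypothetical calibration set: wavelength shifts (pm) of 4 FBGs vs applied forces (mN).
rng = np.random.default_rng(4)
n = 200
F = np.column_stack([rng.uniform(-5, 5, n),    # Fx (transverse)
                     rng.uniform(-5, 5, n),    # Fy (transverse)
                     rng.uniform(0, 10, n)])   # Fz (axial)
S = np.column_stack([F[:, 0] + 0.1 * F[:, 2],
                     -F[:, 0] + 0.1 * F[:, 2],
                     F[:, 1] + 0.1 * F[:, 2],
                     -F[:, 1] + 0.1 * F[:, 2]]) + rng.normal(0, 0.05, (n, 4))

# Linear least-squares model for the transverse forces: F_xy ~ S @ K.
K, *_ = np.linalg.lstsq(S, F[:, :2], rcond=None)

# Second-order polynomial model for the axial force from the common-mode signal
# (the quadratic term simply absorbs any curvature in the axial response).
common_mode = S.sum(axis=1)
A = np.column_stack([np.ones(n), common_mode, common_mode ** 2])
c, *_ = np.linalg.lstsq(A, F[:, 2], rcond=None)

print("transverse residual SD:", np.std(F[:, :2] - S @ K))
print("axial residual SD:     ", np.std(F[:, 2] - A @ c))
```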

  11. Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.

    PubMed

    Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K

    2011-01-01

We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Diffusion parameter uncertainty estimation from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residual rescaling and cannot be utilized directly for body diffusion parameter uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the unscented transform to compute the residual rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the body diffusion parameter uncertainty. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
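
    The unscented residual-rescaling step specific to this method is not reproduced here; the sketch below only illustrates the general wild-bootstrap idea on a simple mono-exponential diffusion signal model, flipping residual signs with Rademacher weights and refitting to obtain an uncertainty estimate for the ADC. All values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def signal(b, S0, ADC):
    return S0 * np.exp(-b * ADC)

# Hypothetical DW-MRI signal for one voxel at several b-values.
b = np.array([0, 50, 100, 200, 400, 600, 800], float)
rng = np.random.default_rng(5)
y = signal(b, 1000.0, 1.5e-3) + rng.normal(0, 10.0, b.size)

p_hat, _ = curve_fit(signal, b, y, p0=[y[0], 1e-3])
resid = y - signal(b, *p_hat)

# Wild bootstrap: flip residual signs (Rademacher weights) and refit repeatedly.
boot = []
for _ in range(500):
    w = rng.choice([-1.0, 1.0], size=b.size)
    y_star = signal(b, *p_hat) + w * resid
    p_star, _ = curve_fit(signal, b, y_star, p0=p_hat)
    boot.append(p_star)
boot = np.array(boot)
print("ADC estimate:", p_hat[1], "bootstrap SD:", boot[:, 1].std())
```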

  12. Genetic parameters for first lactation test-day milk flow in Holstein cows.

    PubMed

    Laureano, M M M; Bignardi, A B; El Faro, L; Cardoso, V L; Albuquerque, L G

    2012-01-01

    Genetic parameters for test-day milk flow (TDMF) of 2175 first lactations of Holstein cows were estimated using multiple-trait and repeatability models. The models included the direct additive genetic effect as a random effect and contemporary group (defined as the year and month of test) and age of cow at calving (linear and quadratic effect) as fixed effects. For the repeatability model, in addition to the effects cited, the permanent environmental effect of the animal was also included as a random effect. Variance components were estimated using the restricted maximum likelihood method in single- and multiple-trait and repeatability analyses. The heritability estimates for TDMF ranged from 0.23 (TDMF 6) to 0.32 (TDMF 2 and TDMF 4) in single-trait analysis and from 0.28 (TDMF 7 and TDMF 10) to 0.37 (TDMF 4) in multiple-trait analysis. In general, higher heritabilities were observed at the beginning of lactation until the fourth month. Heritability estimated with the repeatability model was 0.27 and the coefficient of repeatability for first lactation TDMF was 0.66. The genetic correlations were positive and ranged from 0.72 (TDMF 1 and 10) to 0.97 (TDMF 4 and 5). The results indicate that milk flow should respond satisfactorily to selection, promoting rapid genetic gains because the estimated heritabilities were moderate to high. Higher genetic gains might be obtained if selection was performed in the TDMF 4. Both the repeatability model and the multiple-trait model are adequate for the genetic evaluation of animals in terms of milk flow, but the latter provides more accurate estimates of breeding values.

  13. Comparing a single case to a control group - Applying linear mixed effects models to repeated measures data.

    PubMed

    Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus

    2015-10-01

In neuropsychological research, single cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension to this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested based on Monte Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate when using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
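
    A minimal numerical check (with hypothetical scores) of the equivalence stated above: Crawford and Howell's modified t-test gives the same statistic as the t-test for a dummy-coded case indicator in an ordinary linear regression.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
controls = rng.normal(50, 10, 20)     # small control sample (hypothetical scores)
case = 25.0                           # single-case score

# Crawford & Howell modified t-test.
n = controls.size
t_mod = (case - controls.mean()) / (controls.std(ddof=1) * np.sqrt(1 + 1 / n))

# The same test as a dummy-coded predictor in an ordinary regression.
y = np.append(controls, case)
dummy = np.append(np.zeros(n), 1.0)
fit = sm.OLS(y, sm.add_constant(dummy)).fit()

print(t_mod, fit.tvalues[1])          # identical up to rounding, df = n - 1
```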

  14. Right-Sizing Statistical Models for Longitudinal Data

    PubMed Central

    Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.

    2015-01-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507

  15. Right-sizing statistical models for longitudinal data.

    PubMed

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved).

  16. CAG repeat expansion in Huntington disease determines age at onset in a fully dominant fashion

    PubMed Central

    Lee, J.-M.; Ramos, E.M.; Lee, J.-H.; Gillis, T.; Mysore, J.S.; Hayden, M.R.; Warby, S.C.; Morrison, P.; Nance, M.; Ross, C.A.; Margolis, R.L.; Squitieri, F.; Orobello, S.; Di Donato, S.; Gomez-Tortosa, E.; Ayuso, C.; Suchowersky, O.; Trent, R.J.A.; McCusker, E.; Novelletto, A.; Frontali, M.; Jones, R.; Ashizawa, T.; Frank, S.; Saint-Hilaire, M.H.; Hersch, S.M.; Rosas, H.D.; Lucente, D.; Harrison, M.B.; Zanko, A.; Abramson, R.K.; Marder, K.; Sequeiros, J.; Paulsen, J.S.; Landwehrmeyer, G.B.; Myers, R.H.; MacDonald, M.E.; Durr, Alexandra; Rosenblatt, Adam; Frati, Luigi; Perlman, Susan; Conneally, Patrick M.; Klimek, Mary Lou; Diggin, Melissa; Hadzi, Tiffany; Duckett, Ayana; Ahmed, Anwar; Allen, Paul; Ames, David; Anderson, Christine; Anderson, Karla; Anderson, Karen; Andrews, Thomasin; Ashburner, John; Axelson, Eric; Aylward, Elizabeth; Barker, Roger A.; Barth, Katrin; Barton, Stacey; Baynes, Kathleen; Bea, Alexandra; Beall, Erik; Beg, Mirza Faisal; Beglinger, Leigh J.; Biglan, Kevin; Bjork, Kristine; Blanchard, Steve; Bockholt, Jeremy; Bommu, Sudharshan Reddy; Brossman, Bradley; Burrows, Maggie; Calhoun, Vince; Carlozzi, Noelle; Chesire, Amy; Chiu, Edmond; Chua, Phyllis; Connell, R.J.; Connor, Carmela; Corey-Bloom, Jody; Craufurd, David; Cross, Stephen; Cysique, Lucette; Santos, Rachelle Dar; Davis, Jennifer; Decolongon, Joji; DiPietro, Anna; Doucette, Nicholas; Downing, Nancy; Dudler, Ann; Dunn, Steve; Ecker, Daniel; Epping, Eric A.; Erickson, Diane; Erwin, Cheryl; Evans, Ken; Factor, Stewart A.; Farias, Sarah; Fatas, Marta; Fiedorowicz, Jess; Fullam, Ruth; Furtado, Sarah; Garde, Monica Bascunana; Gehl, Carissa; Geschwind, Michael D.; Goh, Anita; Gooblar, Jon; Goodman, Anna; Griffith, Jane; Groves, Mark; Guttman, Mark; Hamilton, Joanne; Harrington, Deborah; Harris, Greg; Heaton, Robert K.; Helmer, Karl; Henneberry, Machelle; Hershey, Tamara; Herwig, Kelly; Howard, Elizabeth; Hunter, Christine; Jankovic, Joseph; Johnson, Hans; Johnson, Arik; Jones, Kathy; Juhl, Andrew; Kim, Eun Young; Kimble, Mycah; King, Pamela; Klimek, Mary Lou; Klöppel, Stefan; Koenig, Katherine; Komiti, Angela; Kumar, Rajeev; Langbehn, Douglas; Leavitt, Blair; Leserman, Anne; Lim, Kelvin; Lipe, Hillary; Lowe, Mark; Magnotta, Vincent A.; Mallonee, William M.; Mans, Nicole; Marietta, Jacquie; Marshall, Frederick; Martin, Wayne; Mason, Sarah; Matheson, Kirsty; Matson, Wayne; Mazzoni, Pietro; McDowell, William; Miedzybrodzka, Zosia; Miller, Michael; Mills, James; Miracle, Dawn; Montross, Kelsey; Moore, David; Mori, Sasumu; Moser, David J.; Moskowitz, Carol; Newman, Emily; Nopoulos, Peg; Novak, Marianne; O'Rourke, Justin; Oakes, David; Ondo, William; Orth, Michael; Panegyres, Peter; Pease, Karen; Perlman, Susan; Perlmutter, Joel; Peterson, Asa; Phillips, Michael; Pierson, Ron; Potkin, Steve; Preston, Joy; Quaid, Kimberly; Radtke, Dawn; Rae, Daniela; Rao, Stephen; Raymond, Lynn; Reading, Sarah; Ready, Rebecca; Reece, Christine; Reilmann, Ralf; Reynolds, Norm; Richardson, Kylie; Rickards, Hugh; Ro, Eunyoe; Robinson, Robert; Rodnitzky, Robert; Rogers, Ben; Rosenblatt, Adam; Rosser, Elisabeth; Rosser, Anne; Price, Kathy; Price, Kathy; Ryan, Pat; Salmon, David; Samii, Ali; Schumacher, Jamy; Schumacher, Jessica; Sendon, Jose Luis Lópenz; Shear, Paula; Sheinberg, Alanna; Shpritz, Barnett; Siedlecki, Karen; Simpson, Sheila A.; Singer, Adam; Smith, Jim; Smith, Megan; Smith, Glenn; Snyder, Pete; Song, Allen; Sran, Satwinder; Stephan, Klaas; Stober, Janice; Sü?muth, Sigurd; Suter, Greg; Tabrizi, Sarah; Tempkin, Terry; Testa, 
Claudia; Thompson, Sean; Thomsen, Teri; Thumma, Kelli; Toga, Arthur; Trautmann, Sonja; Tremont, Geoff; Turner, Jessica; Uc, Ergun; Vaccarino, Anthony; van Duijn, Eric; Van Walsem, Marleen; Vik, Stacie; Vonsattel, Jean Paul; Vuletich, Elizabeth; Warner, Tom; Wasserman, Paula; Wassink, Thomas; Waterman, Elijah; Weaver, Kurt; Weir, David; Welsh, Claire; Werling-Witkoske, Chris; Wesson, Melissa; Westervelt, Holly; Weydt, Patrick; Wheelock, Vicki; Williams, Kent; Williams, Janet; Wodarski, Mary; Wojcieszek, Joanne; Wood, Jessica; Wood-Siverio, Cathy; Wu, Shuhua; Yastrubetskaya, Olga; de Yebenes, Justo Garcia; Zhao, Yong Qiang; Zimbelman, Janice; Zschiegner, Roland; Aaserud, Olaf; Abbruzzese, Giovanni; Andrews, Thomasin; Andrich, Jurgin; Antczak, Jakub; Arran, Natalie; Artiga, Maria J. Saiz; Bachoud-Lévi, Anne-Catherine; Banaszkiewicz, Krysztof; di Poggio, Monica Bandettini; Bandmann, Oliver; Barbera, Miguel A.; Barker, Roger A.; Barrero, Francisco; Barth, Katrin; Bas, Jordi; Beister, Antoine; Bentivoglio, Anna Rita; Bertini, Elisabetta; Biunno, Ida; Bjørgo, Kathrine; Bjørnevoll, Inga; Bohlen, Stefan; Bonelli, Raphael M.; Bos, Reineke; Bourne, Colin; Bradbury, Alyson; Brockie, Peter; Brown, Felicity; Bruno, Stefania; Bryl, Anna; Buck, Andrea; Burg, Sabrina; Burgunder, Jean-Marc; Burns, Peter; Burrows, Liz; Busquets, Nuria; Busse, Monica; Calopa, Matilde; Carruesco, Gemma T.; Casado, Ana Gonzalez; Catena, Judit López; Chu, Carol; Ciesielska, Anna; Clapton, Jackie; Clayton, Carole; Clenaghan, Catherine; Coelho, Miguel; Connemann, Julia; Craufurd, David; Crooks, Jenny; Cubillo, Patricia Trigo; Cubo, Esther; Curtis, Adrienne; De Michele, Giuseppe; De Nicola, A.; de Souza, Jenny; de Weert, A. Marit; de Yébenes, Justo Garcia; Dekker, M.; Descals, A. Martínez; Di Maio, Luigi; Di Pietro, Anna; Dipple, Heather; Dose, Matthias; Dumas, Eve M.; Dunnett, Stephen; Ecker, Daniel; Elifani, F.; Ellison-Rose, Lynda; Elorza, Marina D.; Eschenbach, Carolin; Evans, Carole; Fairtlough, Helen; Fannemel, Madelein; Fasano, Alfonso; Fenollar, Maria; Ferrandes, Giovanna; Ferreira, Jaoquim J.; Fillingham, Kay; Finisterra, Ana Maria; Fisher, K.; Fletcher, Amy; Foster, Jillian; Foustanos, Isabella; Frech, Fernando A.; Fullam, Robert; Fullham, Ruth; Gago, Miguel; García, RocioGarcía-Ramos; García, Socorro S.; Garrett, Carolina; Gellera, Cinzia; Gill, Paul; Ginestroni, Andrea; Golding, Charlotte; Goodman, Anna; Gørvell, Per; Grant, Janet; Griguoli, A.; Gross, Diana; Guedes, Leonor; BascuñanaGuerra, Monica; Guerra, Maria Rosalia; Guerrero, Rosa; Guia, Dolores B.; Guidubaldi, Arianna; Hallam, Caroline; Hamer, Stephanie; Hammer, Kathrin; Handley, Olivia J.; Harding, Alison; Hasholt, Lis; Hedge, Reikha; Heiberg, Arvid; Heinicke, Walburgis; Held, Christine; Hernanz, Laura Casas; Herranhof, Briggitte; Herrera, Carmen Durán; Hidding, Ute; Hiivola, Heli; Hill, Susan; Hjermind, Lena. 
E.; Hobson, Emma; Hoffmann, Rainer; Holl, Anna Hödl; Howard, Liz; Hunt, Sarah; Huson, Susan; Ialongo, Tamara; Idiago, Jesus Miguel R.; Illmann, Torsten; Jachinska, Katarzyna; Jacopini, Gioia; Jakobsen, Oda; Jamieson, Stuart; Jamrozik, Zygmunt; Janik, Piotr; Johns, Nicola; Jones, Lesley; Jones, Una; Jurgens, Caroline K.; Kaelin, Alain; Kalbarczyk, Anna; Kershaw, Ann; Khalil, Hanan; Kieni, Janina; Klimberg, Aneta; Koivisto, Susana P.; Koppers, Kerstin; Kosinski, Christoph Michael; Krawczyk, Malgorzata; Kremer, Berry; Krysa, Wioletta; Kwiecinski, Hubert; Lahiri, Nayana; Lambeck, Johann; Lange, Herwig; Laver, Fiona; Leenders, K.L.; Levey, Jamie; Leythaeuser, Gabriele; Lezius, Franziska; Llesoy, Joan Roig; Löhle, Matthias; López, Cristobal Diez-Aja; Lorenza, Fortuna; Loria, Giovanna; Magnet, Markus; Mandich, Paola; Marchese, Roberta; Marcinkowski, Jerzy; Mariotti, Caterina; Mariscal, Natividad; Markova, Ivana; Marquard, Ralf; Martikainen, Kirsti; Martínez, Isabel Haro; Martínez-Descals, Asuncion; Martino, T.; Mason, Sarah; McKenzie, Sue; Mechi, Claudia; Mendes, Tiago; Mestre, Tiago; Middleton, Julia; Milkereit, Eva; Miller, Joanne; Miller, Julie; Minster, Sara; Möller, Jens Carsten; Monza, Daniela; Morales, Blas; Moreau, Laura V.; Moreno, Jose L. López-Sendón; Münchau, Alexander; Murch, Ann; Nielsen, Jørgen E.; Niess, Anke; Nørremølle, Anne; Novak, Marianne; O'Donovan, Kristy; Orth, Michael; Otti, Daniela; Owen, Michael; Padieu, Helene; Paganini, Marco; Painold, Annamaria; Päivärinta, Markku; Partington-Jones, Lucy; Paterski, Laurent; Paterson, Nicole; Patino, Dawn; Patton, Michael; Peinemann, Alexander; Peppa, Nadia; Perea, Maria Fuensanta Noguera; Peterson, Maria; Piacentini, Silvia; Piano, Carla; Càrdenas, Regina Pons i; Prehn, Christian; Price, Kathleen; Probst, Daniela; Quarrell, Oliver; Quiroga, Purificacion Pin; Raab, Tina; Rakowicz, Maryla; Raman, Ashok; Raymond, Lucy; Reilmann, Ralf; Reinante, Gema; Reisinger, Karin; Retterstol, Lars; Ribaï, Pascale; Riballo, Antonio V.; Ribas, Guillermo G.; Richter, Sven; Rickards, Hugh; Rinaldi, Carlo; Rissling, Ida; Ritchie, Stuart; Rivera, Susana Vázquez; Robert, Misericordia Floriach; Roca, Elvira; Romano, Silvia; Romoli, Anna Maria; Roos, Raymond A.C.; Røren, Niini; Rose, Sarah; Rosser, Elisabeth; Rosser, Anne; Rossi, Fabiana; Rothery, Jean; Rudzinska, Monika; Ruíz, Pedro J. García; Ruíz, Belan Garzon; Russo, Cinzia Valeria; Ryglewicz, Danuta; Saft, Carston; Salvatore, Elena; Sánchez, Vicenta; Sando, Sigrid Botne; Šašinková, Pavla; Sass, Christian; Scheibl, Monika; Schiefer, Johannes; Schlangen, Christiane; Schmidt, Simone; Schöggl, Helmut; Schrenk, Caroline; Schüpbach, Michael; Schuierer, Michele; Sebastián, Ana Rojo; Selimbegovic-Turkovic, Amina; Sempolowicz, Justyna; Silva, Mark; Sitek, Emilia; Slawek, Jaroslaw; Snowden, Julie; Soleti, Francesco; Soliveri, Paola; Sollom, Andrea; Soltan, Witold; Sorbi, Sandro; Sorensen, Sven Asger; Spadaro, Maria; Städtler, Michael; Stamm, Christiane; Steiner, Tanja; Stokholm, Jette; Stokke, Bodil; Stopford, Cheryl; Storch, Alexander; Straßburger, Katrin; Stubbe, Lars; Sulek, Anna; Szczudlik, Andrzej; Tabrizi, Sarah; Taylor, Rachel; Terol, Santiago Duran-Sindreu; Thomas, Gareth; Thompson, Jennifer; Thomson, Aileen; Tidswell, Katherine; Torres, Maria M. 
Antequera; Toscano, Jean; Townhill, Jenny; Trautmann, Sonja; Tucci, Tecla; Tuuha, Katri; Uhrova, Tereza; Valadas, Anabela; van Hout, Monique S.E.; van Oostrom, J.C.H.; van Vugt, Jeroen P.P.; vanm, Walsem Marleen R.; Vandenberghe, Wim; Verellen-Dumoulin, Christine; Vergara, Mar Ruiz; Verstappen, C.C.P.; Verstraelen, Nichola; Viladrich, Celia Mareca; Villanueva, Clara; Wahlström, Jan; Warner, Thomas; Wehus, Raghild; Weindl, Adolf; Werner, Cornelius J.; Westmoreland, Leann; Weydt, Patrick; Wiedemann, Alexandra; Wild, Edward; Wild, Sue; Witjes-Ané, Marie-Noelle; Witkowski, Grzegorz; Wójcik, Magdalena; Wolz, Martin; Wolz, Annett; Wright, Jan; Yardumian, Pam; Yates, Shona; Yudina, Elizaveta; Zaremba, Jacek; Zaugg, Sabine W.; Zdzienicka, Elzbieta; Zielonka, Daniel; Zielonka, Euginiusz; Zinzi, Paola; Zittel, Simone; Zucker, Birgrit; Adams, John; Agarwal, Pinky; Antonijevic, Irina; Beck, Christopher; Chiu, Edmond; Churchyard, Andrew; Colcher, Amy; Corey-Bloom, Jody; Dorsey, Ray; Drazinic, Carolyn; Dubinsky, Richard; Duff, Kevin; Factor, Stewart; Foroud, Tatiana; Furtado, Sarah; Giuliano, Joe; Greenamyre, Timothy; Higgins, Don; Jankovic, Joseph; Jennings, Dana; Kang, Un Jung; Kostyk, Sandra; Kumar, Rajeev; Leavitt, Blair; LeDoux, Mark; Mallonee, William; Marshall, Frederick; Mohlo, Eric; Morgan, John; Oakes, David; Panegyres, Peter; Panisset, Michel; Perlman, Susan; Perlmutter, Joel; Quaid, Kimberly; Raymond, Lynn; Revilla, Fredy; Robertson, Suzanne; Robottom, Bradley; Sanchez-Ramos, Juan; Scott, Burton; Shannon, Kathleen; Shoulson, Ira; Singer, Carlos; Tabbal, Samer; Testa, Claudia; van, Kammen Dan; Vetter, Louise; Walker, Francis; Warner, John; Weiner, illiam; Wheelock, Vicki; Yastrubetskaya, Olga; Barton, Stacey; Broyles, Janice; Clouse, Ronda; Coleman, Allison; Davis, Robert; Decolongon, Joji; DeLaRosa, Jeanene; Deuel, Lisa; Dietrich, Susan; Dubinsky, Hilary; Eaton, Ken; Erickson, Diane; Fitzpatrick, Mary Jane; Frucht, Steven; Gartner, Maureen; Goldstein, Jody; Griffith, Jane; Hickey, Charlyne; Hunt, Victoria; Jaglin, Jeana; Klimek, Mary Lou; Lindsay, Pat; Louis, Elan; Loy, Clemet; Lucarelli, Nancy; Malarick, Keith; Martin, Amanda; McInnis, Robert; Moskowitz, Carol; Muratori, Lisa; Nucifora, Frederick; O'Neill, Christine; Palao, Alicia; Peavy, Guerry; Quesada, Monica; Schmidt, Amy; Segro, Vicki; Sperin, Elaine; Suter, Greg; Tanev, Kalo; Tempkin, Teresa; Thiede, Curtis; Wasserman, Paula; Welsh, Claire; Wesson, Melissa; Zauber, Elizabeth

    2012-01-01

    Objective: Age at onset of diagnostic motor manifestations in Huntington disease (HD) is strongly correlated with an expanded CAG trinucleotide repeat. The length of the normal CAG repeat allele has been reported also to influence age at onset, in interaction with the expanded allele. Due to profound implications for disease mechanism and modification, we tested whether the normal allele, interaction between the expanded and normal alleles, or presence of a second expanded allele affects age at onset of HD motor signs. Methods: We modeled natural log-transformed age at onset as a function of CAG repeat lengths of expanded and normal alleles and their interaction by linear regression. Results: An apparently significant effect of interaction on age at motor onset among 4,068 subjects was dependent on a single outlier data point. A rigorous statistical analysis with a well-behaved dataset that conformed to the fundamental assumptions of linear regression (e.g., constant variance and normally distributed error) revealed significance only for the expanded CAG repeat, with no effect of the normal CAG repeat. Ten subjects with 2 expanded alleles showed an age at motor onset consistent with the length of the larger expanded allele. Conclusions: Normal allele CAG length, interaction between expanded and normal alleles, and presence of a second expanded allele do not influence age at onset of motor manifestations, indicating that the rate of HD pathogenesis leading to motor diagnosis is determined by a completely dominant action of the longest expanded allele and as yet unidentified genetic or environmental factors. Neurology® 2012;78:690–695 PMID:22323755
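
    A minimal sketch of the regression described in the Methods, fitted by ordinary least squares on hypothetical simulated data rather than the study's subjects: natural-log age at onset is modeled as a function of expanded and normal CAG lengths and their interaction.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data mimicking the model structure (not the study's dataset).
rng = np.random.default_rng(7)
n = 500
expanded = rng.integers(40, 56, n)
normal = rng.integers(15, 31, n)
# In this simulation only the expanded allele drives log age at onset.
log_onset = 6.0 - 0.06 * expanded + rng.normal(0, 0.15, n)

df = pd.DataFrame({"log_onset": log_onset, "expanded": expanded, "normal": normal})
fit = smf.ols("log_onset ~ expanded * normal", data=df).fit()
print(fit.summary().tables[1])   # inspect expanded, normal, and interaction terms
```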

  17. Evaluation of electrolytic tilt sensors for measuring model angle of attack in wind tunnel tests

    NASA Technical Reports Server (NTRS)

    Wong, Douglas T.

    1992-01-01

The results of a laboratory evaluation of electrolytic tilt sensors as potential candidates for measuring model attitude or angle of attack in wind tunnel tests are presented. The performance of eight electrolytic tilt sensors was compared with that of typical servo accelerometers used for angle-of-attack measurements. The areas evaluated included linearity, hysteresis, repeatability, temperature characteristics, roll-on-pitch interaction, sensitivity to lead-wire resistance, step response time, and rectification. Among the sensors evaluated, the Spectron model RG-37 electrolytic tilt sensors had the highest overall accuracy in terms of linearity, hysteresis, repeatability, temperature sensitivity, and roll sensitivity. A comparison with the servo accelerometers revealed that the accuracy of the RG-37 sensors was, on average, about one order of magnitude worse. Even though each tilt sensor costs about one-third as much as a servo accelerometer, the sensors are considered unsuitable for angle-of-attack measurements. However, the potential exists for other applications such as wind tunnel wall-attitude measurements, where the errors resulting from roll interaction, vibration, and response time are smaller and the sensor temperature can be controlled.

  18. On Partial Fraction Decompositions by Repeated Polynomial Divisions

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong

    2017-01-01

    We present a method for finding partial fraction decompositions of rational functions with linear or quadratic factors in the denominators by means of repeated polynomial divisions. This method does not involve differentiation or solving linear equations for obtaining the unknown partial fraction coefficients, which is very suitable for either…

  19. ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.

    PubMed

    Lee, Keunbaik; Baek, Changryong; Daniels, Michael J

    2017-11-01

In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: the modified Cholesky decomposition for autoregressive (AR) structure and the moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining AR and MA modeling of the covariance matrix, which we denote ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
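
    A minimal sketch of the modified Cholesky decomposition that underlies the AR part of such covariance models: T Σ T' = D, where the negated sub-diagonal entries of the unit lower-triangular T act as generalized autoregressive parameters and D holds innovation variances. The covariance matrix below is a hypothetical AR(1)-like example.

```python
import numpy as np

def modified_cholesky(Sigma):
    """Return (T, D) with T unit lower triangular and T @ Sigma @ T.T = D (diagonal)."""
    L = np.linalg.cholesky(Sigma)              # Sigma = L @ L.T
    T = np.diag(np.diag(L)) @ np.linalg.inv(L) # unit lower triangular
    D = np.diag(np.diag(L) ** 2)               # innovation variances
    return T, D

# Hypothetical AR(1)-like covariance for 5 repeated outcomes.
rho, sd = 0.6, 2.0
idx = np.arange(5)
Sigma = sd**2 * rho ** np.abs(idx[:, None] - idx[None, :])

T, D = modified_cholesky(Sigma)
assert np.allclose(T @ Sigma @ T.T, D)
print(np.round(T, 3))          # sub-diagonal entries are -(autoregressive parameters)
print(np.round(np.diag(D), 3)) # innovation variances
```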

  20. On summary measure analysis of linear trend repeated measures data: performance comparison with two competing methods.

    PubMed

    Vossoughi, Mehrdad; Ayatollahi, S M T; Towhidi, Mina; Ketabchi, Farzaneh

    2012-03-22

The summary measure approach (SMA) is sometimes the only applicable tool for the analysis of repeated measurements in medical research, especially when the number of measurements is relatively large. This study aimed to describe techniques based on summary measures for the analysis of linear trend repeated measures data and then to compare the performances of the SMA, the linear mixed model (LMM), and the unstructured multivariate approach (UMA). Practical guidelines based on the least squares regression slope and mean of response over time for each subject were provided to test time, group, and interaction effects. Through Monte Carlo simulation studies, the efficacy of the SMA vs. the LMM and traditional UMA, under different types of covariance structures, was illustrated. All the methods were also employed to analyze two real data examples. Based on the simulation and example results, it was found that the SMA completely dominated the traditional UMA and performed convincingly close to the best-fitting LMM in testing all the effects. However, the LMM was often not robust and led to non-sensible results when the covariance structure for errors was misspecified. The results argue for discarding the UMA, which often yielded extremely conservative inferences for such data. It was shown that the summary measure approach is simple, safe and powerful, and that the loss of efficiency compared to the best-fitting LMM was generally negligible. The SMA is recommended as the first choice to reliably analyze linear trend data with a moderate to large number of measurements and/or small to moderate sample sizes.
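
    A minimal sketch of the summary measure approach on hypothetical two-group linear-trend data: each subject's repeated measurements are reduced to a least-squares slope and a mean, and the time, group, and interaction effects are then tested with ordinary one- and two-sample t-tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n_per_group, n_times = 20, 8
time = np.arange(n_times, dtype=float)

def simulate(slope):
    subj_int = rng.normal(10, 2, n_per_group)[:, None]
    return subj_int + slope * time + rng.normal(0, 1, (n_per_group, n_times))

yA, yB = simulate(0.5), simulate(1.0)   # hypothetical linear-trend data

def slopes_and_means(y):
    slopes = np.array([np.polyfit(time, row, 1)[0] for row in y])
    return slopes, y.mean(axis=1)

sA, mA = slopes_and_means(yA)
sB, mB = slopes_and_means(yB)

# Time effect: are the per-subject slopes different from zero overall?
print("time:", stats.ttest_1samp(np.concatenate([sA, sB]), 0.0))
# Interaction (group x time): do the slopes differ between groups?
print("interaction:", stats.ttest_ind(sA, sB))
# Group effect: do the subject means differ between groups?
print("group:", stats.ttest_ind(mA, mB))
```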

  1. Multi-disease analysis of maternal antibody decay using non-linear mixed models accounting for censoring.

    PubMed

    Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel

    2015-09-10

    Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.

  2. Effects of Expressive Writing on Disgust and Anxiety in a Subsequent Dissection

    NASA Astrophysics Data System (ADS)

    Randler, Christoph; Wüst-Ackermann, Peter; im Kampe, Viola Otte; Meyer-Ahrens, Inga H.; Tempel, Benjamin J.; Vollmer, Christian

    2015-10-01

    Emotions influence motivation and achievement, but negative emotions have rarely been assessed in science education. In this study, we assessed the influence of two different expressive writing assignments on disgust and anxiety in university students prior to the dissection of a trout. We randomly assigned students to one of two expressive writing tasks and measured specific state disgust and state anxiety after writing and after the dissection. Specific state disgust was measured a third time after 3 weeks. One writing task was concerned with the dissection, and the other was related to behavioral experiments with mice. We used two general linear models with repeated measures. In the first model, specific state disgust (pre, post, and follow-up) was used as the dependent repeated measure and experimental group as the independent variable. In the second model, state anxiety was used as the dependent repeated measure (pre, post) with experimental group as the independent variable. The repeated testing showed a highly significant effect of experimental group on the repeated measures of disgust. Writing about worries and emotions concerning the dissection leads to higher disgust scores compared to writing about mice. These higher scores persisted even 3 weeks later in the follow-up test. Concerning anxiety, there was a clear influence of the repeated measure of state anxiety, but anxiety was not influenced by the experimental group. We suggest that positive writing should be used in educational contexts to reduce disgust.

  3. Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.

    2013-01-01

    Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
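
    A minimal numerical check (hypothetical data) of the algebraic equivalence the abstract re-derives: an intercept-only Poisson regression of event indicators with a log(time) offset recovers exactly the exponential-survival rate estimate, events divided by total follow-up time.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical exponentially distributed sojourn times, some right-censored.
rng = np.random.default_rng(9)
n, rate = 300, 0.8
t = rng.exponential(1 / rate, n)
censor = rng.uniform(0.5, 3.0, n)
event = (t <= censor).astype(float)
time = np.minimum(t, censor)

# Poisson log-linear model with a log(time) offset (intercept only).
X = np.ones((n, 1))
fit = sm.GLM(event, X, family=sm.families.Poisson(),
             offset=np.log(time)).fit()

print("Poisson intercept rate:", np.exp(fit.params[0]))
print("Exponential MLE rate:  ", event.sum() / time.sum())  # identical
```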

  4. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case, which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. When these tests are satisfied, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct, or to perform uncertainty tradeoffs with, model validating uncertainty sets that have a specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example, which includes a comparison of candidate model validating sets, is given.

  5. Estimating linear effects in ANOVA designs: the easy way.

    PubMed

    Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana

    2012-09-01

    Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
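
    A minimal sketch of the idea on hypothetical repeated-measures data: linear contrast weights applied to each participant's condition values yield per-participant slopes, a one-sample t-test on the slopes tests the linear effect, and the share of the condition means' variability captured by the linear trend serves as an effect size.

```python
import numpy as np
from scipy import stats

# Hypothetical repeated measures: RT at 5 numerical distances for 25 participants.
rng = np.random.default_rng(10)
levels = np.array([1, 2, 3, 4, 5], float)            # within-subject factor levels
n = 25
rt = (600 - 15 * levels
      + rng.normal(0, 20, (n, levels.size))
      + rng.normal(0, 30, (n, 1)))                   # subject-specific intercepts

w = levels - levels.mean()                           # linear contrast weights
slopes = (rt * w).sum(axis=1) / (w ** 2).sum()       # per-participant slopes
t, p = stats.ttest_1samp(slopes, 0.0)                # test of the linear effect

# Proportion of the condition-mean variability accounted for by the linear trend.
cond_means = rt.mean(axis=0)
ss_linear = (w @ cond_means) ** 2 / (w ** 2).sum()
ss_cond = ((cond_means - cond_means.mean()) ** 2).sum()
print(f"mean slope = {slopes.mean():.1f} ms/unit, t = {t:.2f}, p = {p:.4f}, "
      f"linear share = {ss_linear / ss_cond:.2f}")
```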

  6. Repeated Challenge Studies: A Comparison of Union-Intersection Testing with Linear Modeling.

    ERIC Educational Resources Information Center

    Levine, Richard A.; Ohman, Pamela A.

    1997-01-01

    Challenge studies can be used to see whether there is a causal relationship between an agent of interest and a response. An approach based on union-intersection testing is presented that allows researchers to examine observations on a single subject and test the hypothesis of interest. An application using psychological data is presented. (SLD)

  7. Within-Subject Comparison of Changes in a Pretest-Posttest Design

    ERIC Educational Resources Information Center

    Hennig, Christian; Mullensiefen, Daniel; Bargmann, Jens

    2010-01-01

    The authors propose a method to compare the influence of a treatment on different properties within subjects. The properties are measured by several Likert-type-scaled items. The results show that many existing approaches, such as repeated measurement analysis of variance on sum and mean scores, a linear partial credit model, and a graded response…

  8. Effective Use of Multimedia Presentations to Maximize Learning within High School Science Classrooms

    ERIC Educational Resources Information Center

    Rapp, Eric

    2013-01-01

This research used an evidence-based experimental 2 x 2 factorial design analyzed with a General Linear Model Repeated Measures Analysis of Covariance (RMANCOVA). For this analysis, time served as the within-subjects factor, while treatment group (i.e., static and signaling, dynamic and signaling, static without signaling, and dynamic without signaling)…

  9. The Impact of Repeat HIV Testing on Risky Sexual Behavior: Evidence from a Randomized Controlled Trial in Malawi

    PubMed Central

    Delavande, Adeline; Wagner, Zachary; Sood, Neeraj

    2016-01-01

    A significant proportion of HIV-positive adults in sub-Saharan Africa are in serodiscordant relationships. Identification of such serodiscordant couples through couple HIV testing and counseling (HTC) is thought to promote safe sexual behavior and reduce the probability of within couple seroconversion. However, it is possible HTC benefits are not sustained over time and therefore repeated HTC may be more effective at preventing seroconversion than one time HTC. We tested this theory in Zomba, Malawi by randomly assigning 170 serodiscordant couples to receive repeated HTC and 167 serodiscordant couples to receive one time HTC upon study enrollment (control group). We used linear probability models and probit model with couple fixed effects to assess the impact of the intervention on risky sexual behavior. At one-year follow-up, we found that couples that received repeated HTC reported significantly more condom use. However, we found no difference in rate of seroconversion between groups, nor did we find differences in subjective expectations about seroconversion or false beliefs about HIV, two expected pathways of behavior change. We conclude that repeated HTC may promote safe sexual behavior, but this result should be interpreted with caution, as it is inconsistent with the result from biological and subjective outcomes. PMID:27158553

  10. The Impact of Repeat HIV Testing on Risky Sexual Behavior: Evidence from a Randomized Controlled Trial in Malawi.

    PubMed

    Delavande, Adeline; Wagner, Zachary; Sood, Neeraj

    2016-03-01

    A significant proportion of HIV-positive adults in sub-Saharan Africa are in serodiscordant relationships. Identification of such serodiscordant couples through couple HIV testing and counseling (HTC) is thought to promote safe sexual behavior and reduce the probability of within couple seroconversion. However, it is possible HTC benefits are not sustained over time and therefore repeated HTC may be more effective at preventing seroconversion than one time HTC. We tested this theory in Zomba, Malawi by randomly assigning 170 serodiscordant couples to receive repeated HTC and 167 serodiscordant couples to receive one time HTC upon study enrollment (control group). We used linear probability models and probit model with couple fixed effects to assess the impact of the intervention on risky sexual behavior. At one-year follow-up, we found that couples that received repeated HTC reported significantly more condom use. However, we found no difference in rate of seroconversion between groups, nor did we find differences in subjective expectations about seroconversion or false beliefs about HIV, two expected pathways of behavior change. We conclude that repeated HTC may promote safe sexual behavior, but this result should be interpreted with caution, as it is inconsistent with the result from biological and subjective outcomes.

  11. Bayesian quantile regression-based partially linear mixed-effects joint models for longitudinal data with multiple features.

    PubMed

    Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara

    2017-01-01

In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates due to outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various data features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
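
    The full Bayesian partially linear mixed-effects joint model with censoring is well beyond a snippet; as a minimal illustration of the quantile-regression component alone, the sketch below fits several quantiles of a hypothetical heavy-tailed viral-load-style outcome on CD4 using statsmodels' QuantReg.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: log10 viral load vs standardized CD4, with heavy-tailed errors.
rng = np.random.default_rng(11)
n = 600
cd4 = rng.normal(0, 1, n)
viral = 4.0 - 0.6 * cd4 + rng.standard_t(3, n) * 0.5

df = pd.DataFrame({"viral": viral, "cd4": cd4})
for q in (0.25, 0.5, 0.75):
    fit = smf.quantreg("viral ~ cd4", df).fit(q=q)
    print(f"tau = {q}: CD4 effect = {fit.params['cd4']:.3f}")
```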

  12. Direct Observation of Parallel Folding Pathways Revealed Using a Symmetric Repeat Protein System

    PubMed Central

    Aksel, Tural; Barrick, Doug

    2014-01-01

    Although progress has been made to determine the native fold of a polypeptide from its primary structure, the diversity of pathways that connect the unfolded and folded states has not been adequately explored. Theoretical and computational studies predict that proteins fold through parallel pathways on funneled energy landscapes, although experimental detection of pathway diversity has been challenging. Here, we exploit the high translational symmetry and the direct length variation afforded by linear repeat proteins to directly detect folding through parallel pathways. By comparing folding rates of consensus ankyrin repeat proteins (CARPs), we find a clear increase in folding rates with increasing size and repeat number, although the size of the transition states (estimated from denaturant sensitivity) remains unchanged. The increase in folding rate with chain length, as opposed to a decrease expected from typical models for globular proteins, is a clear demonstration of parallel pathways. This conclusion is not dependent on extensive curve-fitting or structural perturbation of protein structure. By globally fitting a simple parallel-Ising pathway model, we have directly measured nucleation and propagation rates in protein folding, and have quantified the fluxes along each path, providing a detailed energy landscape for folding. This finding of parallel pathways differs from results from kinetic studies of repeat-proteins composed of sequence-variable repeats, where modest repeat-to-repeat energy variation coalesces folding into a single, dominant channel. Thus, for globular proteins, which have much higher variation in local structure and topology, parallel pathways are expected to be the exception rather than the rule. PMID:24988356

  13. Use of non-linear mixed-effects modelling and regression analysis to predict the number of somatic coliphages by plaque enumeration after 3 hours of incubation.

    PubMed

    Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco

    2017-10-01

The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its application two linear models were developed to predict the numbers of coliphages reliably from the PFU counts determined by the ISO method after only 3 hours of incubation. When the number of plaques detected after 3 hours was between 4 and 26 PFU, the fitted model was (1.48 × Counts 3 h + 1.97); for counts >26 PFU, the fit was (1.18 × Counts 3 h + 2.95). If the number of plaques detected was <4 PFU after 3 hours, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has a reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
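
    The two fitted linear models quoted above, wrapped as a small prediction helper; the coefficients and the 4/26 PFU thresholds are taken directly from the abstract, and counts below 4 PFU simply fall back to the recommended (18 ± 3) h incubation.

```python
def predict_final_pfu(counts_3h):
    """Predict the (18 +/- 3) h somatic coliphage count from the 3 h plaque count,
    using the two linear fits reported in the abstract."""
    if counts_3h < 4:
        return None          # too few plaques: incubate the full (18 +/- 3) hours
    if counts_3h <= 26:
        return 1.48 * counts_3h + 1.97
    return 1.18 * counts_3h + 2.95

for c in (2, 10, 40):
    print(c, predict_final_pfu(c))
```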

  14. Do Stress Trajectories Predict Mortality in Older Men? Longitudinal Findings from the VA Normative Aging Study

    PubMed Central

    Aldwin, Carolyn M.; Molitor, Nuoo-Ting; Avron, Spiro; Levenson, Michael R.; Molitor, John; Igarashi, Heidi

    2011-01-01

    We examined long-term patterns of stressful life events (SLE) and their impact on mortality contrasting two theoretical models: allostatic load (linear relationship) and hormesis (inverted U relationship) in 1443 NAS men (aged 41–87 in 1985; M = 60.30, SD = 7.3) with at least two reports of SLEs over 18 years (total observations = 7,634). Using a zero-inflated Poisson growth mixture model, we identified four patterns of SLE trajectories, three showing linear decreases over time with low, medium, and high intercepts, respectively, and one an inverted U, peaking at age 70. Repeating the analysis omitting two health-related SLEs yielded only the first three linear patterns. Compared to the low-stress group, both the moderate and the high-stress groups showed excess mortality, controlling for demographics and health behavior habits, HRs = 1.42 and 1.37, ps <.01 and <.05. The relationship between stress trajectories and mortality was complex and not easily explained by either theoretical model. PMID:21961066

  15. Development and Preliminary Testing of a High Precision Long Stroke Slit Change Mechanism for the SPICE Instrument

    NASA Technical Reports Server (NTRS)

    Paciotti, Gabriel; Humphries, Martin; Rottmeier, Fabrice; Blecha, Luc

    2014-01-01

    In the frame of ESA's Solar Orbiter scientific mission, Almatech has been selected to design, develop and test the Slit Change Mechanism of the SPICE (SPectral Imaging of the Coronal Environment) instrument. In order to guarantee the optical cleanliness level while fulfilling stringent positioning accuracy and repeatability requirements for slit positioning in the optical path of the instrument, a linear guiding system based on a double flexible blade arrangement has been selected. The four different slits to be used for the SPICE instrument resulted in a total stroke of 16.5 mm in this linear slit changer arrangement. The combination of long stroke and high precision positioning requirements has been identified as the main design challenge to be validated through breadboard model testing. This paper presents the development of SPICE's Slit Change Mechanism (SCM) and the two-step validation tests successfully performed on breadboard models of its flexible blade support system. The validation test results have demonstrated the full adequacy of the flexible blade guiding system implemented in SPICE's Slit Change Mechanism in a stand-alone configuration. Further breadboard test results, studying the influence of the compliant connection to the SCM linear actuator on an enhanced flexible guiding system design, have shown significant enhancements in the positioning accuracy and repeatability of the selected flexible guiding system. Preliminary evaluation of the linear actuator design, including a detailed tolerance analysis, has shown the suitability of this satellite roller screw based mechanism for the actuation of the tested flexible guiding system and compliant connection. The presented development and preliminary testing of the high-precision long-stroke Slit Change Mechanism for the SPICE Instrument are considered fully successful, such that future tests of the full Slit Change Mechanism can be performed, with the gained confidence, directly on a Qualification Model. The selected linear Slit Change Mechanism design concept, consisting of a flexible guiding system driven by a hermetically sealed linear drive mechanism, is considered validated for the specific application of the SPICE instrument, with great potential for other special applications where contamination and high precision positioning are dominant design drivers.

  16. Genome-Wide Stochastic Adaptive DNA Amplification at Direct and Inverted DNA Repeats in the Parasite Leishmania

    PubMed Central

    Plourde, Marie; Gingras, Hélène; Roy, Gaétan; Lapointe, Andréanne; Leprohon, Philippe; Papadopoulou, Barbara; Corbeil, Jacques; Ouellette, Marc

    2014-01-01

    Gene amplification of specific loci has been described in all kingdoms of life. In the protozoan parasite Leishmania, the product of amplification is usually part of extrachromosomal circular or linear amplicons that are formed at the level of direct or inverted repeated sequences. A bioinformatics screen revealed that repeated sequences are widely distributed in the Leishmania genome and the repeats are chromosome-specific, conserved among species, and generally present in low copy number. Using sensitive PCR assays, we provide evidence that the Leishmania genome is continuously being rearranged at the level of these repeated sequences, which serve as a functional platform for constitutive and stochastic amplification (and deletion) of genomic segments in the population. This process is adaptive as the copy number of advantageous extrachromosomal circular or linear elements increases upon selective pressure and is reversible when selection is removed. We also provide mechanistic insights on the formation of circular and linear amplicons through RAD51 recombinase-dependent and -independent mechanisms, respectively. The whole genome of Leishmania is thus stochastically rearranged at the level of repeated sequences, and the selection of parasite subpopulations with changes in the copy number of specific loci is used as a strategy to respond to a changing environment. PMID:24844805

  17. On the global well-posedness theory for a class of PDE models for criminal activity

    NASA Astrophysics Data System (ADS)

    Rodríguez, N.

    2013-10-01

    We study a class of ‘reaction-advection-diffusion’ systems of partial differential equations, which can be taken as basic models for criminal activity. This class of models is based on routine activity theory and other theories, such as the ‘repeat and near-repeat victimization effect’, and was first introduced in Short et al. (2008) [11]. In these models the criminal density is advected by a velocity field that depends on a scalar field, which measures the appeal to commit a crime. We refer to this scalar field as the attractiveness field. We prove local well-posedness of solutions for the general class of models. Furthermore, we prove global well-posedness of solutions to a fully-parabolic system with a velocity field that depends logarithmically on the attractiveness field. Our final result is the global well-posedness of solutions to the fully-parabolic system with a velocity field that depends linearly on the attractiveness field, for small initial mass.
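
    For orientation, a schematic form of this class of systems is shown below (this is not the exact set of equations from Short et al. (2008); the symbols φ, f and g are placeholders). The two well-posedness results correspond to φ(A) = log A (logarithmic dependence) and φ(A) = A (linear dependence).

```latex
% rho = criminal density, A = attractiveness field, phi fixes the advection law.
\begin{align*}
  \partial_t \rho &= \Delta \rho \;-\; \nabla\!\cdot\!\bigl(\rho\,\nabla \phi(A)\bigr) \;+\; f(\rho, A),\\
  \partial_t A    &= \eta\,\Delta A \;-\; A \;+\; g(\rho, A).
\end{align*}
% phi(A) = log A : logarithmic dependence (global well-posedness proved);
% phi(A) = A     : linear dependence (global well-posedness for small initial mass).
```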

  18. Quantitative MRI establishes the efficacy of PI3K inhibitor (GDC-0941) multi-treatments in PTEN-deficient mice lymphoma.

    PubMed

    Wullschleger, Stephan; García-Martínez, Juan M; Duce, Suzanne L

    2012-02-01

    To assess the efficacy of multiple treatment of phosphatidylinositol-3-kinase (PI3K) inhibitor on autochthonous tumours in phosphatase and tensin homologue (Pten)-deficient genetically engineered mouse cancer models using a longitudinal magnetic resonance imaging (MRI) protocol. Using 3D MRI, B-cell follicular lymphoma growth was quantified in a Pten+/-Lkb1+/hypo mouse line, before, during and after repeated treatments with a PI3K inhibitor GDC-0941 (75 mg/kg). Mean pre-treatment linear tumour growth rate was 16.5±12.8 mm³/week. Repeated 28-day GDC-0941 administration, with 21 days 'off-treatment', induced average tumour regression of 41±7%. Upon cessation of the second treatment (which was not permanently cytocidal), tumours re-grew with an average linear growth rate of 40.1±15.5 mm³/week. There was no evidence of chemoresistance. This protocol can accommodate complex dosing schedules, as well as combine different cancer therapies. It reduces biological variability problems and resulted in a 10-fold reduction in mouse numbers compared with terminal assessment methods. It is ideal for preclinical efficacy studies and for phenotyping molecularly characterized mouse models when investigating gene function.

  19. Application of laser scanning technique in earthquake protection of Istanbul's historical heritage buildings

    NASA Astrophysics Data System (ADS)

    Çaktı, Eser; Ercan, Tülay; Dar, Emrullah

    2017-04-01

    Istanbul's vast historical and cultural heritage is under constant threat of earthquakes. Historical records report repeated damage to the city's landmark buildings. Our efforts towards earthquake protection of several buildings in Istanbul involve earthquake monitoring via structural health monitoring systems, linear and non-linear structural modelling and analysis to assess past and future earthquake performance, shake-table testing of scaled models, and non-destructive testing. More recently we have been using laser technology in monitoring structural deformations and damage in five monumental buildings: the Hagia Sophia Museum and the Fatih, Sultanahmet, Süleymaniye and Mihrimah Sultan Mosques. This presentation is about these efforts, with special emphasis on the use of laser scanning in the monitoring of edifices.

  20. Estimating population trends with a linear model

    USGS Publications Warehouse

    Bart, Jonathan; Collins, Brian D.; Morrison, R.I.G.

    2003-01-01

    We describe a simple and robust method for estimating trends in population size. The method may be used with Breeding Bird Survey data, aerial surveys, point counts, or any other program of repeated surveys at permanent locations. Surveys need not be made at each location during each survey period. The method differs from most existing methods in being design based, rather than model based. The only assumptions are that the nominal sampling plan is followed and that sample size is large enough for use of the t-distribution. Simulations based on two bird data sets from natural populations showed that the point estimate produced by the linear model was essentially unbiased even when counts varied substantially and 25% of the complete data set was missing. The estimating-equation approach, often used to analyze Breeding Bird Survey data, performed similarly on one data set but had substantial bias on the second data set, in which counts were highly variable. The advantages of the linear model are its simplicity, flexibility, and that it is self-weighting. A user-friendly computer program to carry out the calculations is available from the senior author.
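
    One simple reading of such a design-based estimate is sketched below under my own assumptions (fit a slope at each permanent location, average the site-level slopes, and use the t-distribution across sites for the interval); it illustrates the idea but is not necessarily the paper's exact estimator.

```python
import numpy as np
from scipy import stats

def trend_estimate(years, counts_by_site, alpha=0.05):
    """Illustrative design-based trend estimate: an OLS slope per permanent
    location, averaged across locations, with a t-based confidence interval.
    Sites may miss some survey periods (encoded as np.nan)."""
    slopes = []
    for counts in counts_by_site:
        counts = np.asarray(counts, dtype=float)
        mask = ~np.isnan(counts)
        if mask.sum() >= 2:                      # need at least two surveys
            slope, _ = np.polyfit(np.asarray(years)[mask], counts[mask], 1)
            slopes.append(slope)
    slopes = np.array(slopes)
    mean_slope = slopes.mean()
    se = slopes.std(ddof=1) / np.sqrt(len(slopes))
    t = stats.t.ppf(1 - alpha / 2, df=len(slopes) - 1)
    return mean_slope, (mean_slope - t * se, mean_slope + t * se)

years = [2000, 2001, 2002, 2003]
counts = [[12, 14, np.nan, 17], [8, 7, 9, 10], [20, 22, 21, 25]]
print(trend_estimate(years, counts))             # trend and 95% CI
```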

  1. The Physiological Molecular Shape of Spectrin: A Compact Supercoil Resembling a Chinese Finger Trap.

    PubMed

    Brown, Jeffrey W; Bullitt, Esther; Sriswasdi, Sira; Harper, Sandra; Speicher, David W; McKnight, C James

    2015-06-01

    The primary, secondary, and tertiary structures of spectrin are reasonably well defined, but the structural basis for the known dramatic molecular shape change, whereby the molecular length can increase three-fold, is not understood. In this study, we combine previously reported biochemical and high-resolution crystallographic data with structural mass spectroscopy and electron microscopic data to derive a detailed, experimentally-supported quaternary structure of the spectrin heterotetramer. In addition to explaining spectrin's physiological resting length of ~55-65 nm, our model provides a mechanism by which spectrin is able to undergo a seamless three-fold extension while remaining a linear filament, an experimentally observed property. According to the proposed model, spectrin's quaternary structure and mechanism of extension is similar to a Chinese Finger Trap: at shorter molecular lengths spectrin is a hollow cylinder that extends by increasing the pitch of each spectrin repeat, which decreases the internal diameter. We validated our model with electron microscopy, which demonstrated that, as predicted, spectrin is hollow at its biological resting length of ~55-65 nm. The model is further supported by zero-length chemical crosslink data indicative of an approximately 90 degree bend between adjacent spectrin repeats. The domain-domain interactions in our model are entirely consistent with those present in the prototypical linear antiparallel heterotetramer as well as recently reported inter-strand chemical crosslinks. The model is consistent with all known physical properties of spectrin, and upon full extension our Chinese Finger Trap Model reduces to the ~180-200 nm molecular model currently in common use.

  2. Spatial generalised linear mixed models based on distances.

    PubMed

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  3. Short intronic repeat sequences facilitate circular RNA production

    PubMed Central

    Liang, Dongming

    2014-01-01

    Recent deep sequencing studies have revealed thousands of circular noncoding RNAs generated from protein-coding genes. These RNAs are produced when the precursor messenger RNA (pre-mRNA) splicing machinery “backsplices” and covalently joins, for example, the two ends of a single exon. However, the mechanism by which the spliceosome selects only certain exons to circularize is largely unknown. Using extensive mutagenesis of expression plasmids, we show that miniature introns containing the splice sites along with short (∼30- to 40-nucleotide) inverted repeats, such as Alu elements, are sufficient to allow the intervening exons to circularize in cells. The intronic repeats must base-pair to one another, thereby bringing the splice sites into close proximity to each other. More than simple thermodynamics is clearly at play, however, as not all repeats support circularization, and increasing the stability of the hairpin between the repeats can sometimes inhibit circular RNA biogenesis. The intronic repeats and exonic sequences must collaborate with one another, and a functional 3′ end processing signal is required, suggesting that circularization may occur post-transcriptionally. These results suggest detailed and generalizable models that explain how the splicing machinery determines whether to produce a circular noncoding RNA or a linear mRNA. PMID:25281217

  4. A discourse on sensitivity analysis for discretely-modeled structures

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selections of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
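
    As a concrete instance of one topic in the review, the sketch below (the toy model and function names are mine) compares a semi-analytic sensitivity of static response, du/dp = -K⁻¹(dK/dp)u for K(p)u = f, with a forward finite difference whose step size must be chosen with care.

```python
import numpy as np

def static_sensitivity(K, dK_dp, f):
    """Semi-analytic sensitivity of static response u(p) with K(p) u = f:
    differentiating gives K du/dp = -(dK/dp) u, hence du/dp = -K^{-1} (dK/dp) u."""
    u = np.linalg.solve(K, f)
    return np.linalg.solve(K, -dK_dp @ u)

def fd_sensitivity(K_of_p, p, f, h=1e-6):
    """Forward finite-difference check; the choice of step size h is one of the
    issues discussed in the review."""
    u0 = np.linalg.solve(K_of_p(p), f)
    u1 = np.linalg.solve(K_of_p(p + h), f)
    return (u1 - u0) / h

# Toy 2-DOF spring model: the stiffness of the first spring is the design variable p.
K_of_p = lambda p: np.array([[p + 5.0, -5.0], [-5.0, 5.0]])
p, f = 10.0, np.array([0.0, 1.0])
print(static_sensitivity(K_of_p(p), np.array([[1.0, 0.0], [0.0, 0.0]]), f))
print(fd_sensitivity(K_of_p, p, f))   # should agree to several digits
```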

  5. Noninvasive and fast measurement of blood glucose in vivo by near infrared (NIR) spectroscopy

    NASA Astrophysics Data System (ADS)

    Jintao, Xue; Liming, Ye; Yufei, Liu; Chunyan, Li; Han, Chen

    2017-05-01

    This research aimed to develop a method for noninvasive and fast blood glucose assay in vivo. Near-infrared (NIR) spectroscopy, a more promising technique compared to other methods, was investigated in rats with diabetes and normal rats. Calibration models were generated by two different multivariate strategies: partial least squares (PLS) as a linear regression method and artificial neural networks (ANN) as a non-linear regression method. The PLS model was optimized individually by considering spectral range, spectral pretreatment methods and number of model factors, while the ANN model was studied individually by selecting spectral pretreatment methods, parameters of network topology, number of hidden neurons, and number of training epochs. The results of the validation showed the two models were robust, accurate and repeatable. Compared to the ANN model, the performance of the PLS model was much better, with a lower root mean square error of prediction (RMSEP) of 0.419 and a higher correlation coefficient (R) of 96.22%.
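
    A minimal PLS calibration sketch in the same spirit (scikit-learn based, with placeholder spectra and an assumed factor count; not the authors' pipeline, which also optimized spectral range and pretreatment) is:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Placeholder data: X = preprocessed NIR spectra, y = reference glucose values.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))
y = 0.8 * X[:, 50] + rng.normal(scale=0.1, size=60)

pls = PLSRegression(n_components=5)              # factor count would be optimized
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

rmsep = np.sqrt(np.mean((y_cv - y) ** 2))        # root mean square error of prediction
r = np.corrcoef(y_cv, y)[0, 1]                   # correlation coefficient
print(f"RMSEP = {rmsep:.3f}, R = {100 * r:.2f}%")
```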

  6. Comparisons of the underlying mechanisms of left atrial remodeling after repeat circumferential pulmonary vein isolation with or without additional left atrial linear ablation in patients with recurrent atrial fibrillation.

    PubMed

    Yang, Chia-Hung; Chou, Chung-Chuan; Hung, Kuo-Chun; Wen, Ming-Shien; Chang, Po-Cheng; Wo, Hung-Ta; Lee, Cheng-Hung; Lin, Fen-Chiung

    2017-02-01

    Radiofrequency catheter ablation (RFCA) is a potentially curative treatment for atrial fibrillation (AF); however, whether additional left atrial (LA) linear ablation for recurrent AF adversely affects LA remodeling is unknown. Thirty-eight patients experiencing AF recurrence after the 1st circumferential pulmonary vein isolation (CPVI) underwent a repeat RFCA, including 20 and 18 patients receiving a repeat CPVI (group I) or CPVI plus LA linear ablation (group II), respectively. 2-D echocardiography was performed during sinus rhythm within 24 h and at 1 and 6 months after RFCA. Longitudinal strains and strain rate were measured with speckle-tracking echocardiography. The standard deviation of contraction duration was defined as LA mechanical dispersion. One and two patients experienced AF recurrence after the 2nd RFCA in groups I and II, respectively (P=NS). The 1st CPVI with AF recurrence did not reduce LA size significantly in either group. After a repeat CPVI, LA diameter but not LA maximal and minimal volume was significantly reduced in group I; additional LA linear ablation significantly decreased LA diameter, maximal and minimal volume in group II. However, there was no significant difference in LA emptying function, global and segmental LA strain and strain rate among the baseline, 1-month and 6-month follow-up in the two groups. RFCA did not significantly increase LA mechanical dispersion regardless of the AF ablation strategies. In patients with recurrent AF, a successful repeat CPVI with or without additional LA linear ablation reduced LA size without significant deleterious effects on LA function and mechanical dispersion. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Physiologically Based Pharmacokinetic Modeling of the Lactating Rat and Nursing Pup: a Multiroute Exposure Model for Trichloroethylene and its Metabolite, Trichloroacetic Acid

    DTIC Science & Technology

    1990-01-01

    …cumulated during pregnancy was described as a linear process changing from 12.0% of body weight of… …animal (allometrically scaled), was estimated by comput-… …biochemical effects of TCE in neonatal rats born to dams exposed to TCE via drinking water during pregnancy and lactation… …pregnancy (Fisher et al., 1989) were used for repeated-exposure studies during lactation. Female cesarean-derived Fischer-344 rats, obtained from Charles River Breeding…

  8. Parametrically excited non-linear multidegree-of-freedom systems with repeated natural frequencies

    NASA Astrophysics Data System (ADS)

    Tezak, E. G.; Nayfeh, A. H.; Mook, D. T.

    1982-12-01

    A method for analyzing multidegree-of-freedom systems having a repeated natural frequency subjected to a parametric excitation is presented. Attention is given to the ordering of the various terms (linear and non-linear) in the governing equations. The analysis is based on the method of multiple scales. As a numerical example involving a parametric resonance, panel flutter is discussed in detail in order to illustrate the type of results one can expect to obtain with this analysis. Some of the analytical results are verified by a numerical integration of the governing equations.

  9. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    PubMed

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
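
    A minimal sketch of a random-effects analysis of such a crossover study is shown below (the column names, file name and the simple random-intercept structure are my assumptions; FDA-style bioequivalence analyses use richer covariance structures):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data with columns: subject, sequence, period, formulation, log_auc
df = pd.read_csv("replicate_crossover_data.csv")    # hypothetical file

model = smf.mixedlm(
    "log_auc ~ C(formulation) + C(period) + C(sequence)",
    data=df,
    groups=df["subject"],            # random subject effect
)
fit = model.fit(reml=True)
print(fit.summary())

# The formulation coefficient estimates the mean test-reference difference on the
# log scale; exponentiating its 90% CI gives the usual bioequivalence bounds.
```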

  10. Deletion of internal structured repeats increases the stability of a leucine-rich repeat protein, YopM

    PubMed Central

    Barrick, Doug

    2011-01-01

    Mapping the stability distributions of proteins in their native folded states provides a critical link between structure, thermodynamics, and function. Linear repeat proteins have proven more amenable to this kind of mapping than globular proteins. C-terminal deletion studies of YopM, a large, linear leucine-rich repeat (LRR) protein, show that stability is distributed quite heterogeneously, yet a high level of cooperativity is maintained [1]. Key components of this distribution are three interfaces that strongly stabilize adjacent sequences, thereby maintaining structural integrity and promoting cooperativity. To better understand the distribution of interaction energy around these critical interfaces, we studied internal (rather than terminal) deletions of three LRRs in this region, including one of these stabilizing interfaces. Contrary to our expectation that deletion of structured repeats should be destabilizing, we find that internal deletion of folded repeats can actually stabilize the native state, suggesting that these repeats are destabilizing, although paradoxically, they are folded in the native state. We identified two residues within this destabilizing segment that deviate from the consensus sequence at a position that normally forms a stacked leucine ladder in the hydrophobic core. Replacement of these nonconsensus residues with leucine is stabilizing. This stability enhancement can be reproduced in the context of nonnative interfaces, but it requires an extended hydrophobic core. Our results demonstrate that different LRRs vary widely in their contribution to stability, and that this variation is context-dependent. These two factors are likely to determine the types of rearrangements that lead to folded, functional proteins, and in turn, are likely to restrict the pathways available for the evolution of linear repeat proteins. PMID:21764506

  11. A Case Study on the Application of a Structured Experimental Method for Optimal Parameter Design of a Complex Control System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.

  12. Modeling Information Content Via Dirichlet-Multinomial Regression Analysis.

    PubMed

    Ferrari, Alberto

    2017-01-01

    Shannon entropy is being increasingly used in biomedical research as an index of complexity and information content in sequences of symbols, e.g. languages, amino acid sequences, DNA methylation patterns and animal vocalizations. Yet, distributional properties of information entropy as a random variable have seldom been the object of study, leading researchers to rely mainly on linear models or simulation-based analytical approaches to assess differences in information content when entropy is measured repeatedly in different experimental conditions. Here a method to perform inference on entropy in such conditions is proposed. Building on results coming from studies in the field of Bayesian entropy estimation, a symmetric Dirichlet-multinomial regression model, able to deal efficiently with the issue of mean entropy estimation, is formulated. Through a simulation study the model is shown to outperform linear modeling in a vast range of scenarios and to have promising statistical properties. As a practical example, the method is applied to a data set coming from a real experiment on animal communication.
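
    The response variable in such analyses is the entropy computed from observed symbol counts; a minimal plug-in (maximum-likelihood) estimator, whose small-sample bias is part of the motivation for the Dirichlet-multinomial treatment, is sketched below.

```python
import numpy as np

def plugin_entropy(counts, base=2):
    """Plug-in (maximum-likelihood) Shannon entropy of a symbol-count vector.
    This naive estimator is biased downward for small samples, one reason to
    prefer Bayesian (e.g. Dirichlet-based) alternatives."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum() / np.log(base)

# Entropy of a four-symbol pattern (e.g. counts of DNA methylation states)
print(plugin_entropy([40, 30, 20, 10]))   # ~1.85 bits
```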

  13. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
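
    The optimization step can be illustrated with a small linear program (all coefficients, limits and variable meanings below are invented for illustration and are not PSC values): maximize a linearized performance parameter over the control trims subject to linearized constraints such as a minimum fan stall margin, then re-linearize at the new operating point and repeat.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.2, -0.4, -0.8])      # negated thrust sensitivities (linprog minimizes)
A_ub = np.array([[0.5, 0.1, 0.3]])    # stall-margin sensitivity row
b_ub = np.array([2.0])                # allowable stall-margin reduction
bounds = [(-1.0, 1.0)] * 3            # trim authority on each control variable

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)                          # optimal trims at this linearization point
# In the PSC loop such trims define a new operating point, the compact models are
# re-evaluated there, and the linear program is solved again until convergence.
```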

  14. Quantitative MRI Establishes the Efficacy of PI3K Inhibitor (GDC-0941) Multi-Treatments in PTEN-deficient Mice Lymphoma

    PubMed Central

    WULLSCHLEGER, STEPHAN; GARCÍA-MARTÍNEZ, JUAN M.; DUCE, SUZANNE L.

    2012-01-01

    Aim To assess the efficacy of multiple treatment of phosphatidylinositol-3-kinase (PI3K) inhibitor on autochthonous tumours in phosphatase and tensin homologue (Pten)-deficient genetically engineered mouse cancer models using a longitudinal magnetic resonance imaging (MRI) protocol. Materials and Methods Using 3D MRI, B-cell follicular lymphoma growth was quantified in a Pten+/−Lkb1+/hypo mouse line, before, during and after repeated treatments with a PI3K inhibitor GDC-0941 (75 mg/kg). Results Mean pre-treatment linear tumour growth rate was 16.5±12.8 mm³/week. Repeated 28-day GDC-0941 administration, with 21 days “off-treatment”, induced average tumour regression of 41±7%. Upon cessation of the second treatment (which was not permanently cytocidal), tumours re-grew with an average linear growth rate of 40.1±15.5 mm³/week. There was no evidence of chemoresistance. Conclusion This protocol can accommodate complex dosing schedules, as well as combine different cancer therapies. It reduces biological variability problems and resulted in a 10-fold reduction in mouse numbers compared with terminal assessment methods. It is ideal for preclinical efficacy studies and for phenotyping molecularly characterized mouse models when investigating gene function. PMID:22287727

  15. Short intronic repeat sequences facilitate circular RNA production.

    PubMed

    Liang, Dongming; Wilusz, Jeremy E

    2014-10-15

    Recent deep sequencing studies have revealed thousands of circular noncoding RNAs generated from protein-coding genes. These RNAs are produced when the precursor messenger RNA (pre-mRNA) splicing machinery "backsplices" and covalently joins, for example, the two ends of a single exon. However, the mechanism by which the spliceosome selects only certain exons to circularize is largely unknown. Using extensive mutagenesis of expression plasmids, we show that miniature introns containing the splice sites along with short (∼ 30- to 40-nucleotide) inverted repeats, such as Alu elements, are sufficient to allow the intervening exons to circularize in cells. The intronic repeats must base-pair to one another, thereby bringing the splice sites into close proximity to each other. More than simple thermodynamics is clearly at play, however, as not all repeats support circularization, and increasing the stability of the hairpin between the repeats can sometimes inhibit circular RNA biogenesis. The intronic repeats and exonic sequences must collaborate with one another, and a functional 3' end processing signal is required, suggesting that circularization may occur post-transcriptionally. These results suggest detailed and generalizable models that explain how the splicing machinery determines whether to produce a circular noncoding RNA or a linear mRNA. © 2014 Liang and Wilusz; Published by Cold Spring Harbor Laboratory Press.

  16. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1982-01-01

    Progress made in reducing MAGSAT data and displaying magnetic field perturbations caused primarily by external currents is reported. A periodic and repeatable perturbation pattern is described that arises from external current effects but appears as unique signatures associated with upper middle latitudes on the Earth's surface. Initial testing of the modeling procedure that was developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere and magnetosphere is also discussed. The modeling technique utilizes a linear current element representation of the large scale space current system.
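
    The basic building block of a linear-current-element representation is the Biot-Savart contribution of each element; a toy sketch of the summation (geometry, units and magnitudes are illustrative only) is shown below.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability, T*m/A

def field_from_elements(r_obs, seg_mid, seg_dl, current):
    """Biot-Savart sum over linear current elements.
    seg_mid, seg_dl: (n, 3) element midpoints and length vectors [m];
    current: (n,) element currents [A]; r_obs: (3,) observation point [m]."""
    r = r_obs - seg_mid                              # (n, 3) separation vectors
    r_mag = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 / (4 * np.pi) * current[:, None] * np.cross(seg_dl, r) / r_mag**3
    return dB.sum(axis=0)                            # total field in tesla

# Toy case: one 1 km east-west element carrying 10 kA, observed 400 km overhead
print(field_from_elements(np.array([0.0, 0.0, 4.0e5]),
                          np.array([[0.0, 0.0, 0.0]]),
                          np.array([[1.0e3, 0.0, 0.0]]),
                          np.array([1.0e4])))
```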

  17. Repeatability and consistency of individual behaviour in juvenile and adult Eurasian harvest mice

    NASA Astrophysics Data System (ADS)

    Schuster, Andrea C.; Carl, Teresa; Foerster, Katharina

    2017-04-01

    Knowledge of animal personality has provided new insights into evolutionary biology and animal ecology, as behavioural types have been shown to affect fitness. Animal personality is characterized by repeatable and consistent between-individual behavioural differences throughout time and across different situations. Behavioural repeatability within life history stages and consistency between life history stages should be checked for independence of sex and age, as recent data have shown that males and females in some species may differ in the repeatability of behavioural traits, as well as in their consistency. We measured the repeatability and consistency of three behavioural traits and one cognitive trait in juvenile and adult Eurasian harvest mice (Micromys minutus). We found that exploration, activity and boldness were repeatable in juveniles and adults. Spatial recognition measured in a Y-maze was only repeatable in adult mice. Exploration, activity and boldness were consistent before and after maturation, as well as before and after first sexual contact. Data on spatial recognition provided little evidence for consistency. Further, we found some evidence for a litter effect on behaviours by comparing different linear mixed models. We concluded that harvest mice express animal personality traits, as behaviours were repeatable across sexes and consistent across life history stages. The tested cognitive trait showed low repeatability and was less consistent across life history stages. Given the rising interest in individual variation in cognitive performance, and in its relationship to animal personality, we suggest that it is important to gather more data on the repeatability and consistency of cognitive traits.
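
    Repeatability in such studies is typically the intraclass correlation from a random-intercept mixed model, i.e. between-individual variance over total variance; the sketch below (hypothetical column and file names, not necessarily the authors' exact model, which also examined litter effects) shows the computation.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data with columns: individual, sex, age_class, exploration
df = pd.read_csv("harvest_mouse_behaviour.csv")      # hypothetical file

fit = smf.mixedlm("exploration ~ sex + age_class", data=df,
                  groups=df["individual"]).fit(reml=True)

var_between = fit.cov_re.iloc[0, 0]   # individual (random-intercept) variance
var_within = fit.scale                # residual variance
repeatability = var_between / (var_between + var_within)
print(f"adjusted repeatability R = {repeatability:.2f}")
```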

  18. Recombination-dependent replication and gene conversion homogenize repeat sequences and diversify plastid genome structure.

    PubMed

    Ruhlman, Tracey A; Zhang, Jin; Blazier, John C; Sabir, Jamal S M; Jansen, Robert K

    2017-04-01

    There is a misinterpretation in the literature regarding the variable orientation of the small single copy region of plastid genomes (plastomes). The common phenomenon of small and large single copy inversion, hypothesized to occur through intramolecular recombination between inverted repeats (IR) in a circular, single unit-genome, in fact, more likely occurs through recombination-dependent replication (RDR) of linear plastome templates. If RDR can be primed through both intra- and intermolecular recombination, then this mechanism could not only create inversion isomers of so-called single copy regions, but also an array of alternative sequence arrangements. We used Illumina paired-end and PacBio single-molecule real-time (SMRT) sequences to characterize repeat structure in the plastome of Monsonia emarginata (Geraniaceae). We used OrgConv and inspected nucleotide alignments to infer ancestral nucleotides and identify gene conversion among repeats and mapped long (>1 kb) SMRT reads against the unit-genome assembly to identify alternative sequence arrangements. Although M. emarginata lacks the canonical IR, we found that large repeats (>1 kilobase; kb) represent ∼22% of the plastome nucleotide content. Among the largest repeats (>2 kb), we identified GC-biased gene conversion and mapping filtered, long SMRT reads to the M. emarginata unit-genome assembly revealed alternative, substoichiometric sequence arrangements. We offer a model based on RDR and gene conversion between long repeated sequences in the M. emarginata plastome and provide support that both intra-and intermolecular recombination between large repeats, particularly in repeat-rich plastomes, varies unit-genome structure while homogenizing the nucleotide sequence of repeats. © 2017 Botanical Society of America.

  19. Two-Hierarchy Entanglement Swapping for a Linear Optical Quantum Repeater

    NASA Astrophysics Data System (ADS)

    Xu, Ping; Yong, Hai-Lin; Chen, Luo-Kan; Liu, Chang; Xiang, Tong; Yao, Xing-Can; Lu, He; Li, Zheng-Da; Liu, Nai-Le; Li, Li; Yang, Tao; Peng, Cheng-Zhi; Zhao, Bo; Chen, Yu-Ao; Pan, Jian-Wei

    2017-10-01

    Quantum repeaters play a significant role in achieving long-distance quantum communication. In the past decades, tremendous effort has been devoted towards constructing a quantum repeater. As one of the crucial elements, entanglement has been created in different memory systems via entanglement swapping. The realization of j-hierarchy entanglement swapping, i.e., connecting quantum memory and further extending the communication distance, is important for implementing a practical quantum repeater. Here, we report the first demonstration of a fault-tolerant two-hierarchy entanglement swapping with linear optics using parametric down-conversion sources. In the experiment, the dominant or most probable noise terms in the one-hierarchy entanglement swapping, which is on the same order of magnitude as the desired state and prevents further entanglement connections, are automatically washed out by a proper design of the detection setting, and the communication distance can be extended. Given suitable quantum memory, our techniques can be directly applied to implementing an atomic ensemble based quantum repeater, and are of significant importance in the scalable quantum information processing.

  20. Two-Hierarchy Entanglement Swapping for a Linear Optical Quantum Repeater.

    PubMed

    Xu, Ping; Yong, Hai-Lin; Chen, Luo-Kan; Liu, Chang; Xiang, Tong; Yao, Xing-Can; Lu, He; Li, Zheng-Da; Liu, Nai-Le; Li, Li; Yang, Tao; Peng, Cheng-Zhi; Zhao, Bo; Chen, Yu-Ao; Pan, Jian-Wei

    2017-10-27

    Quantum repeaters play a significant role in achieving long-distance quantum communication. In the past decades, tremendous effort has been devoted towards constructing a quantum repeater. As one of the crucial elements, entanglement has been created in different memory systems via entanglement swapping. The realization of j-hierarchy entanglement swapping, i.e., connecting quantum memory and further extending the communication distance, is important for implementing a practical quantum repeater. Here, we report the first demonstration of a fault-tolerant two-hierarchy entanglement swapping with linear optics using parametric down-conversion sources. In the experiment, the dominant or most probable noise terms in the one-hierarchy entanglement swapping, which is on the same order of magnitude as the desired state and prevents further entanglement connections, are automatically washed out by a proper design of the detection setting, and the communication distance can be extended. Given suitable quantum memory, our techniques can be directly applied to implementing an atomic ensemble based quantum repeater, and are of significant importance in the scalable quantum information processing.

  1. Timing of repetition suppression of event-related potentials to unattended objects.

    PubMed

    Stefanics, Gabor; Heinzle, Jakob; Czigler, István; Valentini, Elia; Stephan, Klaas Enno

    2018-05-26

    Current theories of object perception emphasize the automatic nature of perceptual inference. Repetition suppression (RS), the successive decrease of brain responses to repeated stimuli, is thought to reflect the optimization of perceptual inference through neural plasticity. While functional imaging studies revealed brain regions that show suppressed responses to the repeated presentation of an object, little is known about the intra-trial time course of repetition effects to everyday objects. Here we used event-related potentials (ERP) to task-irrelevant line-drawn objects, while participants engaged in a distractor task. We quantified changes in ERPs over repetitions using three general linear models (GLM) that modelled RS by an exponential, linear, or categorical "change detection" function in each subject. Our aim was to select the model with highest evidence and determine the within-trial time-course and scalp distribution of repetition effects using that model. Model comparison revealed the superiority of the exponential model indicating that repetition effects are observable for trials beyond the first repetition. Model parameter estimates revealed a sequence of RS effects in three time windows (86-140 ms, 322-360 ms, and 400-446 ms) and with occipital, temporo-parietal, and fronto-temporal distribution, respectively. An interval of repetition enhancement (RE) was also observed (320-340 ms) over occipito-temporal sensors. Our results show that automatic processing of task-irrelevant objects involves multiple intervals of RS with distinct scalp topographies. These sequential intervals of RS and RE might reflect the short-term plasticity required for optimization of perceptual inference and the associated changes in prediction errors (PE) and predictions, respectively, over stimulus repetitions during automatic object processing. This article is protected by copyright. All rights reserved. © 2018 The Authors European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
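
    The three repetition models can be pictured as alternative regressors in a per-subject GLM; the sketch below uses an illustrative parameterization of my own (decay constant, coding of the categorical model) rather than the study's exact design matrices.

```python
import numpy as np

def repetition_regressors(n_reps, tau=2.0):
    """Illustrative design matrices modelling repetition suppression as an
    exponential decay, a linear decrease, or a categorical first-vs-repeated
    change over n_reps presentations."""
    r = np.arange(n_reps)                     # 0 = first presentation
    shapes = {"exponential": np.exp(-r / tau),
              "linear": -r.astype(float),
              "categorical": (r == 0).astype(float)}
    intercept = np.ones(n_reps)
    return {name: np.column_stack([intercept, reg]) for name, reg in shapes.items()}

# Ordinary least-squares fit of one model to a channel's mean ERP amplitudes
X = repetition_regressors(8)["exponential"]
amplitudes = np.array([5.0, 3.9, 3.3, 3.0, 2.9, 2.8, 2.8, 2.7])   # toy data
beta, *_ = np.linalg.lstsq(X, amplitudes, rcond=None)
print(beta)   # [baseline, repetition-suppression effect]
```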

  2. Sensitivity Analysis of Repeat Track Estimation Techniques for Detection of Elevation Change in Polar Ice Sheets

    NASA Astrophysics Data System (ADS)

    Harpold, R. E.; Urban, T. J.; Schutz, B. E.

    2008-12-01

    Interest in elevation change detection in the polar regions has increased recently due to concern over the potential sea level rise from the melting of the polar ice caps. Repeat track analysis can be used to estimate elevation change rate by fitting elevation data to model parameters. Several aspects of this method have been tested to improve the recovery of the model parameters. Elevation data from ICESat over Antarctica and Greenland from 2003-2007 are used to test several grid sizes and types, such as grids based on latitude and longitude and grids centered on the ICESat reference groundtrack. Different sets of parameters are estimated, some of which include seasonal terms or alternate types of slopes (linear, quadratic, etc.). In addition, the effects of including crossovers and other solution constraints are evaluated. Simulated data are used to infer potential errors due to unmodeled parameters.
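
    A minimal least-squares version of such a repeat-track fit is sketched below; the parameter set and units are my own illustration (reference elevation, cross-track slope, linear elevation-change rate and optional seasonal terms).

```python
import numpy as np

def repeat_track_fit(t_yr, x_across_m, elev_m, seasonal=True):
    """Illustrative repeat-track estimation within one along-track cell:
    fit  h = h0 + s * x_across + dh_dt * t (+ annual sine/cosine)  by least squares."""
    cols = [np.ones_like(t_yr), x_across_m, t_yr]
    if seasonal:
        cols += [np.sin(2 * np.pi * t_yr), np.cos(2 * np.pi * t_yr)]
    A = np.column_stack(cols)
    params, *_ = np.linalg.lstsq(A, elev_m, rcond=None)
    return params      # [h0, cross-track slope, dh/dt in m/yr, (seasonal terms)]

rng = np.random.default_rng(1)
t = rng.uniform(0, 4, 200)                    # a multi-year span, in years
x = rng.uniform(-150, 150, 200)               # cross-track offsets, in metres
h = 100 + 0.002 * x - 0.15 * t + rng.normal(scale=0.05, size=200)
print(repeat_track_fit(t, x, h)[:3])          # recovers ~[100, 0.002, -0.15]
```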

  3. Consistency evaluation of values of weight, height, and body mass index in Food Intake and Physical Activity of School Children: the quality control of data entry in the computerized system.

    PubMed

    Jesus, Gilmar Mercês de; Assis, Maria Alice Altenburg de; Kupek, Emil; Dias, Lizziane Andrade

    2017-01-01

    The quality control of data entry in computerized questionnaires is an important step in the validation of new instruments. The study assessed the consistency of recorded weight and height on the Food Intake and Physical Activity of School Children (Web-CAAFE) between repeated measures and against directly measured data. Students from the 2nd to the 5th grade (n = 390) had their weight and height directly measured and then filled out the Web-CAAFE. A subsample (n = 92) filled out the Web-CAAFE twice, three hours apart. The analysis included hierarchical linear regression, mixed linear regression model, to evaluate the bias, and intraclass correlation coefficient (ICC), to assess consistency. Univariate linear regression assessed the effect of gender, reading/writing performance, and computer/internet use and possession on residuals of fixed and random effects. The Web-CAAFE showed high values of ICC between repeated measures (body weight = 0.996, height = 0.937, body mass index - BMI = 0.972), and regarding the checked measures (body weight = 0.962, height = 0.882, BMI = 0.828). The difference between means of body weight, height, and BMI directly measured and recorded was 208 g, -2 mm, and 0.238 kg/m², respectively, indicating slight BMI underestimation due to underestimation of weight and overestimation of height. This trend was related to body weight and age. Height and weight data entered in the Web-CAAFE by children were highly correlated with direct measurements and with the repeated entry. The bias found was similar to validation studies of self-reported weight and height in comparison to direct measurements.

  4. The Sensitivity of Arctic Ozone Loss to Polar Stratospheric Cloud Volume and Chlorine and Bromine Loading in a Chemistry and Transport Model

    NASA Technical Reports Server (NTRS)

    Douglass, A. R.; Stolarski, R. S.; Strahan, S. E.; Polansky, B. C.

    2006-01-01

    The sensitivity of Arctic ozone loss to polar stratospheric cloud volume (V_PSC) and chlorine and bromine loading is explored using chemistry and transport models (CTMs). A simulation using multi-decadal output from a general circulation model (GCM) in the Goddard Space Flight Center (GSFC) CTM complements one recycling a single year's GCM output in the Global Modeling Initiative (GMI) CTM. Winter polar ozone loss in the GSFC CTM depends on equivalent effective stratospheric chlorine (EESC) and polar vortex characteristics (temperatures, descent, isolation, polar stratospheric cloud amount). Polar ozone loss in the GMI CTM depends only on changes in EESC as the dynamics repeat annually. The GSFC CTM simulation reproduces a linear relationship between ozone loss and V_PSC derived from observations for 1992-2003, which holds for EESC within approximately 85% of its maximum (approximately 1990-2020). The GMI simulation shows that ozone loss varies linearly with EESC for constant, high V_PSC.

  5. Charge modeling of ionic polymer-metal composites for dynamic curvature sensing

    NASA Astrophysics Data System (ADS)

    Bahramzadeh, Yousef; Shahinpoor, Mohsen

    2011-04-01

    A curvature sensor based on an Ionic Polymer-Metal Composite (IPMC) is proposed and characterized for sensing curvature variation in structures such as inflatable space structures, in which a low-power, flexible curvature sensor is of high importance for dynamic monitoring of shape at desired points. The linearity of the sensor output signal (for calibration), the effect of deflection rate at low frequencies, and the phase delay between the output signal and the input deformation of the IPMC curvature sensor are investigated. An analytical chemo-electro-mechanical model for the charge dynamics of the IPMC sensor is presented, based on the Nernst-Planck partial differential equation, which can be used to explain the phenomena observed in experiments. The rate dependency of the output signal and the phase delay between the applied deformation and the sensor signal are studied using the proposed model. The model provides a basis for predicting the general characteristics of the IPMC sensor. It is shown that the IPMC sensor exhibits good linearity, sensitivity, and repeatability for dynamic curvature sensing of inflatable structures.
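
    For reference, a standard diffusion-migration form of the Nernst-Planck description that such charge-dynamics models build on is given below (convection, pressure coupling and the mechanical part of the full chemo-electro-mechanical model are omitted here).

```latex
% C = cation concentration, D = diffusivity, z = charge number, mu = mobility,
% F = Faraday constant, phi = electric potential.
\[
  \mathbf{J} \;=\; -\,D\,\nabla C \;-\; z\,\mu\,F\,C\,\nabla\phi,
  \qquad
  \frac{\partial C}{\partial t} \;=\; -\,\nabla\!\cdot\mathbf{J}
  \;=\; \nabla\!\cdot\!\bigl(D\,\nabla C + z\,\mu\,F\,C\,\nabla\phi\bigr).
\]
```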

  6. The French press: a repeatable and high-throughput approach to exercising zebrafish (Danio rerio).

    PubMed

    Usui, Takuji; Noble, Daniel W A; O'Dea, Rose E; Fangmeier, Melissa L; Lagisz, Malgorzata; Hesselson, Daniel; Nakagawa, Shinichi

    2018-01-01

    Zebrafish are increasingly used as a vertebrate model organism for various traits including swimming performance, obesity and metabolism, necessitating high-throughput protocols to generate standardized phenotypic information. Here, we propose a novel and cost-effective method for exercising zebrafish, using a coffee plunger and magnetic stirrer. To demonstrate the use of this method, we conducted a pilot experiment to show that this simple system provides repeatable estimates of maximal swim performance (intra-class correlation [ICC] = 0.34-0.41) and observe that exercise training of zebrafish on this system significantly increases their maximum swimming speed. We propose this high-throughput and reproducible system as an alternative to traditional linear chamber systems for exercising zebrafish and similarly sized fishes.

  7. The French press: a repeatable and high-throughput approach to exercising zebrafish (Danio rerio)

    PubMed Central

    Usui, Takuji; Noble, Daniel W.A.; O’Dea, Rose E.; Fangmeier, Melissa L.; Lagisz, Malgorzata; Hesselson, Daniel

    2018-01-01

    Zebrafish are increasingly used as a vertebrate model organism for various traits including swimming performance, obesity and metabolism, necessitating high-throughput protocols to generate standardized phenotypic information. Here, we propose a novel and cost-effective method for exercising zebrafish, using a coffee plunger and magnetic stirrer. To demonstrate the use of this method, we conducted a pilot experiment to show that this simple system provides repeatable estimates of maximal swim performance (intra-class correlation [ICC] = 0.34–0.41) and observe that exercise training of zebrafish on this system significantly increases their maximum swimming speed. We propose this high-throughput and reproducible system as an alternative to traditional linear chamber systems for exercising zebrafish and similarly sized fishes. PMID:29372124

  8. Exact and near backscattering measurements of the linear depolarisation ratio of various ice crystal habits generated in a laboratory cloud chamber

    NASA Astrophysics Data System (ADS)

    Smith, Helen R.; Connolly, Paul J.; Webb, Ann R.; Baran, Anthony J.

    2016-07-01

    Ice clouds were generated in the Manchester Ice Cloud Chamber (MICC), and the backscattering linear depolarisation ratio, δ, was measured for a variety of habits. To create an assortment of particle morphologies, the humidity in the chamber was varied throughout each experiment, resulting in a range of habits from the pristine to the complex. This technique was repeated at three temperatures: -7 °C, -15 °C and -30 °C, in order to produce both solid and hollow columns, plates, sectored plates and dendrites. A linearly polarised 532 nm continuous wave diode laser was directed through a section of the cloud using a non-polarising 50:50 beam splitter. Measurements of the scattered light were taken at 178°, 179° and 180°, using a Glan-Taylor prism to separate the co- and cross-polarised components. The intensities of these components were measured using two amplified photodetectors and the ratio of the cross- to co-polarised intensities was measured to find the linear depolarisation ratio. In general, it was found that Ray Tracing over-predicts the linear depolarisation ratio. However, by creating more accurate particle models which better represent the internal structure of ice particles, discrepancies between measured and modelled results (based on Ray Tracing) were reduced.

  9. Determining vehicle operating speed and lateral position along horizontal curves using linear mixed-effects models.

    PubMed

    Fitzsimmons, Eric J; Kvam, Vanessa; Souleyrette, Reginald R; Nambisan, Shashi S; Bonett, Douglas G

    2013-01-01

    Despite recent improvements in highway safety in the United States, serious crashes on curves remain a significant problem. To assist in better understanding causal factors leading to this problem, this article presents and demonstrates a methodology for collection and analysis of vehicle trajectory and speed data for rural and urban curves using Z-configured road tubes. For a large number of vehicle observations at 2 horizontal curves located in Dexter and Ames, Iowa, the article develops vehicle speed and lateral position prediction models for multiple points along these curves. Linear mixed-effects models were used to predict vehicle lateral position and speed along the curves as explained by operational, vehicle, and environmental variables. Behavior was visually represented for an identified subset of "risky" drivers. Linear mixed-effect regression models provided the means to predict vehicle speed and lateral position while taking into account repeated observations of the same vehicle along horizontal curves. Speed and lateral position at point of entry were observed to influence trajectory and speed profiles. Rural horizontal curve site models are presented that indicate that the following variables were significant and influenced both vehicle speed and lateral position: time of day, direction of travel (inside or outside lane), and type of vehicle.

  10. A primer for biomedical scientists on how to execute model II linear regression analysis.

    PubMed

    Ludbrook, John

    2012-04-01

    1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
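
    The OLP point estimates themselves reduce to calculator-friendly formulas (slope = sign(r)·s_y/s_x, intercept from the means); the sketch below implements them, leaving confidence intervals to bootstrapping as the article recommends.

```python
import numpy as np

def olp_regression(x, y):
    """Ordinary least products (geometric mean / reduced major axis) regression:
    slope = sign(r) * sd(y) / sd(x), intercept = mean(y) - slope * mean(x).
    Confidence intervals are best obtained by bootstrapping (e.g. as in smatr)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Two methods measuring the same quantity, both subject to error (a Model II setting)
x = np.array([1.0, 2.1, 2.9, 4.2, 5.1])
y = np.array([1.2, 2.0, 3.3, 4.0, 5.4])
print(olp_regression(x, y))
```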

  11. Smart Materials and Structures-Smart Wing. Volumes 1, 2, 3 and 4

    DTIC Science & Technology

    1998-12-01

    repeatable fashion when heat is applied. Therefore, once the pre-twist is successfully applied and the tube is installed in the model, heating the… modules were operated and calibrated online by the PSI 8400 Control System. Because the transducer modules are extremely sensitive to temperature, a… again substantiates that adaptive features tend to support each other, though not necessarily in a completely linear fashion, and essentially provide a

  12. Measurement system analysis of viscometers used for drilling mud characterization

    NASA Astrophysics Data System (ADS)

    Mat-Shayuti, M. S.; Adzhar, S. N.

    2017-07-01

    Viscometers in the Faculty of Chemical Engineering, Universiti Teknologi MARA, are subject to heavy utilization by the members of the faculty. Due to doubts surrounding their result integrity and maintenance management, a Measurement System Analysis was executed. Five samples of drilling mud with barite content varied from 5 to 25 weight% were prepared and their rheological properties determined in 3 trials by 3 operators using the viscometers. A Gage Linearity and Bias Study was performed using Minitab software; the results show high biases in the range of 19.2% to 38.7%, with a non-linear trend along the span of measurements. A Gage Repeatability & Reproducibility (Nested) analysis later produced Percent Repeatability & Reproducibility above 7.7% and Percent Tolerance above 30%. Lastly, both good and marginal Distinct Categories outputs are seen among the results. Despite acceptable performance of the measurement system in Distinct Categories, the poor results in accuracy, linearity, and Percent Repeatability & Reproducibility render the gage generally not capable. Improvement of the measurement system is therefore needed.

  13. Linearity, Bias, and Precision of Hepatic Proton Density Fat Fraction Measurements by Using MR Imaging: A Meta-Analysis.

    PubMed

    Yokoo, Takeshi; Serai, Suraj D; Pirasteh, Ali; Bashir, Mustafa R; Hamilton, Gavin; Hernando, Diego; Hu, Houchun H; Hetterich, Holger; Kühn, Jens-Peter; Kukuk, Guido M; Loomba, Rohit; Middleton, Michael S; Obuchowski, Nancy A; Song, Ji Soo; Tang, An; Wu, Xinhuai; Reeder, Scott B; Sirlin, Claude B

    2018-02-01

    Purpose To determine the linearity, bias, and precision of hepatic proton density fat fraction (PDFF) measurements by using magnetic resonance (MR) imaging across different field strengths, imager manufacturers, and reconstruction methods. Materials and Methods This meta-analysis was performed in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A systematic literature search identified studies that evaluated the linearity and/or bias of hepatic PDFF measurements by using MR imaging (hereafter, MR imaging-PDFF) against PDFF measurements by using colocalized MR spectroscopy (hereafter, MR spectroscopy-PDFF) or the precision of MR imaging-PDFF. The quality of each study was evaluated by using the Quality Assessment of Studies of Diagnostic Accuracy 2 tool. De-identified original data sets from the selected studies were pooled. Linearity was evaluated by using linear regression between MR imaging-PDFF and MR spectroscopy-PDFF measurements. Bias, defined as the mean difference between MR imaging-PDFF and MR spectroscopy-PDFF measurements, was evaluated by using Bland-Altman analysis. Precision, defined as the agreement between repeated MR imaging-PDFF measurements, was evaluated by using a linear mixed-effects model, with field strength, imager manufacturer, reconstruction method, and region of interest as random effects. Results Twenty-three studies (1679 participants) were selected for linearity and bias analyses and 11 studies (425 participants) were selected for precision analyses. MR imaging-PDFF was linear with MR spectroscopy-PDFF (R² = 0.96). Regression slope (0.97; P < .001) and mean Bland-Altman bias (-0.13%; 95% limits of agreement: -3.95%, 3.40%) indicated minimal underestimation by using MR imaging-PDFF. MR imaging-PDFF was precise at the region-of-interest level, with repeatability and reproducibility coefficients of 2.99% and 4.12%, respectively. Field strength, imager manufacturer, and reconstruction method each had minimal effects on reproducibility. Conclusion MR imaging-PDFF has excellent linearity, bias, and precision across different field strengths, imager manufacturers, and reconstruction methods. © RSNA, 2017 Online supplemental material is available for this article. An earlier incorrect version of this article appeared online. This article was corrected on October 2, 2017.

  14. Versatile communication strategies among tandem WW domain repeats

    PubMed Central

    Dodson, Emma Joy; Fishbain-Yoskovitz, Vered; Rotem-Bamberger, Shahar

    2015-01-01

    Interactions mediated by short linear motifs in proteins play major roles in regulation of cellular homeostasis since their transient nature allows for easy modulation. We are still far from a full understanding and appreciation of the complex regulation patterns that can be, and are, achieved by this type of interaction. The fact that many linear-motif-binding domains occur in tandem repeats in proteins indicates that their mutual communication is used extensively to obtain complex integration of information toward regulatory decisions. This review is an attempt to overview, and classify, different ways by which two and more tandem repeats cooperate in binding to their targets, in the well-characterized family of WW domains and their corresponding polyproline ligands. PMID:25710931

  15. A Kinematic Calibration Process for Flight Robotic Arms

    NASA Technical Reports Server (NTRS)

    Collins, Curtis L.; Robinson, Matthew L.

    2013-01-01

    The Mars Science Laboratory (MSL) robotic arm is ten times more massive than any Mars robotic arm before it, yet with similar accuracy and repeatability positioning requirements. In order to assess and validate these requirements, a higher-fidelity model and calibration processes were needed. Kinematic calibration of robotic arms is a common and necessary process to ensure good positioning performance. Most methodologies assume a rigid arm, high-accuracy data collection, and some kind of optimization of kinematic parameters. A new detailed kinematic and deflection model of the MSL robotic arm was formulated in the design phase and used to update the initial positioning and orientation accuracy and repeatability requirements. This model included a higher-fidelity link stiffness matrix representation, as well as a link level thermal expansion model. In addition, it included an actuator backlash model. Analytical results highlighted the sensitivity of the arm accuracy to its joint initialization methodology. Because of this, a new technique for initializing the arm joint encoders through hardstop calibration was developed. This involved selecting arm configurations to use in Earth-based hardstop calibration that had corresponding configurations on Mars with the same joint torque to ensure repeatability in the different gravity environment. The process used to collect calibration data for the arm included the use of multiple weight stand-in turrets with enough metrology targets to reconstruct the full six-degree-of-freedom location of the rover and tool frames. The follow-on data processing of the metrology data utilized a standard differential formulation and linear parameter optimization technique.
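
    The "standard differential formulation and linear parameter optimization" mentioned above amounts to repeatedly linearizing the forward kinematics around the current parameter estimate and solving a least-squares problem. The sketch below does this for a toy two-link planar arm whose link lengths are refined from noisy end-effector measurements; the arm, parameter values, and data are invented and far simpler than the MSL arm model.

      import numpy as np

      def fk(params, q):
          """Forward kinematics of a planar 2-link arm: params = [L1, L2], q = (q1, q2)."""
          L1, L2 = params
          return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                           L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

      rng = np.random.default_rng(2)
      true_params = np.array([1.02, 0.73])                   # "real" link lengths (unknown in practice)
      joints = rng.uniform(-np.pi, np.pi, size=(30, 2))      # calibration poses
      meas = np.array([fk(true_params, q) for q in joints])  # metrology measurements
      meas += rng.normal(0, 1e-3, meas.shape)                # measurement noise

      params = np.array([1.0, 0.7])                          # nominal (design) values
      for _ in range(10):                                    # Gauss-Newton iterations
          pred = np.array([fk(params, q) for q in joints])
          resid = (meas - pred).ravel()
          J = np.zeros((resid.size, params.size))            # finite-difference Jacobian
          eps = 1e-6
          for j in range(params.size):
              dp = np.zeros_like(params)
              dp[j] = eps
              pred_plus = np.array([fk(params + dp, q) for q in joints])
              J[:, j] = ((pred_plus - pred) / eps).ravel()
          delta, *_ = np.linalg.lstsq(J, resid, rcond=None)
          params = params + delta

      print("calibrated link lengths:", params)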

  16. Readout models for BaFBr0.85I0.15:Eu image plates

    NASA Astrophysics Data System (ADS)

    Stoeckl, M.; Solodov, A. A.

    2018-06-01

    The linearity of the photostimulated luminescence process makes repeated image-plate scanning a viable technique for extracting a wider dynamic range. To obtain a response estimate, two semi-empirical models for the readout fading of an image plate are introduced; they relate the depth distribution of activated photostimulated luminescence centers within an image plate to the recorded signal. Model parameters are estimated from image-plate scan series acquired with BAS-MS image plates and the Typhoon FLA 7000 scanner for the hard x-ray image-plate diagnostic, over a collection of experiments providing x-ray energy spectra whose approximate shape is a double exponential.

  17. Modelling variability in lymphatic filariasis: macrofilarial dynamics in the Brugia pahangi--cat model.

    PubMed

    Michael, E; Grenfell, B T; Isham, V S; Denham, D A; Bundy, D A

    1998-01-22

    A striking feature of lymphatic filariasis is the considerable heterogeneity in infection burden observed between hosts, which greatly complicates the analysis of the population dynamics of the disease. Here, we describe the first application of the moment closure equation approach to model the sources and the impact of this heterogeneity for macrofilarial population dynamics. The analysis is based on the closest laboratory equivalent of the life cycle and immunology of infection in humans--cats chronically infected with the filarial nematode Brugia pahangi. Two sets of long-term experiments are analysed: hosts given either single primary infections or repeat infections. We begin by quantifying changes in the mean and aggregation of adult parasites (the latter inversely measured by the negative binomial parameter, kappa) in cohorts of hosts using generalized linear models. We then apply simple stochastic models to interpret observed patterns. The models and empirical data indicate that parasite aggregation tracks the decline in the mean burden with host age in primary infections. Conversely, in repeat infections, aggregation increases as the worm burden declines with experience of infection. The results show that the primary infection variability is consistent with heterogeneities in parasite survival between hosts. By contrast, the models indicate that the reduction in parasite variability with time in repeat infections is most likely due to the 'filtering' effect of a strong, acquired immune response, which gradually acts to remove the initial variability generated by heterogeneities in larval mortality. We discuss this result in terms of the homogenizing effect of host immunity-driven density-dependence on macrofilarial burden in older hosts.
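
    A quick way to quantify the kind of aggregation analysed above is the moment estimator of the negative binomial parameter k; the sketch below applies it to made-up adult worm counts.

      import numpy as np

      counts = np.array([0, 2, 3, 1, 0, 15, 4, 7, 0, 1, 22, 3, 5, 0, 9])  # hypothetical worm burdens
      m, v = counts.mean(), counts.var(ddof=1)

      # For a negative binomial distribution, var = mean + mean^2 / k,
      # so the moment estimate is k = mean^2 / (var - mean); small k means strong aggregation.
      k = m**2 / (v - m) if v > m else np.inf
      print(f"mean = {m:.2f}, variance = {v:.2f}, k = {k:.2f}")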

  18. Automated computation of femoral angles in dogs from three-dimensional computed tomography reconstructions: Comparison with manual techniques.

    PubMed

    Longo, F; Nicetto, T; Banzato, T; Savio, G; Drigo, M; Meneghello, R; Concheri, G; Isola, M

    2018-02-01

    The aim of this ex vivo study was to test a novel three-dimensional (3D) automated computer-aided design (CAD) method (aCAD) for the computation of femoral angles in dogs from 3D reconstructions of computed tomography (CT) images. The repeatability and reproducibility of manual radiography, manual measurements on CT reconstructions, and the aCAD method were evaluated for the measurement of three femoral angles: (1) anatomical lateral distal femoral angle (aLDFA); (2) femoral neck angle (FNA); and (3) femoral torsion angle (FTA). Femoral angles of 22 femurs obtained from 16 cadavers were measured by three blinded observers. Measurements were repeated three times by each observer for each diagnostic technique. Femoral angle measurements were analysed using a mixed effects linear model for repeated measures to determine the levels of intra-observer agreement (repeatability) and inter-observer agreement (reproducibility). Repeatability and reproducibility of measurements using the aCAD method were excellent (intra-class correlation coefficients, ICCs≥0.98) for all three angles assessed. Manual radiography and CT exhibited excellent agreement for the aLDFA measurement (ICCs≥0.90). However, FNA repeatability and reproducibility were poor (ICCs<0.8), whereas FTA measurement showed slightly higher ICC values, except for the radiographic reproducibility, which was poor (ICCs<0.8). The computation of the 3D aCAD method provided the highest repeatability and reproducibility among the tested methodologies. Copyright © 2017 Elsevier Ltd. All rights reserved.
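
    For a single observer repeating the same measurement on each specimen, intra-observer agreement of the kind reported above can be approximated by the one-way ANOVA intraclass correlation ICC(1,1). The sketch below uses fabricated angle measurements and is a simplification of the mixed-effects model actually used in the study.

      import numpy as np

      # angles[i, j] = j-th repeated measurement of an angle on femur i (degrees, fabricated)
      rng = np.random.default_rng(3)
      true_angle = rng.normal(97, 3, size=22)
      angles = true_angle[:, None] + rng.normal(0, 0.5, (22, 3))

      n, k = angles.shape
      grand = angles.mean()
      ms_between = k * np.sum((angles.mean(axis=1) - grand) ** 2) / (n - 1)
      ms_within = np.sum((angles - angles.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))

      icc_1_1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
      print(f"ICC(1,1) = {icc_1_1:.3f}")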

  19. The Ability of American Football Helmets to Manage Linear Acceleration With Repeated High-Energy Impacts.

    PubMed

    Cournoyer, Janie; Post, Andrew; Rousseau, Philippe; Hoshizaki, Blaine

    2016-03-01

    Football players can receive up to 1400 head impacts per season, averaging 6.3 impacts per practice and 14.3 impacts per game. A decrease in the capacity of a helmet to manage linear acceleration with multiple impacts could increase the risk of traumatic brain injury. To investigate the ability of football helmets to manage linear acceleration with multiple high-energy impacts. Descriptive laboratory study. Laboratory. We collected linear-acceleration data for 100 impacts at 6 locations on 4 helmets of different models currently used in football. Impacts 11 to 20 were compared with impacts 91 to 100 for each of the 6 locations. Linear acceleration was greater after multiple impacts (91-100) than after the first few impacts (11-20) for the front, front-boss, rear, and top locations. However, these differences are not clinically relevant as they do not affect the risk for head injury. American football helmet performance deteriorated with multiple impacts, but this is unlikely to be a factor in head-injury causation during a game or over a season.

  20. 3D Geometrical Inspection of Complex Geometry Parts Using a Novel Laser Triangulation Sensor and a Robot

    PubMed Central

    Brosed, Francisco Javier; Aguilar, Juan José; Guillomía, David; Santolaria, Jorge

    2011-01-01

    This article discusses different non-contact 3D measuring strategies and presents a model for measuring complex geometry parts, manipulated through a robot arm, using a novel vision system consisting of a laser triangulation sensor and a motorized linear stage. First, the geometric model, incorporating a simple automatic module for long-term stability improvement, will be outlined in the article. The new method used in the automatic module allows the sensor, including the motorized linear stage, to be set up for scanning without external measurement devices. In the measurement model, the robot acts only as a positioner of parts with high repeatability. Its position and orientation data are not used for the measurement and therefore it is not directly “coupled” as an active component in the model. The function of the robot is to present the various surfaces of the workpiece along the measurement range of the vision system, which is responsible for the measurement. Thus, the whole system is not affected by the robot's own errors in following a trajectory, except those due to the lack of static repeatability. For the indirect link between the vision system and the robot, the original model developed needs only the measurement of one first piece as a “zero” or master piece, characterized accurately beforehand using, for example, a Coordinate Measuring Machine. The strategy proposed presents a different approach to traditional laser triangulation systems on board the robot in order to improve the measurement accuracy, and several important cues for self-recalibration are explored using only a master piece. Experimental results are also presented to demonstrate the technique and the final 3D measurement accuracy. PMID:22346569

  1. Non-linear dynamical classification of short time series of the rössler system in high noise regimes.

    PubMed

    Lainscsek, Claudia; Weyhenmeyer, Jonathan; Hernandez, Manuel E; Poizner, Howard; Sejnowski, Terrence J

    2013-01-01

    Time series analysis with delay differential equations (DDEs) reveals non-linear properties of the underlying dynamical system and can serve as a non-linear time-domain classification tool. Here global DDE models were used to analyze short segments of simulated time series from a known dynamical system, the Rössler system, in high noise regimes. In a companion paper, we apply the DDE model developed here to classify short segments of encephalographic (EEG) data recorded from patients with Parkinson's disease and healthy subjects. Nine simulated subjects in each of two distinct classes were generated by varying the bifurcation parameter b and keeping the other two parameters (a and c) of the Rössler system fixed. All choices of b were in the chaotic parameter range. We diluted the simulated data using white noise ranging from 10 to -30 dB signal-to-noise ratios (SNR). Structure selection was supervised by selecting the number of terms, delays, and order of non-linearity of the DDE model that best linearly separated the two classes of data. The distances d from the linear dividing hyperplane were then used to assess the classification performance by computing the area A' under the ROC curve. The selected model was tested on untrained data using repeated random sub-sampling validation. DDEs were able to accurately distinguish the two dynamical conditions, and moreover, to quantify the changes in the dynamics. There was a significant correlation between the dynamical bifurcation parameter b of the simulated data and the classification parameter d from our analysis. This correlation still held for new simulated subjects with new dynamical parameters selected from each of the two dynamical regimes. Furthermore, the correlation was robust to added noise, being significant even when the noise was greater than the signal. We conclude that DDE models may be used as a generalizable and reliable classification tool for even small segments of noisy data.
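
    The pipeline described above (fit a delay model to each segment, separate the classes linearly, and score the separation with the area under the ROC curve) can be prototyped as follows. The delay form, delays, Rössler parameter values, and in-sample evaluation are simplifying assumptions for illustration, not the structure selection or cross-validation procedure of the paper.

      import numpy as np
      from scipy.integrate import solve_ivp
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import LinearSVC
      from sklearn.metrics import roc_auc_score

      def rossler(t, s, a, b, c):
          x, y, z = s
          return [-y - z, x + a * y, b + z * (x - c)]

      def segment(b, rng, n=400, dt=0.05, snr_db=0.0):
          """One short, noisy x(t) segment of the Rössler system with bifurcation parameter b."""
          t_eval = np.arange(n) * dt
          y0 = np.array([0.0, -5.0, 0.0]) + rng.normal(0, 0.5, 3)
          sol = solve_ivp(rossler, (0, t_eval[-1]), y0, t_eval=t_eval, args=(0.2, b, 5.7), rtol=1e-6)
          x = sol.y[0]
          return x + rng.normal(0, x.std() / 10 ** (snr_db / 20), x.size)  # noise at the given SNR

      def dde_features(x, d1=5, d2=15):
          """Least-squares coefficients of dx/dt ~ a1*x(t-d1) + a2*x(t-d2) + a3*x(t-d1)*x(t-d2)."""
          dxdt = np.gradient(x)
          i = np.arange(max(d1, d2), len(x))
          A = np.column_stack([x[i - d1], x[i - d2], x[i - d1] * x[i - d2]])
          coef, *_ = np.linalg.lstsq(A, dxdt[i], rcond=None)
          return coef

      rng = np.random.default_rng(4)
      X, y = [], []
      for label, b in enumerate([0.2, 0.4]):            # two classes of dynamics (assumed b values)
          for _ in range(40):
              X.append(dde_features(segment(b, rng)))
              y.append(label)
      X, y = np.array(X), np.array(y)

      clf = make_pipeline(StandardScaler(), LinearSVC(dual=False)).fit(X, y)
      d = clf.decision_function(X)                      # signed distance from the separating hyperplane
      print("A' (area under the ROC curve, in-sample):", roc_auc_score(y, d))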

  3. Non-linearity of the collagen triple helix in solution and implications for collagen function.

    PubMed

    Walker, Kenneth T; Nan, Ruodan; Wright, David W; Gor, Jayesh; Bishop, Anthony C; Makhatadze, George I; Brodsky, Barbara; Perkins, Stephen J

    2017-06-16

    Collagen adopts a characteristic supercoiled triple helical conformation which requires a repeating (Xaa-Yaa-Gly)n sequence. Despite the abundance of collagen, a combined experimental and atomistic modelling approach has not so far quantitated the degree of flexibility seen experimentally in the solution structures of collagen triple helices. To address this question, we report an experimental study on the flexibility of varying lengths of collagen triple helical peptides, composed of six, eight, ten and twelve repeats of the most stable Pro-Hyp-Gly (POG) units. In addition, one unblocked peptide, (POG)10unblocked, was compared with the blocked (POG)10 as a control for the significance of end effects. Complementary analytical ultracentrifugation and synchrotron small angle X-ray scattering data showed that the conformations of the longer triple helical peptides were not well explained by a linear structure derived from crystallography. To interpret these data, molecular dynamics simulations were used to generate 50 000 physically realistic collagen structures for each of the helices. These structures were fitted against their respective scattering data to reveal the best fitting structures from this large ensemble of possible helix structures. This curve fitting confirmed a small degree of non-linearity to exist in these best fit triple helices, with the degree of bending approximated as 4-17° from linearity. Our results open the way for further studies of other collagen triple helices with different sequences and stabilities in order to clarify the role of molecular rigidity and flexibility in collagen extracellular and immune function and disease. © 2017 The Author(s).

  4. Methodological quality and reporting of generalized linear mixed models in clinical medicine (2000-2012): a systematic review.

    PubMed

    Casals, Martí; Girabent-Farrés, Montserrat; Carrasco, Josep L

    2014-01-01

    Modeling count and binary data collected in hierarchical designs has increased the use of Generalized Linear Mixed Models (GLMMs) in medicine. This article presents a systematic review of the application and quality of results and information reported from GLMMs in the field of clinical medicine. A search using the Web of Science database was performed for published original articles in medical journals from 2000 to 2012. The search strategy included the topics "generalized linear mixed models", "hierarchical generalized linear models", and "multilevel generalized linear model", and the results were refined to the science technology research domain. Papers reporting methodological considerations without application, those not involving clinical medicine, and those not written in English were excluded. A total of 443 articles were detected, with an increase over time in the number of articles. In total, 108 articles fit the inclusion criteria. Of these, 54.6% were declared to be longitudinal studies, whereas 58.3% and 26.9% were defined as repeated measurements and multilevel design, respectively. Twenty-two articles belonged to environmental and occupational public health, 10 articles to clinical neurology, 8 to oncology, and 7 to infectious diseases and pediatrics. The distribution of the response variable was reported in 88% of the articles, predominantly Binomial (n = 64) or Poisson (n = 22). Much of the information needed to evaluate the GLMMs was not reported in most cases. Variance estimates of random effects were described in only 8 articles (9.2%). The model validation, the method of covariate selection and the method of goodness of fit were only reported in 8.0%, 36.8% and 14.9% of the articles, respectively. During recent years, the use of GLMMs in medical literature has increased to take into account the correlation of data when modeling qualitative data or counts. According to the current recommendations, the quality of reporting has room for improvement regarding the characteristics of the analysis, estimation method, validation, and selection of the model.

  5. Nonlinear Growth Curves in Developmental Research

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam; Hamagami, Fumiaki

    2011-01-01

    Developmentalists are often interested in understanding change processes and growth models are the most common analytic tool for examining such processes. Nonlinear growth curves are especially valuable to developmentalists because the defining characteristics of the growth process such as initial levels, rates of change during growth spurts, and asymptotic levels can be estimated. A variety of growth models are described beginning with the linear growth model and moving to nonlinear models of varying complexity. A detailed discussion of nonlinear models is provided, highlighting the added insights into complex developmental processes associated with their use. A collection of growth models are fit to repeated measures of height from participants of the Berkeley Growth and Guidance Studies from early childhood through adulthood. PMID:21824131
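
    As a concrete example of one nonlinear model from this family, the sketch below fits a Gompertz curve to made-up height measurements with scipy; the asymptote and rate parameters correspond to the "defining characteristics" (asymptotic level, timing and rate of the growth spurt) mentioned above.

      import numpy as np
      from scipy.optimize import curve_fit

      def gompertz(t, asymptote, b, c):
          """Gompertz growth curve: rises toward `asymptote`, with inflection at t = b / c."""
          return asymptote * np.exp(-np.exp(b - c * t))

      # Fabricated repeated height measurements (cm) from age 2 to 18 years
      age = np.linspace(2, 18, 12)
      rng = np.random.default_rng(5)
      height = gompertz(age, 175, 0.6, 0.35) + rng.normal(0, 2, age.size)

      params, cov = curve_fit(gompertz, age, height, p0=[170, 0.5, 0.3])
      print("asymptotic height, b, c:", params)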

  6. Tensorial Calibration. 2. Second Order Tensorial Calibration.

    DTIC Science & Technology

    1987-10-12

    index is repeated more than once on only one side of an equation, it implies a summation over the valid range of that index. To avoid confusion of terms... and higher order tensors, the rank can be higher than the maximum dimensionality. LINEAR SECOND ORDER TENSORIAL CALIBRATION MODEL. From... these equations are valid only if all the elements of the diagonal matrix B3 are non-zero, because its inverse must be computed. This implies that M

  7. Genomic selection for slaughter age in pigs using the Cox frailty model.

    PubMed

    Santos, V S; Martins Filho, S; Resende, M D V; Azevedo, C F; Lopes, P S; Guimarães, S E F; Glória, L S; Silva, F F

    2015-10-19

    The aim of this study was to compare genomic selection methodologies using a linear mixed model and the Cox survival model. We used data from an F2 population of pigs, in which the response variable was the time in days from birth to the culling of the animal and the covariates were 238 markers [237 single nucleotide polymorphisms (SNPs) plus the halothane gene]. The data were corrected for fixed effects, and the accuracy of the method was determined based on the correlation of the ranks of predicted genomic breeding values (GBVs) in both models with the corrected phenotypic values. The analysis was repeated with a subset of SNP markers with the largest absolute effects. For uncensored, normally distributed data, the two models agreed in GBV prediction and in the estimation of marker effects. However, when considering censored data, the Cox model with a normal random effect (S1) was more appropriate. Since the linear mixed model applied to imputed data (L2) showed no such agreement for the prediction of genomic values and the estimation of marker effects, model S1 was considered superior, as it took the latent variable and the censored data into account. Marker selection increased correlations between the ranks of predicted GBVs by the linear and Cox frailty models and the corrected phenotypic values, and 120 markers were required to increase the predictive ability for the characteristic analyzed.
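
    A rough Python analogue of the survival side of this comparison is sketched below with the lifelines library (the study itself did not use Python): a penalized Cox model is fitted to simulated genotypes and censored culling times, and the ranking it induces is compared with the observed times. Marker count, effect sizes, and censoring are all invented.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter
      from scipy.stats import spearmanr

      rng = np.random.default_rng(6)
      n, m = 300, 20                                     # animals and SNP markers (far fewer than 238)
      snps = rng.integers(0, 3, size=(n, m))             # genotypes coded 0/1/2
      risk = snps @ rng.normal(0, 0.15, m)               # simulated genetic risk of early culling
      cols = [f"snp{j}" for j in range(m)]

      df = pd.DataFrame(snps, columns=cols)
      df["time"] = rng.exponential(150 * np.exp(-risk))  # days from birth to culling
      df["event"] = (rng.random(n) < 0.8).astype(int)    # about 20% of records treated as censored

      cph = CoxPHFitter(penalizer=0.1)                   # ridge penalty keeps many markers estimable
      cph.fit(df, duration_col="time", event_col="event")
      hazard = cph.predict_partial_hazard(df[cols])      # higher hazard = earlier predicted culling
      rho, _ = spearmanr(-hazard, df["time"])            # rank agreement with the observed times
      print(f"Spearman correlation = {rho:.2f}")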

  8. Hierarchical Bayes approach for subgroup analysis.

    PubMed

    Hsu, Yu-Yi; Zalkikar, Jyoti; Tiwari, Ram C

    2017-01-01

    In clinical data analysis, both treatment effect estimation and consistency assessment are important for a better understanding of the drug efficacy for the benefit of subjects in individual subgroups. The linear mixed-effects model has been used for subgroup analysis to describe treatment differences among subgroups with great flexibility. The hierarchical Bayes approach has been applied to linear mixed-effects model to derive the posterior distributions of overall and subgroup treatment effects. In this article, we discuss the prior selection for variance components in hierarchical Bayes, estimation and decision making of the overall treatment effect, as well as consistency assessment of the treatment effects across the subgroups based on the posterior predictive p-value. Decision procedures are suggested using either the posterior probability or the Bayes factor. These decision procedures and their properties are illustrated using a simulated example with normally distributed response and repeated measurements.

  9. [Radiotherapy and chaos theory: the tit bird and the butterfly...].

    PubMed

    Denis, F; Letellier, C

    2012-09-01

    Although the same simple laws govern cancer outcome (cell division repeated again and again), each tumour has a different outcome before as well as after irradiation therapy. The linear-quadratic radiosensitivity model allows an assessment of tumor sensitivity to radiotherapy. This model presents some limitations in clinical practice because it does not take into account the interactions between tumour cells and non-tumoral bystander cells (such as endothelial cells, fibroblasts, immune cells...) that modulate radiosensitivity and tumor growth dynamics. These interactions can lead to non-linear and complex tumour growth that appears to be random but is not, since spontaneous tumour regressions are in fact rare. In this paper we propose to develop a deterministic approach for tumour growth dynamics using chaos theory. Various characteristics of cancer dynamics and tumor radiosensitivity can be explained using mathematical models of competing cell species. Copyright © 2012 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
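
    For reference, the linear-quadratic model referred to above gives the surviving fraction after a single dose d as S = exp(-(alpha*d + beta*d^2)). The sketch below evaluates it for a conventional fractionation schedule, together with the biologically effective dose, using illustrative alpha and beta values.

      import numpy as np

      alpha, beta = 0.3, 0.03        # Gy^-1 and Gy^-2, illustrative values (alpha/beta = 10 Gy)
      d, n = 2.0, 30                 # 30 fractions of 2 Gy

      sf_per_fraction = np.exp(-(alpha * d + beta * d**2))
      total_sf = sf_per_fraction ** n                  # assumes full repair between fractions
      bed = n * d * (1 + d / (alpha / beta))           # biologically effective dose
      print(f"surviving fraction = {total_sf:.2e}, BED = {bed:.1f} Gy")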

  10. A Repeated Trajectory Class Model for Intensive Longitudinal Categorical Outcome

    PubMed Central

    Lin, Haiqun; Han, Ling; Peduzzi, Peter N.; Murphy, Terrence E.; Gill, Thomas M.; Allore, Heather G.

    2014-01-01

    This paper presents a novel repeated latent class model for a longitudinal response that is frequently measured as in our prospective study of older adults with monthly data on activities of daily living (ADL) for more than ten years. The proposed method is especially useful when the longitudinal response is measured much more frequently than other relevant covariates. The repeated trajectory classes represent distinct temporal patterns of the longitudinal response wherein an individual’s membership in the trajectory classes may renew or change over time. Within a trajectory class, the longitudinal response is modeled by a class-specific generalized linear mixed model. Effectively, an individual may remain in a trajectory class or switch to another as the class membership predictors are updated periodically over time. The identification of a common set of trajectory classes allows changes among the temporal patterns to be distinguished from local fluctuations in the response. An informative event such as death is jointly modeled by class-specific probability of the event through shared random effects. We do not impose the conditional independence assumption given the classes. The method is illustrated by analyzing the change over time in ADL trajectory class among 754 older adults with 70500 person-months of follow-up in the Precipitating Events Project. We also investigate the impact of jointly modeling the class-specific probability of the event on the parameter estimates in a simulation study. The primary contribution of our paper is the periodic updating of trajectory classes for a longitudinal categorical response without assuming conditional independence. PMID:24519416

  11. More Lessons from a Master Teacher! Frank Wachowiak.

    ERIC Educational Resources Information Center

    Morris, Jimmy

    1989-01-01

    Presents two printmaking lessons for children inspired by master art teacher, Frank Wachowiak. "Repeated Motifs and Designs" uses vegetables and found objects to make prints emphasizing repeat patterns. "Fish Under the Sea" uses white liquid glue to make line prints with strong linear compositions. (LS)

  12. Investigation of electrostatic behavior of a lactose carrier for dry powder inhalers.

    PubMed

    Chow, Keat Theng; Zhu, Kewu; Tan, Reginald B H; Heng, Paul W S

    2008-12-01

    This study aims to elucidate the electrostatic behavior of a model lactose carrier used in dry powder inhaler formulations by examining the effects of ambient relative humidity (RH), aerosolization air flow rate, repeated inhaler use, gelatin capsule and tapping on the specific charge (nC/g) of bulk and aerosolized lactose. Static and dynamic electrostatic charge measurements were performed using a Faraday cage connected to an electrometer. Experiments were conducted inside a walk-in environmental chamber at 25 degrees C and RHs of 20% to 80%. Aerosolization was achieved using air flow rates of 30, 45, 60 and 75 L/min. The initial charges of the bulk and capsulated lactose were an order of magnitude lower than the charges of tapped or aerosolized lactose. Dynamic charge increased linearly with aerosolization air flow rate and RH. Greater frictional forces at higher air flow rate induced higher electrostatic charges. Increased RH enhanced charge generation. Repeated inhaler use also significantly influenced the electrostatic charge. This study demonstrated the significant interacting influences of variables commonly encountered in DPI use, such as variation in the patient's inspiratory flow rate, ambient RH and repeated inhaler use, on the electrostatic behavior of a lactose DPI carrier.

  13. Modeling Diverse Pathways to Age Progressive Volcanism in Subduction Zones.

    NASA Astrophysics Data System (ADS)

    Kincaid, C. R.; Szwaja, S.; Sylvia, R. T.; Druken, K. A.

    2015-12-01

    One of the best, and most challenging clues to unraveling mantle circulation patterns in subduction zones comes in the form of age progressive volcanic and geochemical trends. Hard fought geological data from many subduction zones, like Tonga-Lau, the Cascades and Costa-Rica/Nicaragua, reveal striking temporal patterns used in defining mantle flow directions and rates. We summarize results from laboratory subduction models showing a range in circulation and thermal-chemical transport processes. These interaction styles are capable of producing such trends, often reflecting apparent instead of actual mantle velocities. Lab experiments use a glucose working fluid to represent Earth's upper mantle and kinematically driven plates to produce a range in slab sinking and related wedge transport patterns. Kinematic forcing assumes most of the super-adiabatic temperature gradient available to drive major downwellings is in the tabular slabs. Moreover, sinking styles for fully dynamic subduction depend on many complicating factors that are only poorly understood and which can vary widely even for repeated parameter combinations. Kinematic models have the benefit of precise, repeatable control of slab motions and wedge flow responses. Results generated with these techniques show the evolution of near-surface thermal-chemical-rheological heterogeneities leads to age progressive surface expressions in a variety of ways. One set of experiments shows that rollback and back-arc extension combine to produce distinct modes of linear, age progressive melt delivery to the surface through a) erosion of the rheological boundary layer beneath the overriding plate, and deformation and redistribution of both b) mantle residuum produced from decompression melting and c) formerly active, buoyant plumes. Additional experiments consider buoyant diapirs rising in a wedge under the influence of rollback, back-arc spreading and slab-gaps. Strongly deflected diapirs, experiencing variable rise rates, also commonly surface as linear, age progressive tracks. Applying these results to systems like the Cascades and Tonga-Lau suggest there are multiple ways to produce timing trends due both to linear flows and waves of heterogeneity obliquely impacting surface plates.

  14. Zero-determinant strategies in finitely repeated games.

    PubMed

    Ichinose, Genki; Masuda, Naoki

    2018-02-07

    Direct reciprocity is a mechanism for sustaining mutual cooperation in repeated social dilemma games, where a player would keep cooperation to avoid being retaliated by a co-player in the future. So-called zero-determinant (ZD) strategies enable a player to unilaterally set a linear relationship between the player's own payoff and the co-player's payoff regardless of the strategy of the co-player. In the present study, we analytically study zero-determinant strategies in finitely repeated (two-person) prisoner's dilemma games with a general payoff matrix. Our results are as follows. First, we present the forms of solutions that extend the known results for infinitely repeated games (with a discount factor w of unity) to the case of finitely repeated games (0 < w < 1). Second, for the three most prominent ZD strategies, the equalizers, extortioners, and generous strategies, we derive the threshold value of w above which the ZD strategies exist. Third, we show that the only strategies that enforce a linear relationship between the two players' payoffs are either the ZD strategies or unconditional strategies, where the latter independently cooperates with a fixed probability in each round of the game, proving a conjecture previously made for infinitely repeated games. Copyright © 2017 Elsevier Ltd. All rights reserved.
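
    A convenient way to explore payoff relations of this kind numerically is to compute long-run payoffs of two memory-one strategies from the stationary distribution of the Markov chain over the outcomes (CC, CD, DC, DD). The sketch below does this for the infinitely repeated, undiscounted case with the standard payoff values, which is a simpler setting than the finitely repeated, discounted games analysed in the paper.

      import numpy as np

      R, S, T, P = 3, 0, 5, 1                 # standard prisoner's dilemma payoffs
      payoff_X = np.array([R, S, T, P])       # X's payoff in states (CC, CD, DC, DD)
      payoff_Y = np.array([R, T, S, P])       # Y's payoff in the same states

      def long_run_payoffs(p, q):
          """p[s], q[s]: cooperation probabilities after outcome s = (CC, CD, DC, DD),
          each given from that player's own point of view (so CD and DC are swapped for Y)."""
          p = np.asarray(p, float)
          q = np.asarray(q, float)[[0, 2, 1, 3]]
          M = np.array([[p[s] * q[s], p[s] * (1 - q[s]),
                         (1 - p[s]) * q[s], (1 - p[s]) * (1 - q[s])] for s in range(4)])
          w, v = np.linalg.eig(M.T)                       # stationary distribution = left eigenvector
          pi = np.real(v[:, np.argmin(np.abs(w - 1))])
          pi = pi / pi.sum()
          return float(pi @ payoff_X), float(pi @ payoff_Y)

      # Example: X plays tit for tat, Y cooperates with probability 0.7 regardless of history
      print(long_run_payoffs([1, 0, 1, 0], [0.7, 0.7, 0.7, 0.7]))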

  15. Troposphere gradients from the ECMWF in VLBI analysis

    NASA Astrophysics Data System (ADS)

    Boehm, Johannes; Schuh, Harald

    2007-06-01

    Modeling path delays in the neutral atmosphere for the analysis of Very Long Baseline Interferometry (VLBI) observations has been improved significantly in recent years by the use of elevation-dependent mapping functions based on data from numerical weather models. In this paper, we present a fast way of extracting both hydrostatic and wet linear horizontal troposphere gradients from data of the European Centre for Medium-range Weather Forecasts (ECMWF) model, as it is realized at the Vienna University of Technology on a routine basis for all stations of the International GNSS (Global Navigation Satellite Systems) Service (IGS) and the International VLBI Service for Geodesy and Astrometry (IVS). This approach uses only information about the refractivity gradients along the site vertical, and no information from the line-of-sight. VLBI analysis of the CONT02 and CONT05 campaigns, as well as all IVS-R1 and IVS-R4 sessions in the first half of 2006, shows that fixing these a priori gradients improves the repeatability for 74% (40 out of 54) of the VLBI baseline lengths compared to fixing zero or constant a priori gradients, and improves the repeatability for the majority of baselines compared to estimating 24-h offsets for the gradients. Only when 6-h offsets are estimated do the baseline length repeatabilities improve significantly, no matter which a priori gradients are used.

  16. Inmate responses to prison-based drug treatment: a repeated measures analysis.

    PubMed

    Welsh, Wayne N

    2010-06-01

    Using a sample of 347 prison inmates and general linear modeling (GLM) repeated measures analyses, this paper examined during-treatment responses (e.g., changes in psychological and social functioning) to prison-based therapeutic community (TC) drug treatment. These effects have rarely been examined in previous studies, and never with a fully multivariate model accounting for within-subjects effects (changes over time), between-subjects effects (e.g., levels of risk and motivation), and within/between-subjects interactions (time × risk × motivation). The results provide evidence of positive inmate change in response to prison TC treatment, but the patterns of results varied depending upon: (a) specific indicators of psychological and social functioning, motivation, and treatment process; (b) the time periods examined (1, 6, and 12 months during treatment); and (c) baseline levels of risk and motivation. Significant interactions between time and type of inmate suggest important new directions for research, theory, and practice in offender-based substance abuse treatment. Copyright (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  17. The dual-effects model of social control revisited: relationship satisfaction as a moderator.

    PubMed

    Knoll, Nina; Burkert, Silke; Scholz, Urte; Roigas, Jan; Gralla, Oliver

    2012-05-01

    The dual-effects model of social control states that receiving social control leads to better health behavior, but also enhances distress in the control recipient. Associated findings, however, are inconsistent. In this study we investigated the role of relationship satisfaction as a moderator of associations of received spousal control with health behavior and affect. In a study with five waves of assessment spanning two weeks to one year following radical prostatectomy (RP), N=109 married or cohabiting prostate-cancer patients repeatedly reported on their pelvic-floor exercise (PFE) to control postsurgery urinary incontinence and affect as primary outcomes, on received PFE-specific spousal control, relationship satisfaction, and covariates. Findings from two-level hierarchical linear models with repeated assessments nested in individuals suggested significant interactions of received spousal control with relationship satisfaction predicting patients' concurrent PFE and positive affect. Patients who were happy with their relationships seemed to benefit from spousal control regarding regular PFE postsurgery while patients less satisfied with their relationships did not. In addition, the latter reported lower levels of positive affect when receiving much spousal control. Results indicate the utility of the inclusion of relationship satisfaction as a moderator of the dual-effects model of social control.

  18. Numerical Calculations of 3-D High-Lift Flows and Comparison with Experiment

    NASA Technical Reports Server (NTRS)

    Compton, William B, III

    2015-01-01

    Solutions were obtained with the Navier-Stokes CFD code TLNS3D to predict the flow about the NASA Trapezoidal Wing, a high-lift wing composed of three elements: the main-wing element, a deployed leading-edge slat, and a deployed trailing-edge flap. Turbulence was modeled by the Spalart-Allmaras one-equation turbulence model. One case with massive separation was repeated using Menter's two-equation SST (Shear Stress Transport) k-omega turbulence model in an attempt to improve the agreement with experiment. The investigation was conducted at a free stream Mach number of 0.2, and at angles of attack ranging from 10.004 degrees to 34.858 degrees. The Reynolds number based on the mean aerodynamic chord of the wing was 4.3 x 10^6. Compared to experiment, the numerical procedure predicted the surface pressures very well at angles of attack in the linear range of the lift. However, the computed maximum lift was 5% low. Drag was mainly underpredicted. The procedure correctly predicted several well-known trends and features of high-lift flows, such as off-body separation. The two turbulence models yielded significantly different solutions for the repeated case.

  19. Mapping of the DLQI scores to EQ-5D utility values using ordinal logistic regression.

    PubMed

    Ali, Faraz Mahmood; Kay, Richard; Finlay, Andrew Y; Piguet, Vincent; Kupfer, Joerg; Dalgard, Florence; Salek, M Sam

    2017-11-01

    The Dermatology Life Quality Index (DLQI) and the European Quality of Life-5 Dimension (EQ-5D) are separate measures that may be used to gather health-related quality of life (HRQoL) information from patients. The EQ-5D is a generic measure from which health utility estimates can be derived, whereas the DLQI is a specialty-specific measure to assess HRQoL. To reduce the burden of multiple measures being administered and to enable a more disease-specific calculation of health utility estimates, we explored an established mathematical technique known as ordinal logistic regression (OLR) to develop an appropriate model to map DLQI data to EQ-5D-based health utility estimates. Retrospective data from 4010 patients were randomly divided five times into two groups for the derivation and testing of the mapping model. Split-half cross-validation was utilized resulting in a total of ten ordinal logistic regression models for each of the five EQ-5D dimensions against age, sex, and all ten items of the DLQI. Using Monte Carlo simulation, predicted health utility estimates were derived and compared against those observed. This method was repeated for both OLR and a previously tested mapping methodology based on linear regression. The model was shown to be highly predictive and its repeated fitting demonstrated a stable model using OLR as well as linear regression. The mean differences between OLR-predicted health utility estimates and observed health utility estimates ranged from 0.0024 to 0.0239 across the ten modeling exercises, with an average overall difference of 0.0120 (a 1.6% underestimate, not of clinical importance). This modeling framework developed in this study will enable researchers to calculate EQ-5D health utility estimates from a specialty-specific study population, reducing patient and economic burden.
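
    A stripped-down version of the mapping step can be written with the OrderedModel class in statsmodels (assuming a reasonably recent release); everything below, including the synthetic DLQI items and the single EQ-5D dimension, is fabricated for illustration and is not the authors' fitted model.

      import numpy as np
      import pandas as pd
      from statsmodels.miscmodels.ordinal_model import OrderedModel

      rng = np.random.default_rng(7)
      n = 500
      df = pd.DataFrame({f"dlqi{i}": rng.integers(0, 4, n) for i in range(1, 11)})
      df["age"] = rng.integers(18, 80, n)
      df["sex"] = rng.integers(0, 2, n)
      # Fabricated ordinal EQ-5D dimension (1 = no problems, 2 = some, 3 = severe)
      latent = 0.15 * df[[f"dlqi{i}" for i in range(1, 11)]].sum(axis=1) + rng.normal(0, 1, n)
      df["mobility"] = pd.cut(latent, [-np.inf, 2, 4, np.inf], labels=[1, 2, 3])

      X = df[[f"dlqi{i}" for i in range(1, 11)] + ["age", "sex"]]
      res = OrderedModel(df["mobility"], X, distr="logit").fit(method="bfgs", disp=False)
      probs = np.asarray(res.predict(X))                # per-subject probabilities of each level
      # One Monte Carlo draw of a level per subject, the first step toward simulated utilities
      draws = np.array([rng.choice([1, 2, 3], p=row) for row in probs])
      print(probs.shape, draws[:10])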

  20. How to improve breeding value prediction for feed conversion ratio in the case of incomplete longitudinal body weights.

    PubMed

    Tran, V H Huynh; Gilbert, H; David, I

    2017-01-01

    With the development of automatic self-feeders, repeated measurements of feed intake are becoming easier in an increasing number of species. However, the corresponding BW are not always recorded, and these missing values complicate the longitudinal analysis of the feed conversion ratio (FCR). Our aim was to evaluate the impact of missing BW data on estimations of the genetic parameters of FCR and ways to improve the estimations. On the basis of the missing BW profile in French Large White pigs (male pigs weighed weekly, females and castrated males weighed monthly), we compared 2 different ways of predicting missing BW, 1 using a Gompertz model and 1 using a linear interpolation. For the first part of the study, we used 17,398 weekly records of BW and feed intake recorded over 16 consecutive weeks in 1,222 growing male pigs. We performed a simulation study on this data set to mimic missing BW values according to the pattern of weekly proportions of incomplete BW data in females and castrated males. The FCR was then computed for each week using observed data (obser_FCR), data with missing BW (miss_FCR), data with BW predicted using a Gompertz model (Gomp_FCR), and data with BW predicted by linear interpolation (interp_FCR). Heritability (h) was estimated, and the EBV was predicted for each repeated FCR using a random regression model. In the second part of the study, the full data set (males with their complete BW records, castrated males and females with missing BW) was analyzed using the same methods (miss_FCR, Gomp_FCR, and interp_FCR). Results of the simulation study showed that h were overestimated in the case of missing BW and that predicting BW using a linear interpolation provided a more accurate estimation of h and of EBV than a Gompertz model. Over 100 simulations, the correlation between obser_EBV and interp_EBV, Gomp_EBV, and miss_EBV was 0.93 ± 0.02, 0.91 ± 0.01, and 0.79 ± 0.04, respectively. The heritabilities obtained with the full data set were quite similar for miss_FCR, Gomp_FCR, and interp_FCR. In conclusion, when the proportion of missing BW is high, genetic parameters of FCR are not well estimated. In French Large White pigs, in the growing period extending from d 65 to 168, prediction of missing BW using a Gompertz growth model slightly improved the estimations, but the linear interpolation improved the estimation to a greater extent. This result is due to the linear rather than sigmoidal increase in BW over the study period.
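
    The linear interpolation of missing body weights that performed best here is a one-liner in numpy; the sketch below fills in hypothetical monthly-only weights and computes a weekly feed conversion ratio from them. All numbers are invented.

      import numpy as np

      weeks = np.arange(1, 17)                             # 16 weeks on test
      feed_intake = 14.0 + 0.4 * weeks                     # weekly feed intake (kg), made up
      bw_weeks = np.array([1, 5, 9, 13, 16])               # weeks with an actual weighing
      bw_obs = np.array([30.0, 45.0, 62.0, 80.0, 95.0])    # observed body weights (kg), made up

      bw_all = np.interp(weeks, bw_weeks, bw_obs)          # linear interpolation of missing BW
      gain = np.diff(bw_all)                               # weekly gain for weeks 2..16
      fcr = feed_intake[1:] / gain                         # feed conversion ratio = intake / gain
      print(np.round(fcr, 2))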

  1. Chronic Stressors and Adolescents' Externalizing Problems: Genetic Moderation by Dopamine Receptor D4. The TRAILS Study.

    PubMed

    Zandstra, Anna Roos E; Ormel, Johan; Hoekstra, Pieter J; Hartman, Catharina A

    2018-01-01

    The existing literature does not provide consistent evidence that carriers of the Dopamine D4 Receptor 7-repeat allele are more sensitive to adverse environmental influences, resulting in enhanced externalizing problems, compared to noncarriers. One explanation is that the adverse influences examined in prior studies were not severe, chronic, or distressing enough to reveal individual differences in sensitivity reflected by DRD4-7R. This study examined whether the 7-repeat allele moderated the association between chronic stressors capturing multiple stressful aspects of individuals' lives and externalizing problems in adolescence. We expected that chronic stressor levels would be associated with externalizing levels only in 7-repeat carriers. Using Linear Mixed Models, we analyzed data from 1621 Dutch adolescents (52.2% boys), obtained in three measurement waves (mean age approximately 11, 13.5, and 16 years) from the TRacking Adolescents' Individual Lives Survey (TRAILS) population-based birth cohort and the parallel clinic-referred cohort. Across informants, we found that higher levels of chronic stressors were related to higher externalizing levels in 7-repeat carriers but not in noncarriers, as hypothesized. Although previous studies on the 7-repeat allele as a moderator of environmental influences on adolescents' externalizing problems have not convincingly demonstrated individual differences in sensitivity to adverse environmental influences, our findings suggest that adolescent carriers of the Dopamine D4 Receptor 7-repeat allele are more sensitive to chronic, multi-context stressors than noncarriers.

  2. Organometallic macromolecules with piano stool coordination repeating units: chain configuration and stimulated solution behaviour.

    PubMed

    Cao, Kai; Ward, Jonathan; Amos, Ryan C; Jeong, Moon Gon; Kim, Kyoung Taek; Gauthier, Mario; Foucher, Daniel; Wang, Xiaosong

    2014-09-11

    Theoretical calculations illustrate that organometallic macromolecules with piano stool coordination repeating units (Fe-acyl complex) adopt a linear chain configuration with a P-Fe-C backbone surrounded by aromatic groups. The macromolecules show molecular-weight-dependent and temperature-stimulated solution behaviour in DMSO.

  3. Parametrically excited multidegree-of-freedom systems with repeated frequencies

    NASA Astrophysics Data System (ADS)

    Nayfeh, A. H.

    1983-05-01

    An analysis is presented of the linear response of multidegree-of-freedom systems with a repeated frequency of order three to a harmonic parametric excitation. The method of multiple scales is used to determine the modulation of the amplitudes and phases for two cases: fundamental resonance of the modes with the repeated frequency and combination resonance involving these modes and another mode. Conditions are then derived for determining the stability of the motion.

  4. Estimating the reliability of repeatedly measured endpoints based on linear mixed-effects models. A tutorial.

    PubMed

    Van der Elst, Wim; Molenberghs, Geert; Hilgers, Ralf-Dieter; Verbeke, Geert; Heussen, Nicole

    2016-11-01

    There are various settings in which researchers are interested in the assessment of the correlation between repeated measurements that are taken within the same subject (i.e., reliability). For example, the same rating scale may be used to assess the symptom severity of the same patients by multiple physicians, or the same outcome may be measured repeatedly over time in the same patients. Reliability can be estimated in various ways, for example, using the classical Pearson correlation or the intra-class correlation in clustered data. However, contemporary data often have a complex structure that goes well beyond the restrictive assumptions that are needed with the more conventional methods to estimate reliability. In the current paper, we propose a general and flexible modeling approach that allows for the derivation of reliability estimates, standard errors, and confidence intervals - appropriately taking hierarchies and covariates in the data into account. Our methodology is developed for continuous outcomes together with covariates of an arbitrary type. The methodology is illustrated in a case study, and a Web Appendix is provided which details the computations using the R package CorrMixed and the SAS software. Copyright © 2016 John Wiley & Sons, Ltd.
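
    In Python, a comparable reliability estimate can be read directly off the variance components of a random-intercept model: reliability = between-subject variance / (between-subject + residual variance). The sketch below uses statsmodels and simulated ratings; it is a stripped-down stand-in for the more general covariate-adjusted approach (and the R package CorrMixed) described above.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(8)
      n_subj, n_rep = 60, 4
      subject = np.repeat(np.arange(n_subj), n_rep)
      true_score = rng.normal(50, 8, n_subj)                       # between-subject variability (var 64)
      y = true_score[subject] + rng.normal(0, 4, n_subj * n_rep)   # within-subject error (var 16)
      df = pd.DataFrame({"y": y, "subject": subject})

      fit = smf.mixedlm("y ~ 1", df, groups=df["subject"]).fit()
      var_between = float(fit.cov_re.iloc[0, 0])                   # random-intercept variance
      var_resid = fit.scale                                        # residual variance
      print(f"estimated reliability = {var_between / (var_between + var_resid):.3f}")  # true value 0.8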

  5. Does Repeated Testing Impact Concordance Between Genital and Self-Reported Sexual Arousal in Women?

    PubMed

    Velten, Julia; Chivers, Meredith L; Brotto, Lori A

    2018-04-01

    Women show a substantial variability in their genital and subjective responses to sexual stimuli. The level of agreement between these two aspects of response is termed sexual concordance and has been increasingly investigated because of its implications for understanding models of sexual response and as a potential endpoint in clinical trials of treatments to improve women's sexual dysfunction. However, interpreting changes in sexual concordance may be problematic because, to date, it still is unclear how repeated testing itself influences sexual concordance in women. We are aware of only one study that evaluated temporal stability of concordance in women, and it found no evidence of stability. However, time stability would be necessary for arguing that concordance is a stable individual difference. The main goal of this study was to investigate the test-retest reliability of sexual concordance in a sample of 30 women with sexual difficulties. Using hierarchical linear modeling, we found that sexual concordance was not influenced by repeated testing 12 weeks later, but showed test-retest reliability suggesting temporal stability. Our findings support the hypothesis that sexual concordance is a relatively stable individual difference and that changes in sexual concordance after treatment or experimental conditions could, therefore, be attributed to effects of those conditions.

  6. Clinical, biological, and skin histopathologic effects of ionic macrocyclic and nonionic linear gadolinium chelates in a rat model of nephrogenic systemic fibrosis.

    PubMed

    Fretellier, Nathalie; Idée, Jean-Marc; Guerret, Sylviane; Hollenbeck, Claire; Hartmann, Daniel; González, Walter; Robic, Caroline; Port, Marc; Corot, Claire

    2011-02-01

    The purpose of this study was to compare the clinical, pathologic, and biochemical effects of repeated administrations of ionic macrocyclic or nonionic linear gadolinium chelates (GC) in rats with impaired renal function. Rats submitted to subtotal nephrectomy were allocated to single injections of 2.5 mmol/kg of gadodiamide (nonionic linear chelate), nonformulated gadodiamide (i.e., without the free ligand caldiamide), gadoterate (ionic macrocyclic chelate), or saline for 5 consecutive days. Blinded semi-quantitative histopathologic and immunohistochemical examinations of the skin were performed, as well as clinical, hematological, and biochemical follow-up. Rats were killed at day 11. Long-term (up to day 32) follow-up of rats was also performed in an auxiliary study. Epidermal lesions (ulcerations and scabs) were found in 4 of the 10 rats treated with nonformulated gadodiamide. Two rats survived the study period. Inflammatory signs were observed in this group. No clinical, hematological, or biochemical signs were observed in the saline and gadoterate- or gadodiamide-treated groups. Plasma fibroblast growth factor-23 levels were significantly higher in the gadodiamide group than in the gadoterate group (day 11). Decreased plasma transferrin-bound iron levels were measured in the nonformulated gadodiamide group. Histologic lesions were in the range: nonformulated gadodiamide (superficial epidermal lesions, inflammation, necrosis, and increased cellularity in papillary dermis) > gadodiamide (small superficial epidermal lesions and signs of degradation of collagen fibers in the dermis) > gadoterate (very few pathologic lesions, similar to control rats). Repeated administration of the nonionic linear GC gadodiamide to renally impaired rats is associated with more severe histologic lesions and higher FGF-23 plasma levels than the macrocyclic GC gadoterate.

  7. Formation of Linear Amplicons with Inverted Duplications in Leishmania Requires the MRE11 Nuclease

    PubMed Central

    Laffitte, Marie-Claude N.; Genois, Marie-Michelle; Mukherjee, Angana; Légaré, Danielle; Masson, Jean-Yves; Ouellette, Marc

    2014-01-01

    Extrachromosomal DNA amplification is frequent in the protozoan parasite Leishmania selected for drug resistance. The extrachromosomal amplified DNA is either circular or linear, and is formed at the level of direct or inverted homologous repeated sequences that abound in the Leishmania genome. The RAD51 recombinase plays an important role in circular amplicons formation, but the mechanism by which linear amplicons are formed is unknown. We hypothesized that the Leishmania infantum DNA repair protein MRE11 is required for linear amplicons following rearrangements at the level of inverted repeats. The purified LiMRE11 protein showed both DNA binding and exonuclease activities. Inactivation of the LiMRE11 gene led to parasites with enhanced sensitivity to DNA damaging agents. The MRE11−/− parasites had a reduced capacity to form linear amplicons after drug selection, and the reintroduction of an MRE11 allele led to parasites regaining their capacity to generate linear amplicons, but only when MRE11 had an active nuclease activity. These results highlight a novel MRE11-dependent pathway used by Leishmania to amplify portions of its genome to respond to a changing environment. PMID:25474106

  8. CGG allele size somatic mosaicism and methylation in FMR1 premutation alleles

    PubMed Central

    Pretto, Dalyir I.; Mendoza-Morales, Guadalupe; Lo, Joyce; Cao, Ru; Hadd, Andrew; Latham, Gary J.; Durbin-Johnson, Blythe; Hagerman, Randi; Tassone, Flora

    2014-01-01

    Background More than 200 CGG repeats in the 5′UTR of the FMR1 gene lead to epigenetic silencing and lack of the FMR1 protein, causing Fragile X Syndrome. Individuals who carry a premutation (PM) allele with 55–200 CGG repeats are typically unmethylated and can present with clinical features defined as FMR1-associated conditions. Methods Blood samples from 17 male PM carriers were assessed clinically and molecularly by Southern Blot, Western Blot, PCR and QRT-PCR. Blood and brain tissue from an additional 18 PM males were also similarly examined. Continuous outcomes were modeled using linear regression and binary outcomes were modeled using logistic regression. Results Methylated alleles were detected in different fractions of blood cells in all PM cases (n = 17). CGG repeat numbers correlated with percent of methylation and mRNA levels and, especially in the upper PM range, with a greater number of clinical involvements. Inter- and intra-tissue somatic instability and differences in percent methylation were observed between blood and fibroblasts (n = 4), and also between blood and different brain regions in three of the 18 premutation cases examined. CGG repeat lengths in lymphocytes remained unchanged over a period ranging from 2 to 6 years in the three cases for whom multiple samples were available. Conclusion In addition to CGG size instability, individuals with a PM expanded allele can exhibit methylation and display more clinical features, likely due to RNA toxicity and/or FMR1 silencing. The observed association of CGG repeat length and percent methylation with the severity of the clinical phenotypes underscores the potential value of methylation in affected PM carriers to further understand penetrance, inform diagnosis and expand treatment options. PMID:24591415

  9. Huntingtin gene repeat size variations affect risk of lifetime depression.

    PubMed

    Gardiner, Sarah L; van Belzen, Martine J; Boogaard, Merel W; van Roon-Mom, Willeke M C; Rozing, Maarten P; van Hemert, Albert M; Smit, Johannes H; Beekman, Aartjan T F; van Grootheest, Gerard; Schoevers, Robert A; Oude Voshaar, Richard C; Roos, Raymund A C; Comijs, Hannie C; Penninx, Brenda W J H; van der Mast, Roos C; Aziz, N Ahmad

    2017-12-11

    Huntington disease (HD) is a severe neuropsychiatric disorder caused by a cytosine-adenine-guanine (CAG) repeat expansion in the HTT gene. Although HD is frequently complicated by depression, it is still unknown to what extent common HTT CAG repeat size variations in the normal range could affect depression risk in the general population. Using binary logistic regression, we assessed the association between HTT CAG repeat size and depression risk in two well-characterized Dutch cohorts, the Netherlands Study of Depression and Anxiety and the Netherlands Study of Depression in Older Persons, including 2165 depressed and 1058 non-depressed persons. In both cohorts, separately as well as combined, there was a significant non-linear association between the risk of lifetime depression and HTT CAG repeat size in which both relatively short and relatively large alleles were associated with an increased risk of depression (β = -0.292 and β = 0.006 for the linear and the quadratic term, respectively; both P < 0.01 after adjustment for the effects of sex, age, and education level). The odds of lifetime depression were lowest in persons with an HTT CAG repeat size of 21 (odds ratio: 0.71, 95% confidence interval: 0.52 to 0.98) compared to the average odds in the total cohort. In conclusion, lifetime depression risk was higher with both relatively short and relatively large HTT CAG repeat sizes in the normal range. Our study provides important proof-of-principle that repeat polymorphisms can act as hitherto unappreciated but complex genetic modifiers of depression.
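
    The U-shaped association reported above corresponds to a logistic regression with linear and quadratic repeat-size terms; a minimal sketch with simulated repeat sizes and an invented risk curve is shown below.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(9)
      n = 3000
      cag = rng.integers(10, 36, n)                      # HTT CAG repeat sizes in the normal range
      logit = 2.0 - 0.29 * cag + 0.006 * cag**2          # U-shaped risk on the log-odds scale (invented)
      depressed = rng.random(n) < 1 / (1 + np.exp(-logit))
      df = pd.DataFrame({"cag": cag, "depressed": depressed.astype(int)})

      fit = smf.logit("depressed ~ cag + I(cag**2)", data=df).fit(disp=False)
      print(fit.params)                                  # linear and quadratic coefficients
      grid = pd.DataFrame({"cag": np.arange(10, 36)})
      print(fit.predict(grid).round(3))                  # predicted depression probability by repeat size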

  10. Origin-Dependent Inverted-Repeat Amplification: Tests of a Model for Inverted DNA Amplification.

    PubMed

    Brewer, Bonita J; Payen, Celia; Di Rienzi, Sara C; Higgins, Megan M; Ong, Giang; Dunham, Maitreya J; Raghuraman, M K

    2015-12-01

    DNA replication errors are a major driver of evolution, from single nucleotide polymorphisms to large-scale copy number variations (CNVs). Here we test a specific replication-based model to explain the generation of interstitial, inverted triplications. While no genetic information is lost, the novel inversion junctions and increased copy number of the included sequences create the potential for adaptive phenotypes. The model, Origin-Dependent Inverted-Repeat Amplification (ODIRA), proposes that a replication error at pre-existing short, interrupted, inverted repeats in genomic sequences generates an extrachromosomal, inverted dimeric, autonomously replicating intermediate; subsequent genomic integration of the dimer yields this class of CNV without loss of distal chromosomal sequences. We used a combination of in vitro and in vivo approaches to test the feasibility of the proposed replication error and its downstream consequences on chromosome structure in the yeast Saccharomyces cerevisiae. We show that the proposed replication error, the ligation of leading and lagging nascent strands to create "closed" forks, can occur in vitro at short, interrupted inverted repeats. The removal of molecules with two closed forks results in a hairpin-capped linear duplex that we show replicates in vivo to create an inverted, dimeric plasmid that subsequently integrates into the genome by homologous recombination, creating an inverted triplication. While other models have been proposed to explain inverted triplications and their derivatives, our model can also explain the generation of human, de novo, inverted amplicons that have a 2:1 mixture of sequences from both homologues of a single parent, a feature readily explained by a plasmid intermediate that arises from one homologue and integrates into the other homologue prior to meiosis. Our tests of key features of ODIRA lend support to this mechanism and suggest further avenues of enquiry to unravel the origins of interstitial, inverted CNVs pivotal in human health and evolution.

  11. Application of linear mixed-effects model with LASSO to identify metal components associated with cardiac autonomic responses among welders: a repeated measures study

    PubMed Central

    Zhang, Jinming; Cavallari, Jennifer M; Fang, Shona C; Weisskopf, Marc G; Lin, Xihong; Mittleman, Murray A; Christiani, David C

    2017-01-01

    Background Environmental and occupational exposure to metals is ubiquitous worldwide, and understanding the hazardous metal components in this complex mixture is essential for environmental and occupational regulations. Objective To identify hazardous components from metal mixtures that are associated with alterations in cardiac autonomic responses. Methods Urinary concentrations of 16 types of metals were examined and ‘acceleration capacity’ (AC) and ‘deceleration capacity’ (DC), indicators of cardiac autonomic effects, were quantified from ECG recordings among 54 welders. We fitted linear mixed-effects models with least absolute shrinkage and selection operator (LASSO) to identify metal components that are associated with AC and DC. The Bayesian Information Criterion was used as the criterion for model selection procedures. Results Mercury and chromium were selected for DC analysis, whereas mercury, chromium and manganese were selected for AC analysis through the LASSO approach. When we fitted the linear mixed-effects models with ‘selected’ metal components only, the effect of mercury remained significant. Every 1 µg/L increase in urinary mercury was associated with −0.58 ms (−1.03, –0.13) changes in DC and 0.67 ms (0.25, 1.10) changes in AC. Conclusion Our study suggests that exposure to several metals is associated with impaired cardiac autonomic functions. Our findings should be replicated in future studies with larger sample sizes. PMID:28663305
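
    As a rough two-stage illustration of the analysis described above, the sketch below screens metal components with a BIC-guided LASSO and then refits a linear mixed-effects model with a random intercept per welder. The published analysis embedded the LASSO penalty in the mixed model itself; file, column, and variable names here are hypothetical.

    ```python
    import pandas as pd
    from sklearn.linear_model import LassoLarsIC
    import statsmodels.formula.api as smf

    df = pd.read_csv("welder_metals.csv")       # repeated measures, one row per ECG session
    metals = ["mercury", "chromium", "manganese", "lead", "nickel"]  # subset for illustration

    # Stage 1: LASSO with the Bayesian Information Criterion as the selection criterion.
    lasso = LassoLarsIC(criterion="bic").fit(df[metals], df["DC"])
    selected = [m for m, c in zip(metals, lasso.coef_) if c != 0.0]
    print("selected metals:", selected)

    # Stage 2: linear mixed-effects refit with the selected metals only and a
    # random intercept to account for repeated measures within each welder.
    rhs = " + ".join(selected) if selected else "1"
    mixed = smf.mixedlm("DC ~ " + rhs, df, groups=df["welder_id"]).fit()
    print(mixed.summary())
    ```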

  12. Evaluation of Quantitative Performance of Sequential Immobilized Metal Affinity Chromatographic Enrichment for Phosphopeptides

    PubMed Central

    Sun, Zeyu; Hamilton, Karyn L.; Reardon, Kenneth F.

    2014-01-01

    We evaluated a sequential elution protocol from immobilized metal affinity chromatography (SIMAC) employing gallium-based immobilized metal affinity chromatography (IMAC) in conjunction with titanium-dioxide-based metal oxide affinity chromatography (MOAC). The quantitative performance of this SIMAC enrichment approach, assessed in terms of repeatability, dynamic range, and linearity, was evaluated using a mixture composed of tryptic peptides from caseins, bovine serum albumin, and phosphopeptide standards. While our data demonstrate the overall consistent performance of the SIMAC approach under various loading conditions, the results also revealed that the method had limited repeatability and linearity for most phosphopeptides tested, and different phosphopeptides were found to have different linear ranges. These data suggest that, unless additional strategies are used, SIMAC should be regarded as a semi-quantitative method when used in large-scale phosphoproteomics studies in complex backgrounds. PMID:24096195
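
    A minimal sketch of the quantitative criteria named above: repeatability as a coefficient of variation (CV) across replicate enrichments, and linearity as the R² of mean signal versus loading amount. The numbers are toy values for illustration only.

    ```python
    import numpy as np

    # Rows: replicate SIMAC enrichments; columns: loading amounts (arbitrary units).
    loads = np.array([0.1, 0.5, 1.0, 5.0, 10.0])
    signal = np.array([
        [0.9, 4.8, 10.2, 47.0, 80.0],
        [1.1, 5.2,  9.7, 49.5, 92.0],
        [1.0, 5.0, 10.5, 44.0, 86.0],
    ])

    # Repeatability: CV (%) per loading amount across replicates.
    cv = signal.std(axis=0, ddof=1) / signal.mean(axis=0) * 100.0
    print("CV (%) per load:", np.round(cv, 1))

    # Linearity: least-squares fit of mean signal vs. load and its R^2.
    mean_sig = signal.mean(axis=0)
    slope, intercept = np.polyfit(loads, mean_sig, 1)
    resid = mean_sig - (slope * loads + intercept)
    r2 = 1.0 - np.sum(resid**2) / np.sum((mean_sig - mean_sig.mean())**2)
    print(f"slope = {slope:.2f}, R^2 = {r2:.3f}")
    ```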

  13. Estimation of stature using anthropometry of feet and footprints in a Western Australian population.

    PubMed

    Hemy, Naomi; Flavel, Ambika; Ishak, Nur-Intaniah; Franklin, Daniel

    2013-07-01

    The aim of the study is to develop accurate stature estimation models for a contemporary Western Australian population from measurements of the feet and footprints. The sample comprises 200 adults (90 males, 110 females). A stature measurement, three linear measurements from each foot and bilateral footprints were collected from each subject. Seven linear measurements were then extracted from each print. Prior to data collection, a precision test was conducted to determine the repeatability of measurement acquisition. The primary data were then analysed using a range of parametric statistical tests. Results show that all foot and footprint measurements were significantly (P < 0.01-0.001) correlated with stature and estimation models were formulated with a prediction accuracy of ± 4.673 cm to ± 6.926 cm. Left foot length was the most accurate single variable in the simple linear regressions (males: ± 5.065 cm; females: ± 4.777 cm). This study provides viable alternatives for estimating stature in a Western Australian population that are equivalent to established standards developed from foot bones. Copyright © 2013 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
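
    The sketch below illustrates the kind of simple linear regression model described above (stature on left foot length), with the prediction accuracy expressed as the standard error of estimate in centimetres. The data file and column names are hypothetical.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical file: one row per subject with stature_cm, left_foot_length_cm, sex.
    df = pd.read_csv("wa_foot_study.csv")
    males = df[df["sex"] == "M"]

    fit = smf.ols("stature_cm ~ left_foot_length_cm", data=males).fit()
    see = fit.mse_resid ** 0.5            # standard error of estimate, in cm
    print(fit.params)
    print(f"prediction accuracy: +/- {see:.3f} cm")

    # Point estimate of stature for a new footprint measurement:
    new = pd.DataFrame({"left_foot_length_cm": [26.4]})
    print(fit.predict(new))
    ```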

  14. The Box-Cox power transformation on nursing sensitive indicators: Does it matter if structural effects are omitted during the estimation of the transformation parameter?

    PubMed Central

    2011-01-01

    Background Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Methods Simulation data from a 3 × 4 factorial treatments design, along with the Patient Falls and Patient Injury Falls from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Results Linear model ANOVA with Monte Carlo simulation and mixed models with correlated error terms with NDNQI examples showed no substantial differences on statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. Conclusions The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects. PMID:21854614
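
    The two estimation strategies contrasted above can be sketched as (a) profiling the Box-Cox log-likelihood with the structural factors included in the model versus (b) estimating the transformation parameter from the outcome alone and then running the ANOVA. File, factor, and column names are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy import stats

    df = pd.read_csv("ndnqi_falls.csv")      # positive outcome 'falls_rate', factors 'unit_type', 'quarter'
    y = df["falls_rate"].to_numpy()           # Box-Cox requires y > 0

    def bc(y, lam):
        return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

    def profile_loglik(lam, rhs):
        """Box-Cox profile log-likelihood of the model y(lam) ~ rhs."""
        rss = smf.ols("y_t ~ " + rhs, data=df.assign(y_t=bc(y, lam))).fit().ssr
        n = len(y)
        return -0.5 * n * np.log(rss / n) + (lam - 1.0) * np.log(y).sum()

    grid = np.linspace(-2.0, 2.0, 81)

    # (a) transformation parameter estimated with the structural factors included
    lam_with = grid[np.argmax([profile_loglik(g, "C(unit_type) * C(quarter)") for g in grid])]

    # (b) transformation parameter estimated from the outcome alone
    _, lam_without = stats.boxcox(y)

    print("lambda with factors:", lam_with, "  lambda without factors:", round(lam_without, 2))

    # Either lambda can then feed the downstream ANOVA on the transformed outcome.
    anova_fit = smf.ols("y_t ~ C(unit_type) * C(quarter)", data=df.assign(y_t=bc(y, lam_without))).fit()
    print(anova_fit.f_pvalue)
    ```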

  15. The Box-Cox power transformation on nursing sensitive indicators: does it matter if structural effects are omitted during the estimation of the transformation parameter?

    PubMed

    Hou, Qingjiang; Mahnken, Jonathan D; Gajewski, Byron J; Dunton, Nancy

    2011-08-19

    Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Simulation data from a 3 × 4 factorial treatments design, along with the Patient Falls and Patient Injury Falls from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Linear model ANOVA with Monte Carlo simulation and mixed models with correlated error terms with NDNQI examples showed no substantial differences on statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects.

  16. Joint modeling of longitudinal data and discrete-time survival outcome.

    PubMed

    Qiu, Feiyou; Stein, Catherine M; Elston, Robert C

    2016-08-01

    A predictive joint shared parameter model is proposed for discrete time-to-event and longitudinal data. A discrete survival model with frailty and a generalized linear mixed model for the longitudinal data are joined to predict the probability of events. This joint model focuses on predicting discrete time-to-event outcome, taking advantage of repeated measurements. We show that the probability of an event in a time window can be more precisely predicted by incorporating the longitudinal measurements. The model was investigated by comparison with a two-step model and a discrete-time survival model. Results from both a study on the occurrence of tuberculosis and simulated data show that the joint model is superior to the other models in discrimination ability, especially as the latent variables related to both survival times and the longitudinal measurements depart from 0. © The Author(s) 2013.
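
    For orientation, the discrete-time survival component on its own can be fit as a logistic regression on person-period data with the repeated measurement entering as a time-varying covariate. The sketch below shows only that building block, not the authors' shared-parameter joint model; all file and column names are hypothetical.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    subjects = pd.read_csv("subjects.csv")      # id, event_interval, event (1 = event, 0 = censored)
    longit = pd.read_csv("longitudinal.csv")    # id, interval, biomarker (repeated measurements)

    # Expand each subject into one row per interval at risk (person-period format).
    rows = []
    for _, s in subjects.iterrows():
        for t in range(1, int(s["event_interval"]) + 1):
            rows.append({"id": s["id"],
                         "interval": t,
                         "event": int(s["event"] == 1 and t == s["event_interval"])})
    pp = pd.DataFrame(rows).merge(longit, on=["id", "interval"], how="left")

    # Discrete-time hazard: interval dummies give the baseline hazard, and the
    # repeated biomarker is a time-varying covariate.
    fit = smf.logit("event ~ C(interval) + biomarker", data=pp).fit()
    print(fit.summary())
    ```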

  17. Quantification and parametrization of non-linearity effects by higher-order sensitivity terms in scattered light differential optical absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Puķīte, Jānis; Wagner, Thomas

    2016-05-01

    We address the application of differential optical absorption spectroscopy (DOAS) of scattered light observations in the presence of strong absorbers (in particular ozone), for which the absorption optical depth is a non-linear function of the trace gas concentration. This is the case because the Beer-Lambert law generally does not hold for scattered light measurements due to the many light paths contributing to the measurement. While in many cases a linear approximation can be made, non-linear effects cannot always be neglected in scenarios with strong absorption. This is especially the case for observation geometries for which the light contributing to the measurement crosses the atmosphere along spatially well-separated paths differing strongly in length and location, as in limb geometry. In these cases, full retrieval algorithms are often applied to address the non-linearities, requiring iterative forward modelling of absorption spectra involving time-consuming wavelength-by-wavelength radiative transfer modelling. In this study, we propose to describe the non-linear effects by additional sensitivity parameters that can be used, e.g., to build up a lookup table. Together with widely used box air mass factors (effective light paths) describing the linear response to the increase in the trace gas amount, the higher-order sensitivity parameters eliminate the need for repeating the radiative transfer modelling when modifying the absorption scenario even in the presence of a strong absorption background. While the higher-order absorption structures can be described as separate fit parameters in the spectral analysis (so-called DOAS fit), in practice their quantitative evaluation requires good measurement quality (typically better than that available from current measurements). Therefore, we introduce an iterative retrieval algorithm correcting for the higher-order absorption structures not yet considered in the DOAS fit as well as the absorption dependence on temperature and scattering processes.
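
    Schematically, the idea can be written as a second-order expansion of the measured differential slant optical depth around a reference profile, with the box air mass factors supplying the first-order term and the higher-order sensitivity parameters the quadratic term; the exact parametrization used in the paper may differ.

    ```latex
    % Schematic only: \Delta V_i is the change of the partial column in layer i.
    \tau(\lambda) \;\approx\;
      \sum_{i} A_{i}(\lambda)\,\Delta V_{i}
      \;+\; \tfrac{1}{2}\sum_{i,j} B_{ij}(\lambda)\,\Delta V_{i}\,\Delta V_{j},
    \qquad
    A_{i}(\lambda) = \frac{\partial \tau}{\partial V_{i}},
    \qquad
    B_{ij}(\lambda) = \frac{\partial^{2} \tau}{\partial V_{i}\,\partial V_{j}}
    ```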

  18. Digital Biomass Accumulation Using High-Throughput Plant Phenotype Data Analysis.

    PubMed

    Rahaman, Md Matiur; Ahsan, Md Asif; Gillani, Zeeshan; Chen, Ming

    2017-09-01

    Biomass is an important phenotypic trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive, and they require numerous individuals to be cultivated for repeated measurements. With the advent of image-based high-throughput plant phenotyping facilities, non-destructive biomass measuring methods have attempted to overcome this problem. Thus, the estimation of plant biomass of individual plants from their digital images is becoming more important. In this paper, we propose an approach to biomass estimation based on image-derived phenotypic traits. Several image-based biomass studies treat plant biomass as simply a linear function of the projected plant area in images. However, we modeled the plant volume as a function of plant area, plant compactness, and plant age to generalize the linear biomass model. The results support the proposed model, which explains most of the observed variance in image-derived biomass estimation. Moreover, only a small difference was observed between actual and estimated digital biomass, which indicates that our proposed approach can be used to estimate digital biomass accurately.
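
    A minimal sketch of the generalized model described above, comparing an area-only fit with one that also uses compactness and plant age; the file and column names are hypothetical, and a measured biomass column stands in for the target variable.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical traits table: projected area, compactness, age, measured biomass.
    df = pd.read_csv("phenotype_traits.csv")

    area_only = smf.ols("biomass_g ~ area_px", data=df).fit()
    extended = smf.ols("biomass_g ~ area_px * compactness + age_days", data=df).fit()

    print("area-only R^2:               ", round(area_only.rsquared, 3))
    print("area + compactness + age R^2:", round(extended.rsquared, 3))
    ```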

  19. Efficient strategies for leave-one-out cross validation for genomic best linear unbiased prediction.

    PubMed

    Cheng, Hao; Garrick, Dorian J; Fernando, Rohan L

    2017-01-01

    A random multiple-regression model that simultaneously fit all allele substitution effects for additive markers or haplotypes as uncorrelated random effects was proposed for Best Linear Unbiased Prediction, using whole-genome data. Leave-one-out cross validation can be used to quantify the predictive ability of a statistical model. Naive application of leave-one-out cross validation is computationally intensive because the training and validation analyses need to be repeated n times, once for each observation. Efficient leave-one-out cross validation strategies are presented here, requiring little more effort than a single analysis. The efficient strategy is 786 times faster than the naive application for a simulated dataset with 1,000 observations and 10,000 markers, and 99 times faster with 1,000 observations and 100 markers. These efficiencies relative to the naive approach using the same model will increase with the number of observations.
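
    The speed-up rests on a standard identity for ridge-type predictors: SNP-BLUP/GBLUP with a fixed variance ratio is equivalent to ridge regression, for which the leave-one-out residual follows from a single fit as e_i/(1 - h_ii). The sketch below demonstrates that shortcut on simulated data; it is not necessarily the exact algebra used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 1000, 100                        # observations, markers
    X = rng.choice([0.0, 1.0, 2.0], size=(n, p))
    y = X @ rng.normal(scale=0.1, size=p) + rng.normal(size=n)

    lam = 50.0                              # fixed shrinkage (ratio of variance components)
    A = X.T @ X + lam * np.eye(p)
    beta_hat = np.linalg.solve(A, X.T @ y)

    # Leverages h_ii of the ridge "hat" matrix, obtained from the single full fit.
    h = np.einsum("ij,jk,ik->i", X, np.linalg.inv(A), X)
    loo_resid = (y - X @ beta_hat) / (1.0 - h)   # exact LOO residuals, no refitting

    print("LOOCV mean squared error:", np.mean(loo_resid**2))
    ```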

  20. Cognitive outcome in adolescents and young adults after repeat courses of antenatal corticosteroids.

    PubMed

    Stålnacke, Johanna; Diaz Heijtz, Rochellys; Norberg, Hanna; Norman, Mikael; Smedler, Ann-Charlotte; Forssberg, Hans

    2013-08-01

    To investigate whether repeat courses of antenatal corticosteroids have long-term effects on cognitive and psychological functioning. In a prospective cohort study, 58 adolescents and young adults (36 males) who had been exposed to 2-9 weekly courses of betamethasone in utero were assessed with neuropsychological tests and behavior self-reports. Unexposed subjects (n = 44, 25 males) matched for age, sex, and gestational age at birth served as a comparison group. In addition, individuals exposed in utero to a single course (n = 25, 14 males) were included for dose-response analysis. Group differences were investigated using multilevel linear modeling. Mean scores obtained in 2 measures of attention and speed were significantly lower in subjects exposed to 2 or more antenatal corticosteroid courses (Symbol Search, P = .009; Digit Span Forward, P = .02), but these were not dose-dependent. Exposure to repeat courses of antenatal corticosteroids was not associated with general deficits in higher cognitive functions, self-reported attention, adaptability, or overall psychological function. Although this study indicates that repeat exposure to antenatal corticosteroids may have an impact on aspects of executive functioning, it does not provide support for the prevailing concern that such fetal exposure will have a major adverse impact on cognitive functions and psychological health later in life. Copyright © 2013 Mosby, Inc. All rights reserved.

  1. Assessing disease stress and modeling yield losses in alfalfa

    NASA Astrophysics Data System (ADS)

    Guan, Jie

    Alfalfa is the most important forage crop in the U.S. and worldwide. Fungal foliar diseases are believed to cause significant yield losses in alfalfa, yet little quantitative information exists regarding the amount of crop loss. Different fungicides and application frequencies were used as tools to generate a range of foliar disease intensities in Ames and Nashua, IA. Visual disease assessments (disease incidence, disease severity, and percentage defoliation) were obtained weekly for each alfalfa growth cycle (two to three growing cycles per season). Remote sensing assessments were performed using a hand-held, multispectral radiometer to measure the amount and quality of sunlight reflected from alfalfa canopies. Factors such as incident radiation, sun angle, sensor height, and leaf wetness were all found to significantly affect the percentage reflectance of sunlight reflected from alfalfa canopies. The precision of visual and remote sensing assessment methods was quantified. Precision was defined as the intra-rater repeatability and inter-rater reliability of assessment methods. F-tests, slopes, intercepts, and coefficients of determination (R²) were used to compare assessment methods for precision. Results showed that among the three visual disease assessment methods (disease incidence, disease severity, and percentage defoliation), percentage defoliation had the highest intra-rater repeatability and inter-rater reliability. The remote sensing assessment method had better precision than the percentage defoliation assessment method, based upon higher intra-rater repeatability and inter-rater reliability. Significant linear relationships between canopy reflectance (810 nm), percentage defoliation and yield were detected using linear regression, and percentage reflectance (810 nm) assessments were found to have a stronger relationship with yield than percentage defoliation assessments. There were also significant linear relationships between percentage defoliation, dry weight, percentage reflectance (810 nm), and green leaf area index (GLAI). Percentage reflectance (810 nm) assessments had a stronger relationship with dry weight and green leaf area index than percentage defoliation assessments. Our research demonstrates that percentage reflectance measurements can be used to nondestructively assess green leaf area index, which is a direct measure of plant health and an indirect measure of productivity, and that remote sensing is superior to visual assessment methods for assessing alfalfa stress and for modeling yield and GLAI in the alfalfa foliar disease pathosystem.

  2. Feasibility and acceptability of cell phone diaries to measure HIV risk behavior among female sex workers.

    PubMed

    Roth, Alexis M; Hensel, Devon J; Fortenberry, J Dennis; Garfein, Richard S; Gunn, Jayleen K L; Wiehe, Sarah E

    2014-12-01

    Individual, social, and structural factors affecting HIV risk behaviors among female sex workers (FSWs) are difficult to assess using retrospective survey methods. To test the feasibility and acceptability of cell phone diaries to collect information about sexual events, we recruited 26 FSWs in Indianapolis, Indiana (US). Over 4 weeks, FSWs completed twice daily digital diaries about their mood, drug use, sexual interactions, and daily activities. Feasibility was assessed using repeated-measures general linear modeling, and descriptive statistics were used to examine event-level contextual information and acceptability. Of 1,420 diaries expected, 90.3% were completed by participants and compliance was stable over time (p > .05 for linear trend). Sexual behavior was captured in 22% of diaries and participant satisfaction with diary data collection was high. These data provide insight into event-level factors impacting HIV risk among FSWs. We discuss implications for models of sexual behavior and individually tailored interventions to prevent HIV in this high-risk group.

  3. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
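
    A static analogue makes the contrast concrete: for K(p)u = f, the overall finite-difference method re-solves the perturbed system, while the semianalytical method differentiates the governing equation and uses finite differences only for the matrix derivative dK/dp. The toy two-degree-of-freedom system below is illustrative; the thesis itself treats transient response with modal reduction.

    ```python
    import numpy as np

    def stiffness(p):
        """Toy two-spring stiffness matrix depending on design variable p."""
        return np.array([[2.0 * p, -p],
                         [-p, p + 1.0]])

    f = np.array([1.0, 0.0])
    p0, dp = 3.0, 1e-6

    K0 = stiffness(p0)
    u0 = np.linalg.solve(K0, f)

    # Overall finite difference: repeat the full analysis for the perturbed design.
    du_overall = (np.linalg.solve(stiffness(p0 + dp), f) - u0) / dp

    # Semianalytical: differentiating K u = f gives K du/dp = df/dp - dK/dp u,
    # with dK/dp approximated by finite differences (df/dp = 0 here).
    dK_dp = (stiffness(p0 + dp) - K0) / dp
    du_semi = np.linalg.solve(K0, -dK_dp @ u0)

    print("overall finite difference:", du_overall)
    print("semianalytical           :", du_semi)
    ```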

  4. Evaluation of Linear, Inviscid, Viscous, and Reduced-Order Modeling Aeroelastic Solutions of the AGARD 445.6 Wing Using Root Locus Analysis

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Perry, Boyd III; Chwalowski, Pawel

    2014-01-01

    Reduced-order modeling (ROM) methods are applied to the CFD-based aeroelastic analysis of the AGARD 445.6 wing in order to gain insight regarding well-known discrepancies between the aeroelastic analyses and the experimental results. The results presented include aeroelastic solutions using the inviscid CAP-TSD code and the FUN3D code (Euler and Navier-Stokes). Full CFD aeroelastic solutions and ROM aeroelastic solutions, computed at several Mach numbers, are presented in the form of root locus plots in order to better reveal the aeroelastic root migrations with increasing dynamic pressure. Important conclusions are drawn from these results including the ability of the linear CAP-TSD code to accurately predict the entire experimental flutter boundary (repeat of analyses performed in the 1980's), that the Euler solutions at supersonic conditions indicate that the third mode is always unstable, and that the FUN3D Navier-Stokes solutions stabilize the unstable third mode seen in the Euler solutions.

  5. Analysis of FMR1 gene expression in female premutation carriers using robust segmented linear regression models

    PubMed Central

    García-Alegría, Eva; Ibáñez, Berta; Mínguez, Mónica; Poch, Marisa; Valiente, Alberto; Sanz-Parra, Arantza; Martinez-Bouzas, Cristina; Beristain, Elena; Tejada, Maria-Isabel

    2007-01-01

    Fragile X syndrome is caused by the absence or reduction of the fragile X mental retardation protein (FMRP) because FMR1 gene expression is reduced. Alleles with repeat sizes of 55–200 are classified as premutations, and it has been demonstrated that FMR1 expression is elevated in the premutation range. However, the majority of the studies reported were performed in males. We studied FMR1 expression in 100 female fragile X family members from the northern region of Spain using quantitative (fluorescence) real-time polymerase chain reaction. Of these 100 women, 19 had normal alleles, 19 were full mutation carriers, and 62 were premutation carriers. After confirming differences between the three groups of females, and increased levels of the FMR1 transcript among premutation carriers, we found that the relationship between mRNA levels and repeat size is nonlinear. These results were obtained using a novel methodology that, based on the size of the CGG repeats, allows us to identify the most probable threshold at which the relationship between CGG repeat number and mRNA levels changes. Using this approach, a significant positive correlation between CGG repeats and total mRNA levels was found in the premutation range below 100 CGG, but this correlation diminishes from 100 onward. However, when correcting for the X inactivation ratio, mRNA levels increase as the number of CGG repeats increases, and this increase is highly significant over 100 CGG. We suggest that due to skewed X inactivation, mRNA levels tend to normalize in females when the number of CGG repeats increases. PMID:17449730
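
    In spirit, the threshold can be located with a segmented (broken-stick) regression whose breakpoint is chosen to minimize the residual sum of squares, as sketched below; the paper's robust estimation details and the X-inactivation correction are not reproduced, and the file and column names are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("fmr1_females.csv")    # columns: cgg (repeat number), mrna (relative level)

    def fit_with_breakpoint(bp):
        d = df.assign(hinge=np.clip(df["cgg"] - bp, 0.0, None))
        return smf.ols("mrna ~ cgg + hinge", data=d).fit()

    candidates = np.arange(60.0, 160.0, 1.0)
    best_bp = candidates[int(np.argmin([fit_with_breakpoint(bp).ssr for bp in candidates]))]
    best = fit_with_breakpoint(best_bp)

    print("estimated breakpoint (CGG repeats):", best_bp)
    print(best.params[["cgg", "hinge"]])    # slope below the breakpoint, change in slope above it
    ```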

  6. DNA is structured as a linear "jigsaw puzzle" in the genomes of Arabidopsis, rice, and budding yeast.

    PubMed

    Liu, Yun-Hua; Zhang, Meiping; Wu, Chengcang; Huang, James J; Zhang, Hong-Bin

    2014-01-01

    Knowledge of how a genome is structured and organized from its constituent elements is crucial to understanding its biology and evolution. Here, we report the genome structuring and organization pattern as revealed by systems analysis of the sequences of three model species, Arabidopsis, rice and yeast, at the whole-genome and chromosome levels. We found that all fundamental function elements (FFE) constituting the genomes, including genes (GEN), DNA transposable elements (DTE), retrotransposable elements (RTE), simple sequence repeats (SSR), and (or) low complexity repeats (LCR), are structured in a nonrandom and correlative manner, thus leading to a hypothesis that the DNA of the species is structured as a linear "jigsaw puzzle". Furthermore, we showed that different FFE differ in their importance in the formation and evolution of the DNA jigsaw puzzle structure between species. DTE and RTE play more important roles than GEN, LCR, and SSR in Arabidopsis, whereas GEN and RTE play more important roles than LCR, SSR, and DTE in rice. The genes having multiple recognized functions play more important roles than those having single functions. These results provide useful knowledge necessary for better understanding genome biology and evolution of the species and for effective molecular breeding of rice.

  7. Segmented and "equivalent" representation of the cable equation.

    PubMed

    Andrietti, F; Bernardini, G

    1984-11-01

    The linear cable theory has been applied to a modular structure consisting of n repeating units each composed of two subunits with different values of resistance and capacitance. For n going to infinity, i.e., for infinite cables, we have derived analytically the Laplace transform of the solution by making use of a difference method and we have inverted it by means of a numerical procedure. The results have been compared with those obtained by the direct application of the cable equation to a simplified nonmodular model with "equivalent" electrical parameters. The implication of our work in the analysis of the time and space course of the potential of real fibers has been discussed. In particular, we have shown that the simplified ("equivalent") model is a very good representation of the segmented model for the nodal regions of myelinated fibers in a steady situation and in every condition for muscle fibers. An approximate solution for the steady potential of myelinated fibers has been derived for both nodal and internodal regions. The applications of our work to other cases dealing with repeating structures, such as earthworm giant fibers, have been discussed and our results have been compared with other attempts to solve similar problems.
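
    For reference, the uniform-parameter linear cable equation underlying both representations is shown below; the segmented model of the paper alternates two sets of membrane parameters along the fiber instead of using a single r_m and c_m.

    ```latex
    % V(x,t): membrane potential; r_m, c_m: membrane resistance and capacitance
    % per unit length; r_i: axial resistance per unit length.
    \lambda^{2}\,\frac{\partial^{2} V}{\partial x^{2}}
      \;=\; \tau_{m}\,\frac{\partial V}{\partial t} \;+\; V,
    \qquad
    \lambda = \sqrt{\frac{r_{m}}{r_{i}}},
    \qquad
    \tau_{m} = r_{m}\,c_{m}
    ```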

  8. A repeated measures model for analysis of continuous outcomes in sequential parallel comparison design studies.

    PubMed

    Doros, Gheorghe; Pencina, Michael; Rybin, Denis; Meisner, Allison; Fava, Maurizio

    2013-07-20

    Previous authors have proposed the sequential parallel comparison design (SPCD) to address the issue of high placebo response rate in clinical trials. The original use of SPCD focused on binary outcomes, but recent use has since been extended to continuous outcomes that arise more naturally in many fields, including psychiatry. Analytic methods proposed to date for analysis of SPCD trial continuous data included methods based on seemingly unrelated regression and ordinary least squares. Here, we propose a repeated measures linear model that uses all outcome data collected in the trial and accounts for data that are missing at random. An appropriate contrast formulated after the model has been fit can be used to test the primary hypothesis of no difference in treatment effects between study arms. Our extensive simulations show that when compared with the other methods, our approach preserves the type I error even for small sample sizes and offers adequate power and the smallest mean squared error under a wide variety of assumptions. We recommend consideration of our approach for analysis of data coming from SPCD trials. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Why humans might help strangers

    PubMed Central

    Raihani, Nichola J.; Bshary, Redouan

    2015-01-01

    Humans regularly help strangers, even when interactions are apparently unobserved and unlikely to be repeated. Such situations have been simulated in the laboratory using anonymous one-shot games (e.g., prisoner’s dilemma) where the payoff matrices used make helping biologically altruistic. As in real life, participants often cooperate in the lab in these one-shot games with non-relatives, despite the fact that helping is under negative selection under these circumstances. Two broad explanations for such behavior prevail. The “big mistake” or “mismatch” theorists argue that behavior is constrained by psychological mechanisms that evolved predominantly in the context of repeated interactions with known individuals. In contrast, the cultural group selection theorists posit that humans have been selected to cooperate in anonymous one-shot interactions due to strong between-group competition, which creates interdependence among in-group members. We present these two hypotheses before discussing alternative routes by which humans could increase their direct fitness by cooperating with strangers under natural conditions. In doing so, we explain why the standard lab games do not capture real life in various important respects. First, asymmetries in the cost of perceptual errors regarding the context of the interaction (one-shot vs. repeated; anonymous vs. public) might have selected for strategies that minimize the chance of making costly behavioral errors. Second, helping strangers might be a successful strategy for identifying other cooperative individuals in the population, where partner choice can turn strangers into interaction partners. Third, in contrast to the assumptions of the prisoner’s dilemma model, it is possible that benefits of cooperation follow a non-linear function of investment. Non-linear benefits result in negative frequency dependence even in one-shot games. Finally, in many real-world situations individuals are able to parcel investments such that a one-shot interaction is turned into a repeated game of many decisions. PMID:25750619

  10. Rate-loss analysis of an efficient quantum repeater architecture

    NASA Astrophysics Data System (ADS)

    Guha, Saikat; Krovi, Hari; Fuchs, Christopher A.; Dutton, Zachary; Slater, Joshua A.; Simon, Christoph; Tittel, Wolfgang

    2015-08-01

    We analyze an entanglement-based quantum key distribution (QKD) architecture that uses a linear chain of quantum repeaters employing photon-pair sources, spectral-multiplexing, linear-optic Bell-state measurements, multimode quantum memories, and classical-only error correction. Assuming perfect sources, we find an exact expression for the secret-key rate, and an analytical description of how errors propagate through the repeater chain, as a function of various loss-and-noise parameters of the devices. We show via an explicit analytical calculation, which separately addresses the effects of the principal nonidealities, that this scheme achieves a secret-key rate that surpasses the Takeoka-Guha-Wilde bound—a recently found fundamental limit to the rate-vs-loss scaling achievable by any QKD protocol over a direct optical link—thereby providing one of the first rigorous proofs of the efficacy of a repeater protocol. We explicitly calculate the end-to-end shared noisy quantum state generated by the repeater chain, which could be useful for analyzing the performance of other non-QKD quantum protocols that require establishing long-distance entanglement. We evaluate that shared state's fidelity and the achievable entanglement-distillation rate, as a function of the number of repeater nodes, total range, and various loss-and-noise parameters of the system. We extend our theoretical analysis to encompass sources with nonzero two-pair-emission probability, using an efficient exact numerical evaluation of the quantum state propagation and measurements. We expect our results to spur formal rate-loss analysis of other repeater protocols and also to provide useful abstractions to seed analyses of quantum networks of complex topologies.

  11. Suicide Attempts in a Longitudinal Sample of Adolescents Followed Through Adulthood: Evidence of Escalation

    PubMed Central

    Goldston, David B.; Daniel, Stephanie S.; Erkanli, Alaattin; Heilbron, Nicole; Doyle, Otima; Weller, Bridget; Sapyta, Jeffrey

    2015-01-01

    Objectives This study was designed to examine escalation in repeat suicide attempts from adolescence through adulthood, as predicted by sensitization models (and reflected in increasing intent and lethality with repeat attempts, decreasing amount of time between attempts, and decreasing stress to trigger attempts). Method In a prospective study of 180 adolescents followed through adulthood after a psychiatric hospitalization, suicide attempts and antecedent life events were repeatedly assessed (M = 12.6 assessments, SD = 5.1) over an average of 13 years, 6 months (SD = 4 years, 5 months). Multivariate logistic, multiple linear, and negative binomial regression models were used to examine patterns over time. Results After age 17-18, the majority of suicide attempts were repeat attempts (i.e., made by individuals with prior suicidal behavior). Intent increased both with increasing age and with number of prior attempts. Medical lethality increased as a function of age but not recurrent attempts. The time between successive suicide attempts decreased as a function of number of attempts. The amount of precipitating life stress was not related to attempts. Conclusions Adolescents and young adults show evidence of escalation of recurrent suicidal behavior, with increasing suicidal intent and decreasing time between successive attempts. However, evidence that sensitization processes account for this escalation was inconclusive. Effective prevention programs that reduce the likelihood of individuals attempting suicide for the first time (and entering this cycle of escalation), and relapse prevention interventions that interrupt the cycle of escalating suicidal behavior among individuals who already have made attempts are critically needed. PMID:25622200

  12. Origin-Dependent Inverted-Repeat Amplification: Tests of a Model for Inverted DNA Amplification

    PubMed Central

    Brewer, Bonita J.; Payen, Celia; Di Rienzi, Sara C.; Higgins, Megan M.; Ong, Giang; Dunham, Maitreya J.; Raghuraman, M. K.

    2015-01-01

    DNA replication errors are a major driver of evolution—from single nucleotide polymorphisms to large-scale copy number variations (CNVs). Here we test a specific replication-based model to explain the generation of interstitial, inverted triplications. While no genetic information is lost, the novel inversion junctions and increased copy number of the included sequences create the potential for adaptive phenotypes. The model—Origin-Dependent Inverted-Repeat Amplification (ODIRA)—proposes that a replication error at pre-existing short, interrupted, inverted repeats in genomic sequences generates an extrachromosomal, inverted dimeric, autonomously replicating intermediate; subsequent genomic integration of the dimer yields this class of CNV without loss of distal chromosomal sequences. We used a combination of in vitro and in vivo approaches to test the feasibility of the proposed replication error and its downstream consequences on chromosome structure in the yeast Saccharomyces cerevisiae. We show that the proposed replication error—the ligation of leading and lagging nascent strands to create “closed” forks—can occur in vitro at short, interrupted inverted repeats. The removal of molecules with two closed forks results in a hairpin-capped linear duplex that we show replicates in vivo to create an inverted, dimeric plasmid that subsequently integrates into the genome by homologous recombination, creating an inverted triplication. While other models have been proposed to explain inverted triplications and their derivatives, our model can also explain the generation of human, de novo, inverted amplicons that have a 2:1 mixture of sequences from both homologues of a single parent—a feature readily explained by a plasmid intermediate that arises from one homologue and integrates into the other homologue prior to meiosis. Our tests of key features of ODIRA lend support to this mechanism and suggest further avenues of enquiry to unravel the origins of interstitial, inverted CNVs pivotal in human health and evolution. PMID:26700858

  13. The human brain processes repeated auditory feature conjunctions of low sequential probability.

    PubMed

    Ruusuvirta, Timo; Huotilainen, Minna

    2004-01-23

    The human brain is known to preattentively trace repeated sounds as holistic entities. It is not clear, however, whether the same holds true if these sounds are rare among other repeated sounds. Adult humans passively listened to a repeated tone with frequent (standard) and rare (deviant) conjunctions of its three features. Six equiprobable variants per conjunction type were assigned from a space built from these features so that the standard variants (P=0.15 each) were not inseparably traceable by means of their linear alignment in this space. Differential scalp-recorded event-related potentials to deviants indicate that the standard variants were traced as repeated wholes despite their preperceptual distinctiveness and resulting rarity among one another.

  14. A closed-loop artificial pancreas using a proportional integral derivative with double phase lead controller based on a new nonlinear model of glucose metabolism.

    PubMed

    Abbes, Ilham Ben; Richard, Pierre-Yves; Lefebvre, Marie-Anne; Guilhem, Isabelle; Poirier, Jean-Yves

    2013-05-01

    Most closed-loop insulin delivery systems rely on model-based controllers to control the blood glucose (BG) level. Simple models of glucose metabolism, which allow easy design of the control law, are limited in their parametric identification from raw data. New control models, and controllers derived from them, are needed. A proportional integral derivative with double phase lead controller was proposed. Its design was based on a linearization of a new nonlinear control model of the glucose-insulin system in type 1 diabetes mellitus (T1DM) patients validated with the University of Virginia/Padova T1DM metabolic simulator. A 36 h scenario, including six unannounced meals, was tested in nine virtual adults. A database from a previous trial was used to compare the performance of our controller with the results reported in that trial. The scenario was repeated 25 times for each adult in order to take continuous glucose monitoring noise into account. The primary outcome was the time BG levels were in target (70-180 mg/dl). Blood glucose values were in the target range for 77% of the time and below 50 mg/dl and above 250 mg/dl for 0.8% and 0.3% of the time, respectively. The low blood glucose index and high blood glucose index were 1.65 and 3.33, respectively. The linear controller presented, based on the linearization of a new easily identifiable nonlinear model, achieves good glucose control with low exposure to hypoglycemia and hyperglycemia. © 2013 Diabetes Technology Society.
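
    For orientation only, a bare-bones discrete PID loop is sketched below. The published controller additionally cascades a double phase-lead compensation and is tuned on a linearization of the authors' glucose-insulin model; none of that is reproduced here, and the gains, limits, and 5-minute sampling period are illustrative.

    ```python
    class PID:
        """Discrete PID with output clamping (insulin delivery cannot be negative)."""

        def __init__(self, kp, ki, kd, dt, u_min=0.0, u_max=None):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.u_min, self.u_max = u_min, u_max
            self.integral = 0.0
            self.prev_error = None

        def step(self, glucose, target):
            error = glucose - target                     # positive when BG is above target
            self.integral += error * self.dt
            deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
            self.prev_error = error
            u = self.kp * error + self.ki * self.integral + self.kd * deriv
            if self.u_max is not None:
                u = min(u, self.u_max)
            return max(u, self.u_min)

    # Every 5 minutes, compute an insulin command (arbitrary units) from CGM readings.
    pid = PID(kp=0.05, ki=0.0005, kd=0.5, dt=5.0, u_max=5.0)
    for bg in [180.0, 175.0, 168.0, 160.0]:
        print(round(pid.step(bg, 120.0), 3))
    ```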

  15. Effect of Anisotropy on the Resilient Behaviour of a Granular Material in Low Traffic Pavement

    PubMed Central

    Jing, Peng; Nowamooz, Hossein; Chazallon, Cyrille

    2017-01-01

    Granular materials are often used in pavement structures. The influence of anisotropy on the mechanical behaviour of granular materials is very important. The coupled effects of water content and fine content usually lead to more complex anisotropic behaviour. With a repeated load triaxial test (RLTT), it is possible to measure the anisotropic deformation behaviour of granular materials. This article initially presents an experimental study of the resilient repeated load response of a compacted clayey natural sand with three fine contents and different water contents. Based on anisotropic behaviour, the non-linear resilient model (Boyce model) is improved by the radial anisotropy coefficient γ3 instead of the axial anisotropy coefficient γ1. The results from both approaches (γ1 and γ3) are compared with the measured volumetric and deviatoric responses. These results confirm the capacity of the improved model to capture the general trend of the experiments. Finally, finite element calculations are performed with CAST3M in order to validate the improvement of the modified Boyce model (from γ1 to γ3). The modelling results indicate that the modified Boyce model with γ3 is more widely applicable across different water contents and fine contents for this granular material. In addition, based on the results, the coupled effects of water content and fine content on the deflection of the structures can also be observed. PMID:29207504

  16. Effect of Anisotropy on the Resilient Behaviour of a Granular Material in Low Traffic Pavement.

    PubMed

    Jing, Peng; Nowamooz, Hossein; Chazallon, Cyrille

    2017-12-03

    Granular materials are often used in pavement structures. The influence of anisotropy on the mechanical behaviour of granular materials is very important. The coupled effects of water content and fine content usually lead to more complex anisotropic behaviour. With a repeated load triaxial test (RLTT), it is possible to measure the anisotropic deformation behaviour of granular materials. This article initially presents an experimental study of the resilient repeated load response of a compacted clayey natural sand with three fine contents and different water contents. Based on anisotropic behaviour, the non-linear resilient model (Boyce model) is improved by the radial anisotropy coefficient γ₃ instead of the axial anisotropy coefficient γ₁. The results from both approaches (γ₁ and γ₃) are compared with the measured volumetric and deviatoric responses. These results confirm the capacity of the improved model to capture the general trend of the experiments. Finally, finite element calculations are performed with CAST3M in order to validate the improvement of the modified Boyce model (from γ₁ to γ₃). The modelling results indicate that the modified Boyce model with γ₃ is more widely applicable across different water contents and fine contents for this granular material. In addition, based on the results, the coupled effects of water content and fine content on the deflection of the structures can also be observed.

  17. Sources of variation for indoor nitrogen dioxide in rural residences of Ethiopia

    PubMed Central

    2009-01-01

    Background Unprocessed biomass fuel is the primary source of indoor air pollution (IAP) in developing countries. The use of biomass fuel has been linked with acute respiratory infections. This study assesses sources of variation associated with the level of indoor nitrogen dioxide (NO2). Materials and methods This study examines household factors affecting the level of indoor pollution by measuring NO2. Repeated measurements of NO2 were made using a passive diffusive sampler. A Saltzman colorimetric method using a spectrometer calibrated at 540 nm was employed to analyze the mass of NO2 on the collection filter, which was then entered into a mass transfer equation to calculate the NO2 level over the 24-hour sampling duration. A structured questionnaire was used to collect data on fuel use characteristics. Data entry and cleaning were done in EPI INFO version 6.04, while data were analyzed using SPSS version 15.0. Analysis of variance, multiple linear regression and linear mixed models were used to isolate the determining factors contributing to the variation of NO2 concentration. Results A total of 17,215 air samples were fully analyzed during the study period. Wood and crop were the principal sources of household energy. Biomass fuel characteristics were strongly related to indoor NO2 concentration in one-way analysis of variance. There was variation in repeated measurements of indoor NO2 over time. In a linear mixed model regression analysis, highland setting, wet season, cooking, use of fire events at least twice a day, frequency of cooked food items, and interaction between ecology and season were predictors of indoor NO2 concentration. The volume of the housing unit and the presence of a kitchen had little influence on the level of NO2 concentration. Conclusion Agro-ecology, season, purpose of fire events, frequency of fire activities, frequency of cooking and physical conditions of housing are predictors of NO2 concentration. Improved kitchen conditions and ventilation are highly recommended. PMID:19922645

  18. An artificial neural network improves prediction of observed survival in patients with laryngeal squamous carcinoma.

    PubMed

    Jones, Andrew S; Taktak, Azzam G F; Helliwell, Timothy R; Fenton, John E; Birchall, Martin A; Husband, David J; Fisher, Anthony C

    2006-06-01

    The accepted method of modelling and predicting failure/survival, Cox's proportional hazards model, is theoretically inferior to neural-network-derived models for analysing highly complex systems with large datasets. We performed a blinded comparison of a neural network versus Cox's model in predicting survival, utilising data from 873 treated patients with laryngeal cancer. These were divided randomly and equally into a training set and a study set, and Cox's and neural network models were applied in turn. Data were then divided into seven sets of binary covariates and the analysis repeated. Overall survival was not significantly different on the Kaplan-Meier plot or with either test model. Although the network produced qualitatively similar results to Cox's model, it was significantly more sensitive to differences in survival curves for age and N stage. We propose that neural networks are capable of prediction in systems involving complex interactions between variables and non-linearity.

  19. Using the NASTRAN Thermal Analyzer to simulate a flight scientific instrument package

    NASA Technical Reports Server (NTRS)

    Lee, H.-P.; Jackson, C. E., Jr.

    1974-01-01

    The NASTRAN Thermal Analyzer has proven to be a unique and useful tool for thermal analyses involving large and complex structures where small, thermally induced deformations are critical. Among its major advantages are direct grid point-to-grid point compatibility with large structural models; plots of the model that may be generated for both conduction and boundary elements; versatility of applying transient thermal loads especially to repeat orbital cycles; on-line printer plotting of temperatures and rate of temperature changes as a function of time; and direct matrix input to solve linear differential equations on-line. These features provide a flexibility far beyond that available in most finite-difference thermal analysis computer programs.

  20. The contribution of benzene to smoking-induced leukemia.

    PubMed

    Korte, J E; Hertz-Picciotto, I; Schulz, M R; Ball, L M; Duell, E J

    2000-04-01

    Cigarette smoking is associated with an increased risk of leukemia; benzene, an established leukemogen, is present in cigarette smoke. By combining epidemiologic data on the health effects of smoking with risk assessment techniques for low-dose extrapolation, we assessed the proportion of smoking-induced total leukemia and acute myeloid leukemia (AML) attributable to the benzene in cigarette smoke. We fit both linear and quadratic models to data from two benzene-exposed occupational cohorts to estimate the leukemogenic potency of benzene. Using multiple-decrement life tables, we calculated lifetime risks of total leukemia and AML deaths for never, light, and heavy smokers. We repeated these calculations, removing the effect of benzene in cigarettes based on the estimated potencies. From these life tables we determined smoking-attributable risks and benzene-attributable risks. The ratio of the latter to the former constitutes the proportion of smoking-induced cases attributable to benzene. Based on linear potency models, the benzene in cigarette smoke contributed from 8 to 48% of smoking-induced total leukemia deaths [95% upper confidence limit (UCL), 20-66%], and from 12 to 58% of smoking-induced AML deaths (95% UCL, 19-121%). The inclusion of a quadratic term yielded results that were comparable; however, potency models with only quadratic terms resulted in much lower attributable fractions--all < 1%. Thus, benzene is estimated to be responsible for approximately one-tenth to one-half of smoking-induced total leukemia mortality and up to three-fifths of smoking-related AML mortality. In contrast to theoretical arguments that linear models substantially overestimate low-dose risk, linear extrapolations from empirical data over a dose range of 10- to 100-fold resulted in plausible predictions.

  1. Population pharmacokinetics of caffeine in healthy male adults using mixed-effects models.

    PubMed

    Seng, K-Y; Fun, C-Y; Law, Y-L; Lim, W-M; Fan, W; Lim, C-L

    2009-02-01

    Caffeine has been shown to maintain or improve the performance of individuals, but its pharmacokinetic profile for Asians has not been well characterized. In this study, a population pharmacokinetic model for describing the pharmacokinetics of caffeine in Singapore males was developed. The data were also analysed using non-compartmental models. Data gathered from 59 male volunteers, who each ingested a single caffeine capsule in two clinical trials (3 or 5 mg/kg), were analysed via non-linear mixed-effects modelling. The participants' covariates, including age, body weight, and regularity of caffeinated-beverage consumption or smoking, were analysed in a stepwise fashion to identify their potential influence on caffeine pharmacokinetics. The final pharmacostatistical model was then subjected to stochastic simulation to predict the plasma concentrations of caffeine after oral (204, 340 and 476 mg) dosing regimens (repeated dosing every 6, 8 or 12 h) over a hypothetical 3-day period. The data were best described by a one-compartmental model with first-order absorption and first-order elimination. Smoking status was an influential covariate for clearance: clearance (mL/min) = 110*SMOKE + 114, where SMOKE was 0 and 1 for the non-smoker and the smoker respectively. Interoccasion variability was smaller compared to interindividual variability in clearance, volume and absorption rate (27% vs. 33%, 10% vs. 15% and 23% vs. 51% respectively). The extrapolated elimination half-lives of caffeine in the non-smokers and the smokers were 4.3 +/- 1.5 and 3.0 +/- 0.7 h respectively. Dosing simulations indicated that dosing regimens of 340 mg (repeated every 8 h) and 476 mg (repeated every 6 h) should achieve population-averaged caffeine concentrations within the reported beneficial range (4.5-9 microg/mL) in the non-smokers and the smokers respectively over 72 h. The population pharmacokinetic model satisfactorily described the disposition and variability of caffeine in the data. Mixed-effects modelling showed that the dose of caffeine depended on cigarette smoking status.
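
    The final structural model can be sketched as a one-compartment model with first-order absorption and elimination, simulated for a repeated oral regimen by superposition. The clearance-smoking relationship is taken from the abstract; the volume of distribution, absorption rate constant, and bioavailability below are placeholders rather than the paper's estimates.

    ```python
    import numpy as np

    def conc_single_dose(t_h, dose_mg, cl_l_h, v_l, ka_h, f=1.0):
        """Plasma concentration (mg/L) after one oral dose given at t = 0."""
        k = cl_l_h / v_l
        return (f * dose_mg * ka_h) / (v_l * (ka_h - k)) * (np.exp(-k * t_h) - np.exp(-ka_h * t_h))

    smoker = 1
    cl = (110.0 * smoker + 114.0) * 60.0 / 1000.0   # clearance, mL/min -> L/h (from the abstract)
    v, ka = 58.0, 3.0                                # placeholder volume (L) and absorption rate (1/h)

    # 476 mg every 6 h over 72 h; total concentration by superposition of past doses.
    dose, tau = 476.0, 6.0
    times = np.arange(0.0, 72.0, 0.25)
    conc = sum(np.where(times >= td, conc_single_dose(times - td, dose, cl, v, ka), 0.0)
               for td in np.arange(0.0, 72.0, tau))

    # 1 mg/L equals 1 microgram/mL, so this can be compared with the 4.5-9 microgram/mL window.
    print("mean concentration over 48-72 h:", round(conc[times >= 48.0].mean(), 2), "mg/L")
    ```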

  2. Modeling equine race surface vertical mechanical behaviors in a musculoskeletal modeling environment.

    PubMed

    Symons, Jennifer E; Fyhrie, David P; Hawkins, David A; Upadhyaya, Shrinivasa K; Stover, Susan M

    2015-02-26

    Race surfaces have been associated with the incidence of racehorse musculoskeletal injury, the leading cause of racehorse attrition. Optimal race surface mechanical behaviors that minimize injury risk are unknown. Computational models are an economical method to determine optimal mechanical behaviors. Previously developed equine musculoskeletal models utilized ground reaction floor models designed to simulate a stiff, smooth floor appropriate for a human gait laboratory. Our objective was to develop a computational race surface model (two force-displacement functions, one linear and one nonlinear) that reproduced experimental race surface mechanical behaviors for incorporation in equine musculoskeletal models. Soil impact tests were simulated in a musculoskeletal modeling environment and compared to experimental force and displacement data collected during initial and repeat impacts at two racetracks with differing race surfaces: (i) dirt and (ii) synthetic. Best-fit model coefficients (7 total) were compared between surface types and initial and repeat impacts using a mixed model ANCOVA. Model simulation results closely matched empirical force, displacement and velocity data (mean R² = 0.930-0.997). Many model coefficients were statistically different between surface types and impacts. Principal component analysis of model coefficients showed systematic differences based on surface type and impact. In the future, the race surface model may be used in conjunction with the previously developed equine musculoskeletal models to understand the effects of race surface mechanical behaviors on limb dynamics, and determine race surface mechanical behaviors that reduce the incidence of racehorse musculoskeletal injury through modulation of limb dynamics. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...
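
    A sketch of the linearity decision described in the excerpt, assuming the 2 percent criterion is read as the maximum deviation of the calibration points from a best-fit straight line expressed as a percentage of full scale; the concentration and response values are made-up examples.

      import numpy as np

      # Span-gas concentrations and analyzer responses, both in percent of full scale
      # (example values, not regulatory data).
      conc = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
      resp = np.array([0.0, 14.6, 29.5, 44.9, 60.3, 75.6, 90.4])

      full_scale = 100.0
      linear_fit = np.polyval(np.polyfit(resp, conc, 1), resp)
      max_dev = np.max(np.abs(conc - linear_fit)) / full_scale * 100.0   # % of full scale

      if max_dev <= 2.0:
          print(f"max deviation {max_dev:.2f}% of full scale: a linear calibration may be used")
      else:
          poly = np.polyfit(resp, conc, 4)       # otherwise retain the polynomial calibration curve
          print("retain polynomial calibration, coefficients:", np.round(poly, 5))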

  4. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...

  5. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...

  6. 40 CFR 89.323 - NDIR analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear a linear calibration may be used. Include zero...

  7. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
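
    A toy stop-and-wait ARQ loop with CRC-32 error detection, intended only to illustrate the detect-and-retransmit principle surveyed above; it does not correspond to any particular scheme from the survey.

      import random
      import zlib

      def send_with_arq(payload: bytes, error_rate: float = 0.3, max_tries: int = 10) -> int:
          # Append a CRC-32 to the payload, corrupt the frame at random to mimic the
          # channel, and retransmit until the receiver's CRC check passes.
          frame = payload + zlib.crc32(payload).to_bytes(4, "big")
          for attempt in range(1, max_tries + 1):
              received = bytearray(frame)
              if random.random() < error_rate:                # channel flips one bit
                  i = random.randrange(len(received))
                  received[i] ^= 1 << random.randrange(8)
              data, crc = bytes(received[:-4]), bytes(received[-4:])
              if zlib.crc32(data).to_bytes(4, "big") == crc:
                  return attempt                              # receiver ACKs; done
              # CRC mismatch: receiver requests retransmission, sender repeats the frame
          raise RuntimeError("frame not delivered within max_tries")

      random.seed(1)
      print("delivered after", send_with_arq(b"telemetry block"), "transmission(s)")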

  8. Heralded high-efficiency quantum repeater with atomic ensembles assisted by faithful single-photon transmission

    NASA Astrophysics Data System (ADS)

    Li, Tao; Deng, Fu-Guo

    2015-10-01

    Quantum repeater is one of the important building blocks for long distance quantum communication network. The previous quantum repeaters based on atomic ensembles and linear optical elements can only be performed with a maximal success probability of 1/2 during the entanglement creation and entanglement swapping procedures. Meanwhile, the polarization noise during the entanglement distribution process is harmful to the entangled channel created. Here we introduce a general interface between a polarized photon and an atomic ensemble trapped in a single-sided optical cavity, and with which we propose a high-efficiency quantum repeater protocol in which the robust entanglement distribution is accomplished by the stable spatial-temporal entanglement and it can in principle create the deterministic entanglement between neighboring atomic ensembles in a heralded way as a result of cavity quantum electrodynamics. Meanwhile, the simplified parity-check gate makes the entanglement swapping be completed with unity efficiency, other than 1/2 with linear optics. We detail the performance of our protocol with current experimental parameters and show its robustness to the imperfections, i.e., detuning and coupling variation, involved in the reflection process. These good features make it a useful building block in long distance quantum communication.
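
    A small calculation illustrating why the 1/2 swapping success probability matters: with probabilistic linear-optics swapping, the end-to-end success probability decays exponentially in the number of repeater segments, whereas a deterministic parity-check gate does not (losses and entanglement-creation failures are ignored here for simplicity).

      def swap_chain_success(p_swap: float, segments: int) -> float:
          # End-to-end success probability after (segments - 1) entanglement-swapping
          # steps; creation failures and memory loss are ignored in this comparison.
          return p_swap ** (segments - 1)

      for n in (2, 4, 8, 16):
          print(f"{n:2d} segments: linear optics {swap_chain_success(0.5, n):.4f}, "
                f"deterministic gate {swap_chain_success(1.0, n):.4f}")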

  9. Heralded high-efficiency quantum repeater with atomic ensembles assisted by faithful single-photon transmission.

    PubMed

    Li, Tao; Deng, Fu-Guo

    2015-10-27

    Quantum repeater is one of the important building blocks for long distance quantum communication network. The previous quantum repeaters based on atomic ensembles and linear optical elements can only be performed with a maximal success probability of 1/2 during the entanglement creation and entanglement swapping procedures. Meanwhile, the polarization noise during the entanglement distribution process is harmful to the entangled channel created. Here we introduce a general interface between a polarized photon and an atomic ensemble trapped in a single-sided optical cavity, and with which we propose a high-efficiency quantum repeater protocol in which the robust entanglement distribution is accomplished by the stable spatial-temporal entanglement and it can in principle create the deterministic entanglement between neighboring atomic ensembles in a heralded way as a result of cavity quantum electrodynamics. Meanwhile, the simplified parity-check gate makes the entanglement swapping be completed with unity efficiency, other than 1/2 with linear optics. We detail the performance of our protocol with current experimental parameters and show its robustness to the imperfections, i.e., detuning and coupling variation, involved in the reflection process. These good features make it a useful building block in long distance quantum communication.

  10. Heralded high-efficiency quantum repeater with atomic ensembles assisted by faithful single-photon transmission

    PubMed Central

    Li, Tao; Deng, Fu-Guo

    2015-01-01

    Quantum repeater is one of the important building blocks for long distance quantum communication network. The previous quantum repeaters based on atomic ensembles and linear optical elements can only be performed with a maximal success probability of 1/2 during the entanglement creation and entanglement swapping procedures. Meanwhile, the polarization noise during the entanglement distribution process is harmful to the entangled channel created. Here we introduce a general interface between a polarized photon and an atomic ensemble trapped in a single-sided optical cavity, and with which we propose a high-efficiency quantum repeater protocol in which the robust entanglement distribution is accomplished by the stable spatial-temporal entanglement and it can in principle create the deterministic entanglement between neighboring atomic ensembles in a heralded way as a result of cavity quantum electrodynamics. Meanwhile, the simplified parity-check gate makes the entanglement swapping be completed with unity efficiency, other than 1/2 with linear optics. We detail the performance of our protocol with current experimental parameters and show its robustness to the imperfections, i.e., detuning and coupling variation, involved in the reflection process. These good features make it a useful building block in long distance quantum communication. PMID:26502993

  11. Repeated aerosol-vapor JP-8 jet fuel exposure affects neurobehavior and neurotransmitter levels in a rat model.

    PubMed

    Baldwin, Carol M; Figueredo, Aurelio J; Wright, Lynda S; Wong, Simon S; Witten, Mark L

    2007-07-01

    Four groups of Fischer Brown Norway hybrid rats were exposed for 5, 10, 15, or 20 d to aerosolized-vapor jet propulsion fuel 8 (JP-8) compared to freely moving (5 and 10-d exposures) or sham-confined controls (15 and 20-d exposures). Behavioral testing utilized the U.S. Environmental Protection Agency Functional Observational Battery. Exploratory ethological factor analysis identified three salient factors (central nervous system [CNS] excitability, autonomic 1, and autonomic 2) for use in profiling JP-8 exposure in future studies. The factors were used as dependent variables in general linear modeling. Exposed animals were found to engage in more rearing and hyperaroused behavior compared to controls, replicating prior JP-8 exposure findings. Exposed animals also showed increasing but rapidly decelerating stool output (autonomic 1), and a significant increasing linear trend for urine output (autonomic 2). No significant trends were noted for either of the control groups for the autonomic factors. Rats from each of the groups for each of the time frames were randomly selected for tissue assay from seven brain regions for neurotransmitter levels. Hippocampal DOPAC was significantly elevated after 4-wk JP-8 exposure compared to both control groups, suggesting increased dopamine release and metabolism. Findings indicate that behavioral changes do not appear to manifest until wk 3 and 4 of exposure, suggesting the need for longitudinal studies to determine if these behaviors occur due to cumulative exposure, or due to behavioral sensitization related to repeated exposure to aerosolized-vapor JP-8.

  12. Fully automated screening of veterinary drugs in milk by turbulent flow chromatography and tandem mass spectrometry

    PubMed Central

    Stolker, Alida A. M.; Peters, Ruud J. B.; Zuiderent, Richard; DiBussolo, Joseph M.

    2010-01-01

    There is an increasing interest in screening methods for quick and sensitive analysis of various classes of veterinary drugs with limited sample pre-treatment. Turbulent flow chromatography in combination with tandem mass spectrometry has been applied for the first time as an efficient screening method in routine analysis of milk samples. Eight veterinary drugs, belonging to seven different classes, were selected for this study. After developing and optimising the method, parameters such as linearity, repeatability, matrix effects and carry-over were studied. The screening method was then tested in the routine analysis of 12 raw milk samples. Even without internal standards, the linearity of the method was found to be good in the concentration range of 50 to 500 µg/L. Regarding repeatability, RSDs below 12% were obtained for all analytes, with only a few exceptions. The limits of detection were between 0.1 and 5.2 µg/L, far below the maximum residue levels for milk set by the EU regulations. Although matrix effects (ion suppression or enhancement) were observed for all the analytes, the method proved useful for screening purposes because of its sensitivity, linearity and repeatability. Furthermore, when performing the routine analysis of the raw milk samples, no false positive or negative results were obtained. PMID:20379812
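
    A compact sketch of the validation figures reported above (linearity, repeatability as RSD, and limit of detection), computed for one hypothetical analyte; the calibration data, replicate data, and the 3-sigma LOD convention are assumptions for illustration.

      import numpy as np

      # Hypothetical calibration points (ug/L vs. peak area), six replicate injections
      # at 200 ug/L, and a blank standard deviation; all values are illustrative.
      conc = np.array([50, 100, 200, 300, 400, 500], dtype=float)
      area = np.array([1020, 2110, 4050, 6230, 8150, 10240], dtype=float)
      replicates = np.array([4030, 4210, 3980, 4120, 4060, 4190], dtype=float)
      blank_sd = 35.0

      slope, intercept = np.polyfit(conc, area, 1)
      pred = slope * conc + intercept
      r2 = 1.0 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
      rsd = replicates.std(ddof=1) / replicates.mean() * 100.0   # repeatability, %
      lod = 3.0 * blank_sd / slope                               # one common 3-sigma LOD convention

      print(f"linearity R^2 = {r2:.4f}, repeatability RSD = {rsd:.1f}%, LOD = {lod:.1f} ug/L")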

  13. Effect of mechanical behaviour of the brachial artery on blood pressure measurement during both cuff inflation and cuff deflation.

    PubMed

    Zheng, Dingchang; Pan, Fan; Murray, Alan

    2013-10-01

    The aim of this study was to investigate the effect of different mechanical behaviour of the brachial artery on blood pressure (BP) measurements during cuff inflation and deflation. BP measurements were taken from each of 40 participants, with three repeat sessions under three randomized cuff deflation/inflation conditions. Cuff pressure was linearly deflated and inflated at a standard rate of 2-3 mmHg/s and also linearly inflated at a fast rate of 5-6 mmHg/s. Manual auscultatory systolic and diastolic BPs, and pulse pressure (SBP, DBP, PP) were measured. Automated BPs were determined from digitally recorded cuff pressures by fitting a polynomial model to the oscillometric pulse amplitudes. The BPs from cuff deflation and inflation were then compared. Repeatable measurements between sessions and between the sequential order of inflation/deflation conditions (all P > 0.1) indicated stability of arterial mechanical behaviour with repeat measurements. Comparing BPs obtained by standard inflation with those from standard deflation, manual SBP was 2.6 mmHg lower (P < 0.01), manual DBP was 1.5 mmHg higher (P < 0.01), manual PP was 4.2 mmHg lower (P < 0.001), automated DBP was 6.7 mmHg higher (P < 0.001) and automatic PP was 7.5 mmHg lower (P < 0.001). There was no statistically significant difference for any automated BPs between fast and standard cuff inflation. The statistically significant BP differences between inflation and deflation suggest different arterial mechanical behaviour between arterial opening and closing during BP measurement. We have shown that the mechanical behaviour of the brachial artery during BP measurement differs between cuff deflation and cuff inflation.
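
    A rough sketch of how automated BPs can be derived by fitting a polynomial envelope to oscillometric pulse amplitudes; the example amplitudes and the 0.55/0.85 characteristic ratios are common illustrative choices, not the algorithm used in the study.

      import numpy as np

      # Cuff pressure (mmHg) at each detected pulse and its oscillometric amplitude
      # (arbitrary units); values are illustrative, not data from the study.
      cuff = np.array([160, 150, 140, 130, 120, 110, 100, 90, 80, 70, 60], dtype=float)
      amp = np.array([0.2, 0.5, 0.9, 1.5, 2.2, 2.6, 2.4, 1.9, 1.3, 0.8, 0.4])

      coeff = np.polyfit(cuff, amp, 4)                 # polynomial envelope model
      grid = np.linspace(cuff.min(), cuff.max(), 1001)
      env = np.polyval(coeff, grid)

      map_est = grid[np.argmax(env)]                   # MAP at the envelope maximum
      a_max = env.max()
      # Characteristic-ratio method; the 0.55/0.85 ratios are assumed for illustration only.
      sbp = grid[grid > map_est][np.argmin(np.abs(env[grid > map_est] - 0.55 * a_max))]
      dbp = grid[grid < map_est][np.argmin(np.abs(env[grid < map_est] - 0.85 * a_max))]
      print(f"MAP {map_est:.0f}, SBP {sbp:.0f}, DBP {dbp:.0f} mmHg (estimates)")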

  14. Influence assessment in censored mixed-effects models using the multivariate Student’s-t distribution

    PubMed Central

    Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.

    2015-01-01

    In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subjected to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially the presence of outliers and thick-tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student’s-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student’s-t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871

  15. Development of the Complex General Linear Model in the Fourier Domain: Application to fMRI Multiple Input-Output Evoked Responses for Single Subjects

    PubMed Central

    Rio, Daniel E.; Rawlings, Robert R.; Woltz, Lawrence A.; Gilman, Jodi; Hommer, Daniel W.

    2013-01-01

    A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function. PMID:23840281
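
    A single-voxel sketch of a nonparametric transfer-function (frequency-domain HRF) estimate obtained from cross- and auto-spectra; the stimulus series, noise level, and Gaussian stand-in response are simulated, and this is not the multivariate complex general linear model developed in the paper.

      import numpy as np
      from scipy import signal

      rng = np.random.default_rng(0)
      tr, n = 2.0, 256                                  # repetition time (s), number of scans
      stim = (rng.random(n) < 0.15).astype(float)       # stimulus input time series
      hrf = signal.windows.gaussian(15, std=2.5)        # stand-in hemodynamic response
      bold = np.convolve(stim, hrf, mode="full")[:n] + 0.3 * rng.standard_normal(n)

      f, s_xx = signal.welch(stim, fs=1.0 / tr, nperseg=64)
      _, s_xy = signal.csd(stim, bold, fs=1.0 / tr, nperseg=64)
      h_hat = s_xy[1:] / s_xx[1:]                       # nonparametric transfer-function estimate
      print(f"gain at {f[1]:.4f} Hz: {abs(h_hat[0]):.3f}")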

  16. Development of the complex general linear model in the Fourier domain: application to fMRI multiple input-output evoked responses for single subjects.

    PubMed

    Rio, Daniel E; Rawlings, Robert R; Woltz, Lawrence A; Gilman, Jodi; Hommer, Daniel W

    2013-01-01

    A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function.

  17. Application of Linear Mixed-Effects Models in Human Neuroscience Research: A Comparison with Pearson Correlation in Two Auditory Electrophysiology Studies.

    PubMed

    Koerner, Tess K; Zhang, Yang

    2017-02-27

    Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining strengths between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity to apply mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
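
    A small simulated example contrasting a Pearson correlation computed across pooled repeated measures with a linear mixed-effects model that includes a random intercept per subject and listening condition as a fixed factor; the data-generating values are arbitrary.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf
      from scipy.stats import pearsonr

      rng = np.random.default_rng(3)
      subjects, conditions = 20, 4
      baseline = rng.normal(0.0, 2.0, subjects)               # between-subject baseline differences
      rows = []
      for s in range(subjects):
          for c in range(conditions):
              neural = rng.normal(1.0 * c, 1.0) + baseline[s]
              behav = 0.5 * neural + baseline[s] + rng.normal(0.0, 1.0)
              rows.append({"subject": s, "condition": c, "neural": neural, "behavior": behav})
      df = pd.DataFrame(rows)

      # Pearson treats all 80 rows as independent observations, ignoring repeated measures.
      r, p = pearsonr(df["neural"], df["behavior"])
      print(f"Pearson r = {r:.2f} (p = {p:.3g})")

      # LME with a random intercept per subject and listening condition as a fixed factor.
      lme = smf.mixedlm("behavior ~ neural + C(condition)", df, groups=df["subject"]).fit()
      print(lme.params.round(3))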

  18. A test of a linear model of glaucomatous structure-function loss reveals sources of variability in retinal nerve fiber and visual field measurements.

    PubMed

    Hood, Donald C; Anderson, Susan C; Wall, Michael; Raza, Ali S; Kardon, Randy H

    2009-09-01

    Retinal nerve fiber (RNFL) thickness and visual field loss data from patients with glaucoma were analyzed in the context of a model, to better understand individual variation in structure versus function. Optical coherence tomography (OCT) RNFL thickness and standard automated perimetry (SAP) visual field loss were measured in the arcuate regions of one eye of 140 patients with glaucoma and 82 normal control subjects. An estimate of within-individual (measurement) error was obtained by repeat measures made on different days within a short period in 34 patients and 22 control subjects. A linear model, previously shown to describe the general characteristics of the structure-function data, was extended to predict the variability in the data. For normal control subjects, between-individual error (individual differences) accounted for 87% and 71% of the total variance in OCT and SAP measures, respectively. SAP within-individual error increased and then decreased with increased SAP loss, whereas OCT error remained constant. The linear model with variability (LMV) described much of the variability in the data. However, 12.5% of the patients' points fell outside the 95% boundary. An examination of these points revealed factors that can contribute to the overall variability in the data. These factors include epiretinal membranes, edema, individual variation in field-to-disc mapping, and the location of blood vessels and degree to which they are included by the RNFL algorithm. The model and the partitioning of within- versus between-individual variability helped elucidate the factors contributing to the considerable variability in the structure-versus-function data.

  19. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique.

    PubMed

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young

    2014-03-01

    This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.
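
    A brief sketch of the accuracy and precision summaries used above (mean difference, Bland-Altman limits of agreement, relative measurement error) for paired measurements; the numbers are illustrative, not study data.

      import numpy as np

      # Paired linear measurements (mm) on plaster vs. PUT models; values are illustrative.
      plaster = np.array([35.10, 42.25, 27.80, 51.40, 33.05, 46.70])
      put = np.array([35.32, 42.05, 28.01, 51.18, 33.30, 46.95])

      diff = put - plaster
      bias = diff.mean()                                 # accuracy: mean difference
      loa = 1.96 * diff.std(ddof=1)                      # Bland-Altman 95% limits of agreement
      rel_err = np.abs(diff) / plaster * 100.0           # relative measurement error (%)

      print(f"mean difference {bias:+.2f} mm, limits of agreement "
            f"{bias - loa:.2f} to {bias + loa:.2f} mm")
      print(f"relative error {rel_err.mean():.1f}% (range {rel_err.min():.1f}-{rel_err.max():.1f}%)")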

  20. Multitrait, Random Regression, or Simple Repeatability Model in High-Throughput Phenotyping Data Improve Genomic Prediction for Wheat Grain Yield.

    PubMed

    Sun, Jin; Rutkoski, Jessica E; Poland, Jesse A; Crossa, José; Jannink, Jean-Luc; Sorrells, Mark E

    2017-07-01

    High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat (Triticum aestivum L.) grain yield across time. Incorporating such secondary traits in the multivariate pedigree and genomic prediction models would be desirable to improve indirect selection for grain yield. In this study, we evaluated three statistical models, simple repeatability (SR), multitrait (MT), and random regression (RR), for the longitudinal data of secondary traits and compared the impact of the proposed models for secondary traits on their predictive abilities for grain yield. Grain yield and secondary traits, canopy temperature (CT) and normalized difference vegetation index (NDVI), were collected in five diverse environments for 557 wheat lines with available pedigree and genomic information. A two-stage analysis was applied for pedigree and genomic selection (GS). First, secondary traits were fitted by SR, MT, or RR models, separately, within each environment. Then, best linear unbiased predictions (BLUPs) of secondary traits from the above models were used in the multivariate prediction models to compare predictive abilities for grain yield. Predictive ability was substantially improved by 70%, on average, from multivariate pedigree and genomic models when including secondary traits in both training and test populations. Additionally, (i) predictive abilities slightly varied for MT, RR, or SR models in this data set, (ii) results indicated that including BLUPs of secondary traits from the MT model was the best in severe drought, and (iii) the RR model was slightly better than SR and MT models under drought environment. Copyright © 2017 Crop Science Society of America.

  1. Differences in boldness are repeatable and heritable in a long-lived marine predator

    PubMed Central

    Patrick, Samantha C; Charmantier, Anne; Weimerskirch, Henri

    2013-01-01

    Animal personalities, composed of axes of consistent individual behaviors, are widely reported and can have important fitness consequences. However, despite theoretical predictions that life-history trade-offs may cause and maintain personality differences, our understanding of the evolutionary ecology of personality remains poor, especially in long-lived species where trade-offs and senescence have been shown to be stronger. Furthermore, although much theoretical and empirical work assumes selection shapes variation in personalities, studies exploring the genetic underpinnings of personality traits are rare. Here we study one standard axis of personality, the shy–bold continuum, in a long-lived marine species, the wandering albatross from Possession Island, Crozet, by measuring the behavioral response to a human approach. Using generalized linear mixed models in a Bayesian framework, we show that boldness is highly repeatable and heritable. We also find strong differences in boldness between breeding colonies, which vary in size and density, suggesting birds are shyer in more dense colonies. These results demonstrate that in this seabird population, boldness is both heritable and repeatable and highlights the potential for ecological and evolutionary processes to shape personality traits in species with varying life-history strategies. PMID:24340172
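
    The study estimates repeatability and heritability with Bayesian generalized linear mixed models; the simpler Gaussian random-intercept sketch below shows how repeatability is computed from the among- and within-individual variance components, using simulated boldness scores.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(7)
      n_birds, n_trials = 60, 4
      bird_effect = rng.normal(0.0, 1.0, n_birds)                # consistent individual differences
      df = pd.DataFrame({
          "bird": np.repeat(np.arange(n_birds), n_trials),
          "boldness": np.repeat(bird_effect, n_trials) + rng.normal(0.0, 0.8, n_birds * n_trials),
      })

      m = smf.mixedlm("boldness ~ 1", df, groups=df["bird"]).fit()
      var_between = float(m.cov_re.iloc[0, 0])                   # among-individual variance
      var_within = m.scale                                       # residual (within-individual) variance
      repeatability = var_between / (var_between + var_within)
      print(f"repeatability R = {repeatability:.2f}")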

  2. Differences in boldness are repeatable and heritable in a long-lived marine predator.

    PubMed

    Patrick, Samantha C; Charmantier, Anne; Weimerskirch, Henri

    2013-11-01

    Animal personalities, composed of axes of consistent individual behaviors, are widely reported and can have important fitness consequences. However, despite theoretical predictions that life-history trade-offs may cause and maintain personality differences, our understanding of the evolutionary ecology of personality remains poor, especially in long-lived species where trade-offs and senescence have been shown to be stronger. Furthermore, although much theoretical and empirical work assumes selection shapes variation in personalities, studies exploring the genetic underpinnings of personality traits are rare. Here we study one standard axis of personality, the shy-bold continuum, in a long-lived marine species, the wandering albatross from Possession Island, Crozet, by measuring the behavioral response to a human approach. Using generalized linear mixed models in a Bayesian framework, we show that boldness is highly repeatable and heritable. We also find strong differences in boldness between breeding colonies, which vary in size and density, suggesting birds are shyer in more dense colonies. These results demonstrate that in this seabird population, boldness is both heritable and repeatable and highlights the potential for ecological and evolutionary processes to shape personality traits in species with varying life-history strategies.

  3. Escaping the snare of chronological growth and launching a free curve alternative: general deviance as latent growth model.

    PubMed

    Wood, Phillip Karl; Jackson, Kristina M

    2013-08-01

    Researchers studying longitudinal relationships among multiple problem behaviors sometimes characterize autoregressive relationships across constructs as indicating "protective" or "launch" factors or as "developmental snares." These terms are used to indicate that initial or intermediary states of one problem behavior subsequently inhibit or promote some other problem behavior. Such models are contrasted with models of "general deviance" over time in which all problem behaviors are viewed as indicators of a common linear trajectory. When fit of the "general deviance" model is poor and fit of one or more autoregressive models is good, this is taken as support for the inhibitory or enhancing effect of one construct on another. In this paper, we argue that researchers consider competing models of growth before comparing deviance and time-bound models. Specifically, we propose use of the free curve slope intercept (FCSI) growth model (Meredith & Tisak, 1990) as a general model to typify change in a construct over time. The FCSI model includes, as nested special cases, several statistical models often used for prospective data, such as linear slope intercept models, repeated measures multivariate analysis of variance, various one-factor models, and hierarchical linear models. When considering models involving multiple constructs, we argue the construct of "general deviance" can be expressed as a single-trait multimethod model, permitting a characterization of the deviance construct over time without requiring restrictive assumptions about the form of growth over time. As an example, prospective assessments of problem behaviors from the Dunedin Multidisciplinary Health and Development Study (Silva & Stanton, 1996) are considered and contrasted with earlier analyses of Hussong, Curran, Moffitt, and Caspi (2008), which supported launch and snare hypotheses. For antisocial behavior, the FCSI model fit better than other models, including the linear chronometric growth curve model used by Hussong et al. For models including multiple constructs, a general deviance model involving a single trait and multimethod factors (or a corresponding hierarchical factor model) fit the data better than either the "snares" alternatives or the general deviance model previously considered by Hussong et al. Taken together, the analyses support the view that linkages and turning points cannot be contrasted with general deviance models absent additional experimental intervention or control.

  4. Escaping the snare of chronological growth and launching a free curve alternative: General deviance as latent growth model

    PubMed Central

    WOOD, PHILLIP KARL; JACKSON, KRISTINA M.

    2014-01-01

    Researchers studying longitudinal relationships among multiple problem behaviors sometimes characterize autoregressive relationships across constructs as indicating “protective” or “launch” factors or as “developmental snares.” These terms are used to indicate that initial or intermediary states of one problem behavior subsequently inhibit or promote some other problem behavior. Such models are contrasted with models of “general deviance” over time in which all problem behaviors are viewed as indicators of a common linear trajectory. When fit of the “general deviance” model is poor and fit of one or more autoregressive models is good, this is taken as support for the inhibitory or enhancing effect of one construct on another. In this paper, we argue that researchers consider competing models of growth before comparing deviance and time-bound models. Specifically, we propose use of the free curve slope intercept (FCSI) growth model (Meredith & Tisak, 1990) as a general model to typify change in a construct over time. The FCSI model includes, as nested special cases, several statistical models often used for prospective data, such as linear slope intercept models, repeated measures multivariate analysis of variance, various one-factor models, and hierarchical linear models. When considering models involving multiple constructs, we argue the construct of “general deviance” can be expressed as a single-trait multimethod model, permitting a characterization of the deviance construct over time without requiring restrictive assumptions about the form of growth over time. As an example, prospective assessments of problem behaviors from the Dunedin Multidisciplinary Health and Development Study (Silva & Stanton, 1996) are considered and contrasted with earlier analyses of Hussong, Curran, Moffitt, and Caspi (2008), which supported launch and snare hypotheses. For antisocial behavior, the FCSI model fit better than other models, including the linear chronometric growth curve model used by Hussong et al. For models including multiple constructs, a general deviance model involving a single trait and multimethod factors (or a corresponding hierarchical factor model) fit the data better than either the “snares” alternatives or the general deviance model previously considered by Hussong et al. Taken together, the analyses support the view that linkages and turning points cannot be contrasted with general deviance models absent additional experimental intervention or control. PMID:23880389

  5. Experimental Structural Dynamic Response of Plate Specimens Due to Sonic Loads in a Progressive Wave Tube

    NASA Technical Reports Server (NTRS)

    Betts, Juan F.

    2001-01-01

    The objective of the current study was to assess the repeatability of experiments at NASA Langley's Thermal Acoustic Fatigue Apparatus (TAFA) facility and to use these experiments to validate numerical models. Experiments show that power spectral density (PSD) curves were repeatable except at the resonant frequencies, which tended to vary between 5 Hz and 15 Hz. Results show that the thinner specimen had more variability in the resonant frequency location than the thicker sample, especially for modes higher than the first mode in the frequency range. Root Mean Square (RMS) responses tended to be more repeatable. The RMS behaved linearly through the SPL range of 135 to 153 dB. Standard Deviations (STDs) of the results tended to be relatively low and constant up to about 147 dB. The RMS results were more repeatable than the PSD results. The STD results were less than 10% of the RMS results for both the 0.125 in (0.318 cm) and 0.062 in (0.1588 cm) thick plates. The STDs of the PSD results were around 20% to 100% of the mean PSD results for non-resonant and resonant frequencies, respectively, for the 0.125 in (0.318 cm) thicker plate and between 25% and 125% of the mean PSD results, for nonresonant and resonant frequencies, respectively, for the thinner plate.
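
    A short sketch of the repeatability summaries used above: RMS and Welch PSD estimates are computed for several repeat runs of a synthetic response whose resonance shifts a few hertz between runs, and their run-to-run standard deviations are compared.

      import numpy as np
      from scipy import signal

      rng = np.random.default_rng(11)
      fs, n, runs = 5000.0, 20000, 5                     # sampling rate (Hz), samples, repeat runs
      t = np.arange(n) / fs
      responses = []
      for _ in range(runs):                              # synthetic responses with a resonance
          f_res = 290.0 + rng.uniform(-7.5, 7.5)         # that shifts 5-15 Hz from run to run
          x = np.sin(2 * np.pi * f_res * t) * rng.uniform(0.9, 1.1) + 0.3 * rng.standard_normal(n)
          responses.append(x)

      rms = np.array([np.sqrt(np.mean(x ** 2)) for x in responses])
      psds = np.array([signal.welch(x, fs=fs, nperseg=2048)[1] for x in responses])

      print(f"RMS: mean {rms.mean():.3f}, run-to-run std {rms.std(ddof=1) / rms.mean() * 100:.1f}% of mean")
      print(f"PSD: median run-to-run std {np.median(psds.std(axis=0, ddof=1) / psds.mean(axis=0)) * 100:.0f}% of mean")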

  6. Evolution of Protein Domain Repeats in Metazoa

    PubMed Central

    Schüler, Andreas; Bornberg-Bauer, Erich

    2016-01-01

    Repeats are ubiquitous elements of proteins and they play important roles in cellular function and during evolution. Repeats are, however, also notoriously difficult to capture computationally, and large-scale studies have so far had difficulties in linking genetic causes, structural properties and evolutionary trajectories of protein repeats. Here we apply recently developed methods for repeat detection and analysis to a large dataset comprising over a hundred metazoan genomes. We find that repeats in larger protein families experience generally very few insertions or deletions (indels) of repeat units but there is also a significant fraction of noteworthy volatile outliers with very high indel rates. Analysis of structural data indicates that repeats with an open structure and independently folding units are more volatile and more likely to be intrinsically disordered. Such disordered repeats are also significantly enriched in sites with a high functional potential such as linear motifs. Furthermore, the most volatile repeats have a high sequence similarity between their units. Since many volatile repeats also show signs of recombination, we conclude they are often shaped by concerted evolution. Intriguingly, many of these conserved yet volatile repeats are involved in host-pathogen interactions where they might foster fast but subtle adaptation in biological arms races. PMID:27671125

  7. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  8. A model for large amplitude oscillations of coated bubbles accounting for buckling and rupture

    NASA Astrophysics Data System (ADS)

    Marmottant, Philippe; van der Meer, Sander; Emmer, Marcia; Versluis, Michel; de Jong, Nico; Hilgenfeldt, Sascha; Lohse, Detlef

    2005-12-01

    We present a model applicable to ultrasound contrast agent bubbles that takes into account the physical properties of a lipid monolayer coating on a gas microbubble. Three parameters describe the properties of the shell: a buckling radius, the compressibility of the shell, and a break-up shell tension. The model presents an original non-linear behavior at large amplitude oscillations, termed compression-only, induced by the buckling of the lipid monolayer. This prediction is validated by experimental recordings with the high-speed camera Brandaris 128, operated at several millions of frames per second. The effect of aging, or the resultant of repeated acoustic pressure pulses on bubbles, is predicted by the model. It corrects a flaw in the shell elasticity term previously used in the dynamical equation for coated bubbles. The break-up is modeled by a critical shell tension above which gas is directly exposed to water.
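
    The effective surface tension in this type of coated-bubble model is a piecewise function of radius: zero in the buckled state, elastic between the buckling radius and break-up, and the clean gas-water value after rupture. The parameter values in the sketch below are illustrative, not fitted values from the paper.

      import numpy as np

      def shell_tension(r, r_buckle, chi, sigma_break, sigma_water=0.073):
          # Piecewise effective surface tension (N/m) for a lipid-coated bubble:
          # zero when buckled, elastic between buckling and break-up, and the
          # gas-water value once the shell has ruptured.
          sigma_elastic = chi * ((r / r_buckle) ** 2 - 1.0)
          sigma = np.where(r <= r_buckle, 0.0, sigma_elastic)
          return np.where(sigma_elastic > sigma_break, sigma_water, sigma)

      r = np.linspace(0.9e-6, 1.2e-6, 7)                # bubble radius (m)
      # chi (shell elasticity) and sigma_break are illustrative placeholder values.
      print(shell_tension(r, r_buckle=1.0e-6, chi=1.0, sigma_break=0.13))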

  9. Wavelet-based functional linear mixed models: an application to measurement error-corrected distributed lag models.

    PubMed

    Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A

    2010-07-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.

  10. Analysis of repeat-mediated deletions in the mitochondrial genome of Saccharomyces cerevisiae.

    PubMed

    Phadnis, Naina; Sia, Rey A; Sia, Elaine A

    2005-12-01

    Mitochondrial DNA deletions and point mutations accumulate in an age-dependent manner in mammals. The mitochondrial genome in aging humans often displays a 4977-bp deletion flanked by short direct repeats. Additionally, direct repeats flank two-thirds of the reported mitochondrial DNA deletions. The mechanism by which these deletions arise is unknown, but direct-repeat-mediated deletions involving polymerase slippage, homologous recombination, and nonhomologous end joining have been proposed. We have developed a genetic reporter to measure the rate at which direct-repeat-mediated deletions arise in the mitochondrial genome of Saccharomyces cerevisiae. Here we analyze the effect of repeat size and heterology between repeats on the rate of deletions. We find that the dependence on homology for repeat-mediated deletions is linear down to 33 bp. Heterology between repeats does not affect the deletion rate substantially. Analysis of recombination products suggests that the deletions are produced by at least two different pathways, one that generates only deletions and one that appears to generate both deletions and reciprocal products of recombination. We discuss how this reporter may be used to identify the proteins in yeast that have an impact on the generation of direct-repeat-mediated deletions.

  11. Analysis of Repeat-Mediated Deletions in the Mitochondrial Genome of Saccharomyces cerevisiae

    PubMed Central

    Phadnis, Naina; Sia, Rey A.; Sia, Elaine A.

    2005-01-01

    Mitochondrial DNA deletions and point mutations accumulate in an age-dependent manner in mammals. The mitochondrial genome in aging humans often displays a 4977-bp deletion flanked by short direct repeats. Additionally, direct repeats flank two-thirds of the reported mitochondrial DNA deletions. The mechanism by which these deletions arise is unknown, but direct-repeat-mediated deletions involving polymerase slippage, homologous recombination, and nonhomologous end joining have been proposed. We have developed a genetic reporter to measure the rate at which direct-repeat-mediated deletions arise in the mitochondrial genome of Saccharomyces cerevisiae. Here we analyze the effect of repeat size and heterology between repeats on the rate of deletions. We find that the dependence on homology for repeat-mediated deletions is linear down to 33 bp. Heterology between repeats does not affect the deletion rate substantially. Analysis of recombination products suggests that the deletions are produced by at least two different pathways, one that generates only deletions and one that appears to generate both deletions and reciprocal products of recombination. We discuss how this reporter may be used to identify the proteins in yeast that have an impact on the generation of direct-repeat-mediated deletions. PMID:16157666

  12. Comparison of three portable instruments to measure compression pressure.

    PubMed

    Partsch, H; Mosti, G

    2010-10-01

    Measurement of interface pressure between the skin and a compression device has gained practical importance not only for characterizing the efficacy of different compression products in physiological and clinical studies but also for the training of medical staff. A newly developed portable pneumatic pressure transducer (Picopress®) was compared with two established systems (Kikuhime® and SIGaT tester®) measuring linearity, variability and accuracy on a cylindrical model using a stepwise inflated sphygmomanometer as the reference. In addition the variation coefficients were measured by applying the transducers repeatedly under a blood pressure cuff on the distal lower leg of a healthy human subject with stepwise inflation. In the pressure range between 10 and 80 mmHg all three devices showed a linear association compared with the sphygmomanometer values (Pearson r>0.99). The best reproducibility (variation coefficients between 1.05-7.4%) and the highest degree of accuracy demonstrated by Bland-Altman plots was achieved with the Picopress® transducer. Repeated measurements of pressure in a human leg revealed average variation coefficients for the three devices of 4.17% (Kikuhime®), 8.52% (SIGaT®) and 2.79% (Picopress®). The results suggest that the Picopress® transducer, which also allows dynamic pressure tracing in connection with a software program and which may be left under a bandage for several days, is a reliable instrument for measuring the pressure under a compression device.

  13. Use of experimental design for optimisation of the cold plasma ICP-MS determination of lithium, aluminum and iron in soft drinks and alcoholic beverages.

    PubMed

    Bianchi, F; Careri, M; Maffini, M; Mangia, A; Mucchino, C

    2003-01-01

    A sensitive method for the simultaneous determination of (7)Li, (27)Al and (56)Fe by cold plasma ICP-MS was developed and validated. Experimental design was used to investigate the effects of torch position, torch power, lens 2 voltage, and coolant flow. Regression models and desirability functions were applied to find the experimental conditions providing the highest global sensitivity in a multi-elemental analysis. Validation was performed in terms of limits of detection (LOD), limits of quantitation (LOQ), linearity and precision. LODs were 1.4 and 159 ng L(-1) for (7)Li and (56)Fe, respectively; the highest LOD found being that for (27)Al (425 ng L(-1)). Linear ranges of 5 orders of magnitude for Li and 3 orders for Fe were statistically verified for each compound. Precision was evaluated by testing two concentration levels, and good results in terms of both intra-day repeatability and intermediate precision were obtained. RSD values lower than 4.8% at the lowest concentration level were calculated for intra-day repeatability. Commercially available soft drinks and alcoholic beverages contained in different packaging materials (TetraPack, polyethylene terephthalate (PET), commercial cans and glass) were analysed, and all the analytes were detected and quantitated. Copyright 2002 John Wiley & Sons, Ltd.

  14. Ionic pH and glucose sensors fabricated using hydrothermal ZnO nanostructures

    NASA Astrophysics Data System (ADS)

    Wang, Jyh-Liang; Yang, Po-Yu; Hsieh, Tsang-Yen; Juan, Pi-Chun

    2016-01-01

    Hydrothermally synthesized aluminum-doped ZnO (AZO) nanostructures have been adopted in extended-gate field-effect transistor (EGFET) sensors to demonstrate the sensitive and stable pH and glucose sensing characteristics of AZO-nanostructured EGFET sensors. The AZO-nanostructured EGFET sensors exhibited the following superior pH sensing characteristics: a high current sensitivity of 0.96 µA^1/2/pH, a high linearity of 0.9999, less distortion of output waveforms, a small hysteresis width of 4.83 mV, good long-term repeatability, and a wide sensing range from pH 1 to pH 13. The glucose sensing characteristics of AZO-nanostructured biosensors exhibited the desired sensitivity of 60.5 µA·cm^-2·mM^-1 and a linearity of 0.9996 up to 13.9 mM. The attractive characteristics of high sensitivity, high linearity, and repeatability of using ionic AZO-nanostructured EGFET sensors indicate their potential use as electrochemical and disposable biosensors.

  15. Analysis of anabolic steroids in urine by gas chromatography-microchip atmospheric pressure photoionization-mass spectrometry with chlorobenzene as dopant.

    PubMed

    Hintikka, Laura; Haapala, Markus; Kuuranne, Tiia; Leinonen, Antti; Kostiainen, Risto

    2013-10-18

    A gas chromatography-microchip atmospheric pressure photoionization-tandem mass spectrometry (GC-μAPPI-MS/MS) method was developed for the analysis of anabolic androgenic steroids in urine as their trimethylsilyl derivatives. The method utilizes a heated nebulizer microchip in atmospheric pressure photoionization mode (μAPPI) with chlorobenzene as dopant, which provides high ionization efficiency by producing abundant radical cations with minimal fragmentation. The performance of GC-μAPPI-MS/MS was evaluated with respect to repeatability, linearity, linear range, and limit of detection (LOD). The results confirmed the potential of the method for doping control analysis of anabolic steroids. Repeatability (RSD<10%), linearity (R(2)≥0.996) and sensitivity (LODs 0.05-0.1ng/mL) were acceptable. Quantitative performance of the method was tested and compared with that of conventional GC-electron ionization-MS, and the results were in good agreement. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Cognitive load in distributed and massed practice in virtual reality mastoidectomy simulation.

    PubMed

    Andersen, Steven Arild Wuyts; Mikkelsen, Peter Trier; Konge, Lars; Cayé-Thomasen, Per; Sørensen, Mads Sølvsten

    2016-02-01

    Cognitive load theory states that working memory is limited. This has implications for learning and suggests that reducing cognitive load (CL) could promote learning and skills acquisition. This study aims to explore the effect of repeated practice and simulator-integrated tutoring on CL in virtual reality (VR) mastoidectomy simulation. Prospective trial. Forty novice medical students performed 12 repeated virtual mastoidectomy procedures in the Visible Ear Simulator: 21 completed distributed practice with practice blocks spaced in time, and 19 participants completed massed practice (all practices performed in 1 day). Participants were randomized for tutoring with the simulator-integrated tutor function. Cognitive load was estimated by measuring reaction time in a secondary task. Data were analyzed using linear mixed models for repeated measurements. The mean reaction time increased by 37% during the procedure compared with baseline, demonstrating that the procedure placed substantial cognitive demands on the participants. Repeated practice significantly lowered CL in the distributed practice group but not in the massed practice group. In addition, CL was found to be further increased by 10.3% in the later and more complex stages of the procedure. The simulator-integrated tutor function did not have an impact on CL. Distributed practice decreased CL in repeated VR mastoidectomy training more consistently than was seen in massed practice. This suggests a possible effect of skills and memory consolidation occurring over time. To optimize technical skills learning, training should be organized as time-distributed practice rather than as a massed block of practice, which is common in skills-training courses. N/A. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  17. Longitudinal Assessment of Effort-Reward Imbalance and Job Strain Across Pregnancy: A Preliminary Study.

    PubMed

    Meyer, John D; Muntaner, Carles; O'Campo, Patricia; Warren, Nicolas

    2016-07-01

    To assess longitudinal changes in occupational effort-reward imbalance (ERI) and demand-control (DC) scores across pregnancy and examine associations with blood pressure (BP) during pregnancy. A pilot repeated-measures survey was administered four times to a sample of working women across pregnancy using the ERI and DC instruments. Demographic data and blood pressure measurements were collected at each interval. Growth mixture modeling was used to examine trajectories of change in occupational characteristics. Associations with BP were examined using repeated-measures linear regression models. ERI model components (effort, reward, and overcommitment) all declined across pregnancy while job control remained stable. Increasing ERI trajectory was associated with higher systolic BP (b = 8.8; p < 0.001) as was high overcommitment; declining ERI also showed a lesser association with higher BP. Associations between DC trajectories and BP were much smaller, and non-significant once controlled for overcommitment. Self-assessed efforts, rewards, and overcommitment at work decline across pregnancy in our participants, while job control remains stable. Replication in a more diverse pregnant working population is warranted to confirm these results. These preliminary data suggest that further investigation into the factors that may be linked with improved work psychosocial climate during pregnancy may be useful in order to improve pregnancy outcomes.

  18. Longitudinal assessment of effort-reward imbalance and job strain across pregnancy: A preliminary study

    PubMed Central

    Meyer, John D; Muntaner, Carles; O'Campo, Patricia; Warren, Nicolas

    2016-01-01

    Objectives To assess longitudinal changes in occupational effort-reward imbalance (ERI) and demand-control (DC) scores across pregnancy and examine associations with blood pressure (BP) during pregnancy. Methods A pilot repeated-measures survey was administered four times to a sample of working women across pregnancy using the ERI and DC instruments. Demographic data and blood pressure measurements were collected at each interval. Growth mixture modeling was used to examine trajectories of change in occupational characteristics. Associations with BP were examined using repeated-measures linear regression models. Results ERI model components (effort, reward, and overcommitment) all declined across pregnancy while job control remained stable. Increasing ERI trajectory was associated with higher systolic BP (b=8.8; p<0.001) as was high overcommitment; declining ERI also showed a smaller association with higher BP. Associations between DC trajectories and BP were much smaller, and non-significant once controlled for overcommitment. Conclusions Self-assessed efforts, rewards, and overcommitment at work decline across pregnancy in our participants, while job control remains stable. Replication in a more diverse pregnant working population is warranted to confirm these results. These preliminary data suggest that further investigation into the factors that may be linked with improved work psychosocial climate during pregnancy may be useful in order to improve pregnancy outcomes. PMID:26948376

  19. Sufficient Dimension Reduction for Longitudinally Measured Predictors

    PubMed Central

    Pfeiffer, Ruth M.; Forzani, Liliana; Bura, Efstathia

    2013-01-01

    We propose a method to combine several predictors (markers) that are measured repeatedly over time into a composite marker score without assuming a model and only requiring a mild condition on the predictor distribution. Assuming that the first and second moments of the predictors can be decomposed into a time and a marker component via a Kronecker product structure, that accommodates the longitudinal nature of the predictors, we develop first moment sufficient dimension reduction techniques to replace the original markers with linear transformations that contain sufficient information for the regression of the predictors on the outcome. These linear combinations can then be combined into a score that has better predictive performance than the score built under a general model that ignores the longitudinal structure of the data. Our methods can be applied to either continuous or categorical outcome measures. In simulations we focus on binary outcomes and show that our method outperforms existing alternatives using the AUC, the area under the receiver-operator characteristics (ROC) curve, as a summary measure of the discriminatory ability of a single continuous diagnostic marker for binary disease outcomes. PMID:22161635

  20. Solvers for the Cardiac Bidomain Equations

    PubMed Central

    Vigmond, E.J.; Weber dos Santos, R.; Prassl, A.J.; Deo, M.; Plank, G.

    2010-01-01

    The bidomain equations are widely used for the simulation of electrical activity in cardiac tissue. They are especially important for accurately modelling extracellular stimulation, as evidenced by their prediction of virtual electrode polarization before experimental verification. However, solution of the equations is computationally expensive due to the fine spatial and temporal discretization needed. This limits the size and duration of the problem which can be modeled. Regardless of the specific form into which they are cast, the computational bottleneck becomes the repeated solution of a large, linear system. The purpose of this review is to give an overview of the equations, and the methods by which they have been solved. Of particular note are recent developments in multigrid methods, which have proven to be the most efficient. PMID:17900668
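
    The computational pattern described above, repeatedly solving the same large sparse linear system at every time step, can be sketched generically as below; this is a toy implicit-diffusion example with a single factorization reused across solves, not a bidomain or multigrid implementation.

```python
# Toy illustration of the bottleneck pattern: the system matrix is fixed,
# so it is factorized once and the factorization is reused at every step.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, dt, dx = 2000, 0.01, 0.1
lap = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / dx**2
A = (sp.identity(n) - dt * lap).tocsc()          # fixed system matrix

lu = spla.splu(A)                                 # factorize once
v = np.exp(-np.linspace(-5, 5, n) ** 2)           # initial "potential"
for _ in range(100):                              # repeated solves dominate the cost
    v = lu.solve(v)                               # backward-Euler diffusion step
print(v.max())
```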

  1. Estimating seasonal evapotranspiration from temporal satellite images

    USGS Publications Warehouse

    Singh, Ramesh K.; Liu, Shu-Guang; Tieszen, Larry L.; Suyker, Andrew E.; Verma, Shashi B.

    2012-01-01

    Estimating seasonal evapotranspiration (ET) has many applications in water resources planning and management, including hydrological and ecological modeling. Availability of satellite remote sensing images is limited by the satellite repeat cycle and by cloud cover. This study was conducted to determine the suitability of different methods, namely cubic spline, fixed, and linear, for estimating seasonal ET from temporal remotely sensed images. The Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC) model, in conjunction with the wet METRIC (wMETRIC), a modified version of the METRIC model, was used to estimate ET on the days of satellite overpass using eight Landsat images during the 2001 crop growing season in the Midwest USA. The model-estimated daily ET was in good agreement (R² = 0.91) with the eddy covariance tower-measured daily ET. The standard error of daily ET was 0.6 mm (20%) at three validation sites in Nebraska, USA. There was no statistically significant difference (P > 0.05) among the cubic spline, fixed, and linear methods for computing seasonal (July–December) ET from temporal ET estimates. Overall, the cubic spline resulted in the lowest standard error of 6 mm (1.67%) for seasonal ET. However, further testing of this method for multiple years is necessary to determine its suitability.
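
    A minimal sketch of the three gap-filling approaches named above (cubic spline, fixed, and linear) applied to ET values at hypothetical overpass dates, with the seasonal total obtained by integrating the interpolated daily series; the numbers are illustrative, not the study's data.

```python
# Interpolate daily ET between satellite overpass dates and sum to a
# seasonal total. Overpass dates and ET values are illustrative only.
import numpy as np
from scipy.interpolate import CubicSpline

overpass_doy = np.array([180, 196, 212, 228, 244, 260, 276, 292])  # day of year
et_overpass  = np.array([5.2, 6.0, 6.4, 5.8, 4.9, 3.5, 2.1, 1.0])  # mm/day

days = np.arange(overpass_doy[0], overpass_doy[-1] + 1)

et_linear = np.interp(days, overpass_doy, et_overpass)          # linear method
et_spline = CubicSpline(overpass_doy, et_overpass)(days)        # cubic spline method
# "Fixed" method: hold each overpass value constant until the next overpass.
idx = np.searchsorted(overpass_doy, days, side="right") - 1
et_fixed = et_overpass[idx]

for name, et in [("linear", et_linear), ("spline", et_spline), ("fixed", et_fixed)]:
    print(name, round(float(np.trapz(et, days)), 1), "mm")       # seasonal ET
```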

  2. Single isotope evaluation of pulmonary capillary protein leak (ARDS model) using computerized gamma scintigraphy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tatum, J.L.; Strash, A.M.; Sugerman, H.J.

    Using a canine oleic acid model, a computerized gamma scintigraphic technique was evaluated to determine 1) ability to detect pulmonary capillary protein leak in a model temporally consistent with clinical adult respiratory distress syndrome (ARDS), 2) the possibility of providing a quantitative index of leak, and 3) the feasibility of closely spaced repeat evaluations. Study animals received oleic acid (controls, n = 10; 0.05 ml/kg, n = 10; 0.10 ml/kg, n = 12; 0.15 ml/kg, n = 6) 3 hours prior to a tracer dose of technetium-99m (99mTc) HSA. One animal in each dose group also received two repeat tracer injections spaced a minimum of 45 minutes apart. Digital images were obtained with a conventional gamma camera interfaced to a dedicated medical computer. Lung:heart ratio versus time curves were generated, and a slope index was calculated for each curve. Slope index values for all doses were significantly greater than control values (P(t) < 0.0001). Each incremental dose increase was also significantly greater than the previous dose level. Oleic acid dose versus slope index fitted a linear regression model with r = 0.94. Repeat dosing produced index values with standard deviations less than the group sample standard deviations. We feel this technique may have application in the clinical study of pulmonary permeability edema.
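
    A hedged sketch of the two regression steps described above, using made-up numbers: a slope index fitted to a lung:heart ratio versus time curve, followed by a linear regression of slope index on oleic acid dose.

```python
# (1) slope index from a lung:heart ratio-versus-time curve, and
# (2) linear fit of slope index against oleic acid dose. Values are made up.
import numpy as np
from scipy.stats import linregress

t = np.arange(0, 30, 5)                        # minutes after tracer injection
lung_heart_ratio = 0.80 + 0.012 * t + np.random.default_rng(1).normal(0, 0.01, t.size)
slope_index = linregress(t, lung_heart_ratio).slope

dose  = np.array([0.00, 0.05, 0.10, 0.15])     # ml/kg oleic acid (group means)
index = np.array([0.001, 0.006, 0.013, 0.019]) # hypothetical group slope indices
fit = linregress(dose, index)
print(round(slope_index, 4), round(fit.rvalue, 3))
```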

  3. Training artificial neural networks directly on the concordance index for censored data using genetic algorithms.

    PubMed

    Kalderstam, Jonas; Edén, Patrik; Bendahl, Pär-Ola; Strand, Carina; Fernö, Mårten; Ohlsson, Mattias

    2013-06-01

    The concordance index (c-index) is the standard way of evaluating the performance of prognostic models in the presence of censored data. Constructing prognostic models using artificial neural networks (ANNs) is commonly done by training on error functions which are modified versions of the c-index. Our objective was to demonstrate the capability of training directly on the c-index and to evaluate our approach compared to the Cox proportional hazards model. We constructed a prognostic model using an ensemble of ANNs which were trained using a genetic algorithm. The individual networks were trained on a non-linear artificial data set divided into a training and test set both of size 2000, where 50% of the data was censored. The ANNs were also trained on a data set consisting of 4042 patients treated for breast cancer spread over five different medical studies, 2/3 used for training and 1/3 used as a test set. A Cox model was also constructed on the same data in both cases. The two models' c-indices on the test sets were then compared. The ranking performance of the models is additionally presented visually using modified scatter plots. Cross validation on the cancer training set did not indicate any non-linear effects between the covariates. An ensemble of 30 ANNs with one hidden neuron was therefore used. The ANN model had almost the same c-index score as the Cox model (c-index=0.70 and 0.71, respectively) on the cancer test set. Both models identified similarly sized low risk groups with at most 10% false positives, 49 for the ANN model and 60 for the Cox model, but repeated bootstrap runs indicate that the difference was not significant. A significant difference could however be seen when applied on the non-linear synthetic data set. In that case the ANN ensemble managed to achieve a c-index score of 0.90 whereas the Cox model failed to distinguish itself from the random case (c-index=0.49). We have found empirical evidence that ensembles of ANN models can be optimized directly on the c-index. Comparison with a Cox model indicates that near identical performance is achieved on a real cancer data set while on a non-linear data set the ANN model is clearly superior. Copyright © 2013 Elsevier B.V. All rights reserved.
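
    The evaluation metric itself can be written compactly; the sketch below is a naive O(n²) implementation of Harrell's concordance index for right-censored data (it is only the metric, not the genetic-algorithm training described above), with toy inputs.

```python
# Naive O(n^2) Harrell's concordance index for right-censored data:
# higher predicted risk should pair with shorter observed survival.
import numpy as np

def c_index(time, event, risk):
    """time: survival/censoring times; event: 1 if event observed; risk: predicted risk."""
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if subject i had an event before time[j]
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

time  = np.array([5.0, 8.0, 3.0, 9.0, 6.0])
event = np.array([1,   0,   1,   1,   0  ])
risk  = np.array([0.9, 0.3, 0.8, 0.1, 0.4])
print(round(c_index(time, event, risk), 3))
```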

  4. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    PubMed

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodological approaches to repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered, as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results support the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The proportionality of bias and number of measurement occasions should be considered when MLM-UN is used. The good news is that this proportionality can be compensated for by means of large sample sizes. Accordingly, MLM-UN can be recommended even for small sample sizes for about three measurement occasions, and for large sample sizes for about nine measurement occasions.
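
    For readers who want to check sphericity on their own data, the following sketch computes the Greenhouse-Geisser epsilon from the sample covariance matrix of m repeated measures (epsilon is 1 under sphericity and decreases as the violation grows); the data here are simulated and this is not the study's full simulation design.

```python
# Greenhouse-Geisser epsilon from the sample covariance of repeated measures.
import numpy as np

def greenhouse_geisser_epsilon(data):
    """data: n_subjects x m_occasions array of repeated measures."""
    m = data.shape[1]
    S = np.cov(data, rowvar=False)
    C = np.eye(m) - np.ones((m, m)) / m       # centering matrix
    Sc = C @ S @ C                            # double-centered covariance
    return np.trace(Sc) ** 2 / ((m - 1) * np.trace(Sc @ Sc))

rng = np.random.default_rng(0)
n, m = 40, 6
# induce non-sphericity: occasion-specific scales plus a shared subject effect
subject = rng.normal(size=(n, 1))
data = subject + rng.normal(size=(n, m)) * np.linspace(0.5, 3.0, m)
print(round(greenhouse_geisser_epsilon(data), 3))
```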

  5. Measurement of anterior tibial muscle size using real-time ultrasound imaging.

    PubMed

    Martinson, H; Stokes, M J

    1991-01-01

    Cross-sectional images of the anterior tibial muscle group were obtained using real-time ultrasound scanning in 17 normal women. From photographs taken of the images, the cross-sectional area (CSA) and two linear measurements of muscle cross-section were determined. A measurement of the shortest distance of the muscle depth was termed DS, and a measurement of the longest distance through the muscle group was termed DL. Both linear dimensions showed a positive correlation with CSA, and the best correlations were obtained when the dimensions were squared or combined (DS × DL). The correlation values were: CSA vs DS², r = 0.9; CSA vs DL², r = 0.75; and CSA vs DS × DL, r = 0.88. An approximate value for CSA could be calculated from DS² by the equation 2 × DS² + 1. A shape ratio, obtained by dividing DL by DS, was consistent within the group [mean 2.1 (SD 0.2)] and characterised the muscle geometrically. The CSA of repeated scans was assessed for repeatability between-days and between-scans by analysis of variance, and the coefficient of variation (CV) was calculated. Areas were repeatable between-days (CV 6.5%) and between-scans (CV 3.6%). Linear dimensions of the anterior tibial muscle group reflected CSA, and their potential for assessing changes in muscle size with atrophy and hypertrophy has yet to be established.
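
    A small sketch applying the approximation reported above (CSA ≈ 2 × DS² + 1) and a between-scan coefficient of variation; the dimensions and repeated CSA values are hypothetical.

```python
# Approximate CSA from the shortest dimension and a simple CV for repeated scans.
import numpy as np

ds, dl = 2.4, 5.1                      # hypothetical shortest and longest dimensions
csa_approx = 2 * ds**2 + 1             # approximate cross-sectional area
shape_ratio = dl / ds                  # reported to be ~2.1 in the study

repeat_csa = np.array([12.4, 12.9, 12.1])          # CSA from repeated scans
cv = repeat_csa.std(ddof=1) / repeat_csa.mean() * 100
print(round(csa_approx, 1), round(shape_ratio, 2), f"CV = {cv:.1f}%")
```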

  6. Bone conduction responses of middle ear structures in Thiel embalmed heads

    NASA Astrophysics Data System (ADS)

    Arnold, Andreas; Stieger, Christof; Caversaccio, Marco; Kompis, Martin; Guignard, Jérémie

    2015-12-01

    Thiel-embalmed human whole-head specimens offer a promising alternative model for bone conduction (BC) studies of middle ear structures. In this work we present the Thiel model's linearity and stability over time as well as its possible use in the study of a fixed ossicle chain. Using laser Doppler vibrometry (LDV), the motion of the retroauricular skull, the promontory, the stapes footplate and the round window (RW) were measured. A bone-anchored hearing aid stimulated the ears with step sinus tones logarithmically spread between 0.1 and 10 kHz. Linearity of the model was verified using input levels in steps of 10 dBV. The stability of the Thiel model over time was examined with measurements repeated after hours and weeks. The influence of a cement-fixed stapes was assessed. The middle ear elements measured responded linearly in amplitude for the applied input levels (100, 32.6, and 10 mV). The variability of measurements for both short- (2 h) and long-term (4-16 weeks) repetitions in the same ear was lower than the interindividual difference. The fixation of the stapes induced a lowered RW displacement for frequencies near 750 Hz (-4 dB) and an increased displacement for frequencies above 1 kHz (max. +3.7 dB at 4 kHz). LDV assessment of BC-induced middle ear motion in Thiel heads can be performed with stable results. The vibratory RW response is affected by the fixation of the stapes, indicating a measurable effect of ossicle chain inertia on BC response in Thiel embalmed heads.

  7. Chaos theory for clinical manifestations in multiple sclerosis.

    PubMed

    Akaishi, Tetsuya; Takahashi, Toshiyuki; Nakashima, Ichiro

    2018-06-01

    Multiple sclerosis (MS) is a demyelinating disease that characteristically shows irregularly repeated relapses and remissions in the central nervous system. At present, the pathological mechanism of MS is unknown, and we do not have any theories or mathematical models to explain its disseminated patterns in time and space. In this paper, we present a new theoretical model, from the viewpoint of a complex system with a chaos model, to reproduce and explain the non-linear clinical and pathological manifestations of MS. First, we adopted a discrete logistic equation with non-linear dynamics to prepare a scalar quantity for the strength of the pathogenic factor at a specific location of the central nervous system at a specific time, reflecting the negative feedback in immunity. Then, we set distinct minimum thresholds in the above-mentioned scalar quantity for demyelination possibly causing clinical relapses and for cerebral atrophy. With this simple model, we could theoretically reproduce all the subtypes of relapsing-remitting MS, primary progressive MS, and secondary progressive MS. With the chaos theory's sensitivity to initial conditions and to minute changes in parameters, we could also reproduce the spatial dissemination. Such chaotic behavior could be reproduced with other similar upward-convex functions with an appropriate set of initial conditions and parameters. In conclusion, by applying chaos theory to the three-dimensional scalar field of the central nervous system, we can reproduce the non-linear outcome of the clinical course and account for the previously unexplained dissemination in time and space observed in MS patients. Copyright © 2018 Elsevier Ltd. All rights reserved.
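
    The kind of model described can be sketched with a discrete logistic map plus two thresholds (one for relapse-producing demyelination, one for atrophy); the growth parameter, initial condition, and thresholds below are arbitrary choices, not values from the paper.

```python
# Discrete logistic map for the local strength of a pathogenic factor,
# with two thresholds flagging "relapse" and "atrophy" events.
r, x = 3.9, 0.21                 # chaotic regime; arbitrary initial condition
relapse_thr, atrophy_thr = 0.95, 0.85
relapses, atrophy_steps = [], 0

for t in range(200):
    x = r * x * (1.0 - x)        # logistic map: x_{t+1} = r * x_t * (1 - x_t)
    if x > relapse_thr:
        relapses.append(t)       # time steps with a simulated relapse
    if x > atrophy_thr:
        atrophy_steps += 1       # cumulative exposure above the atrophy threshold

print(len(relapses), atrophy_steps)
```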

  8. Using hierarchical linear growth models to evaluate protective mechanisms that mediate science achievement

    NASA Astrophysics Data System (ADS)

    von Secker, Clare Elaine

    The study of students at risk is a major topic of science education policy and discussion. Much research has focused on describing conditions and problems associated with the statistical risk of low science achievement among individuals who are members of groups characterized by problems such as poverty and social disadvantage. But outcomes attributed to these factors do not explain the nature and extent of mechanisms that account for differences in performance among individuals at risk. There is ample theoretical and empirical evidence that demographic differences should be conceptualized as social contexts, or collections of variables, that alter the psychological significance and social demands of life events, and affect subsequent relationships between risk and resilience. The hierarchical linear growth models used in this dissertation provide greater specification of the role of social context and the protective effects of attitude, expectations, parenting practices, peer influences, and learning opportunities on science achievement. While the individual influences of these protective factors on science achievement were small, their cumulative effect was substantial. Meta-analysis conducted on the effects associated with psychological and environmental processes that mediate risk mechanisms in sixteen social contexts revealed twenty-two significant differences between groups of students. Positive attitudes, high expectations, and more intense science course-taking had positive effects on achievement of all students, although these factors were not equally protective in all social contexts. In general, effects associated with authoritative parenting and peer influences were negative, regardless of social context. An evaluation comparing the performance and stability of hierarchical linear growth models with traditional repeated measures models is included as well.

  9. Short-term Natural History of High-Risk Human Papillomavirus Infection in Mid-Adult Women Sampled Monthly (Short title: Short-term HPV Natural History in Mid-Adult Women)

    PubMed Central

    Fu, Tsung-chieh (Jane); Xi, Long Fu; Hulbert, Ayaka; Hughes, James P.; Feng, Qinghua; Schwartz, Stephen M.; Hawes, Stephen E.; Koutsky, Laura A.; Winer, Rachel L.

    2015-01-01

    Characterizing short-term HPV detection patterns and viral load may inform HPV natural history in mid-adult women. From 2011–2012, we recruited women aged 30–50 years. Women submitted monthly self-collected vaginal samples for high-risk HPV DNA testing for 6 months. Positive samples were tested for type-specific HPV DNA load by real-time PCR. HPV type-adjusted linear and Poisson regression assessed factors associated with 1) viral load at initial HPV detection and 2) repeat type-specific HPV detection. One-hundred thirty-nine women (36% of 387 women with ≥4 samples) contributed 243 type-specific HR HPV infections during the study; 54% of infections were prevalent and 46% were incident. Incident (versus prevalent) detection and past pregnancy were associated with lower viral load, whereas current smoking was associated with higher viral load. In multivariate analysis, current smoking was associated with a 40% (95%CI:5%–87%) increase in the proportion of samples that were repeatedly positive for the same HPV type, whereas incident (versus prevalent) detection status and past pregnancy were each associated with a reduction in the proportion of samples repeatedly positive (55%,95%CI:38%–67% and 26%,95%CI:10%–39%, respectively). In a separate multivariate model, each log10 increase in viral load was associated with a 10% (95%CI:4%–16%) increase in the proportion of samples repeatedly positive. Factors associated with repeat HPV detection were similar to those observed in longer-term studies, suggesting that short-term repeat detection may relate to long-term persistence. The negative associations between incident HPV detection and both viral load and repeat detection suggest that reactivation or intermittent persistence was more common than new acquisition. PMID:25976733

  10. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    PubMed Central

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-01-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements. PMID:27112127
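
    The general strategy, bounding a mixture property by optimizing over all compositions consistent with the measurements, can be sketched with two small linear programs; the three candidate species and two measurements below are toy values, not heparan sulfate data or the authors' constraint set.

```python
# Bound a mixture property under measurement constraints: the composition x
# (fractions of candidate species) must reproduce measured totals, and the
# property of interest is bounded by minimizing and maximizing it (two LPs).
import numpy as np
from scipy.optimize import linprog

# Columns: three candidate chain types; rows: two bulk measurements.
A_eq = np.array([[1.0, 1.0, 1.0],     # fractions sum to 1
                 [0.0, 1.0, 2.0]])    # average sulfates per disaccharide
b_eq = np.array([1.0, 1.2])
prop = np.array([0.0, 1.0, 1.0])      # toy property: fraction of sulfated chains

lo = linprog(c=prop,  A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
hi = linprog(c=-prop, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(round(lo.fun, 3), round(-hi.fun, 3))   # lower and upper bound on the property
```

    The gap between the two bounds is one way to express the remaining uncertainty, i.e. how far the measurements go toward characterizing the mixture.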

  11. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate.

    PubMed

    Pradines, Joël R; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-26

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  12. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    NASA Astrophysics Data System (ADS)

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  13. Super-Eddington radiation transfer in soft gamma repeaters

    NASA Technical Reports Server (NTRS)

    Ulmer, Andrew

    1994-01-01

    Bursts from soft gamma repeaters (SGRs) have been shown to be super-Eddington by a factor of 1000 and have been persuasively associated with compact objects. Super-Eddington radiation transfer on the surface of a strongly magnetic (≥10¹³ G) neutron star is studied and related to the observational constraints on SGRs. In strong magnetic fields, Thomson scattering is suppressed in one polarization state, so super-Eddington fluxes can be radiated while the plasma remains in hydrostatic equilibrium. We discuss a model which offers a somewhat natural explanation for the observation that the energy spectra of bursts with varying intensity are similar. The radiation produced is found to be linearly polarized to one part in 1000 in a direction determined by the local magnetic field, and intensity variations between bursts are understood as a change in the radiating area on the source. The net polarization is inversely correlated with burst intensity. Further, it is shown that for radiation transfer calculations in the limit of superstrong magnetic fields, it is sufficient to solve the radiation transfer for the low-opacity state rather than the coupled equations for both. With this approximation, standard stellar atmosphere techniques are utilized to calculate the model energy spectrum.

  14. A Note on Recurring Misconceptions When Fitting Nonlinear Mixed Models.

    PubMed

    Harring, Jeffrey R; Blozis, Shelley A

    2016-01-01

    Nonlinear mixed-effects (NLME) models are used when analyzing continuous repeated measures data taken on each of a number of individuals where the focus is on characteristics of complex, nonlinear individual change. Challenges with fitting NLME models and interpreting analytic results have been well documented in the statistical literature. However, parameter estimates as well as fitted functions from NLME analyses in recent articles have been misinterpreted, suggesting the need for clarification of these issues before these misconceptions become fact. These misconceptions arise from the choice of popular estimation algorithms, namely, the first-order linearization method (FO) and Gaussian-Hermite quadrature (GHQ) methods, and how these choices necessarily lead to population-average (PA) or subject-specific (SS) interpretations of model parameters, respectively. These estimation approaches also affect the fitted function for the typical individual, the lack-of-fit of individuals' predicted trajectories, and vice versa.

  15. History of Asthma From Childhood and Arterial Stiffness in Asymptomatic Young Adults: The Bogalusa Heart Study.

    PubMed

    Sun, Dianjianyi; Li, Xiang; Heianza, Yoriko; Nisa, Hoirun; Shang, Xiaoyun; Rabito, Felicia; Kelly, Tanika; Harville, Emily; Li, Shengxu; He, Jiang; Bazzano, Lydia; Chen, Wei; Qi, Lu

    2018-05-01

    Asthma is related to various cardiovascular risks. Whether a history of asthma from childhood contributes to arterial stiffness in adulthood, a noninvasive surrogate for cardiovascular events, is unknown. Prospective analyses were performed among 1746 Bogalusa Heart Study participants aged 20 to 51 years with data on self-reported asthma collected since childhood. Aorta-femoral pulse wave velocity (af-PWV, m/s) was repeatedly assessed among adults aged ≥18 years. Generalized linear mixed models and generalized linear models were fitted for the repeated measurements of af-PWV and its changes between the last and the first measurements, respectively. After a median follow-up of 11.1 years, participants with a history of asthma from childhood had a higher af-PWV (6.78 versus 6.13; P = 0.048) and a greater increase in af-PWV (8.99 versus 2.95; P = 0.043) than those without asthma, adjusted for age, sex, race, smoking status, heart rate, body mass index, systolic blood pressure, lipids, and glycemia. In addition, we found significant interactions of asthma with body mass index and systolic blood pressure on af-PWV and its changes (P for interaction <0.01). The associations of asthma with af-PWV and its changes appeared to be stronger among participants who were overweight or obese (body mass index ≥25 kg/m²) or had prehypertension or hypertension (systolic blood pressure ≥120 mm Hg) compared with those with a normal body mass index or systolic blood pressure. Our findings indicate that a history of asthma from childhood is associated with higher af-PWV and greater increases in af-PWV, and such associations are stronger among young adults who are overweight or have elevated blood pressure. © 2018 American Heart Association, Inc.

  16. The effects of music on pain perception of stroke patients during upper extremity joint exercises.

    PubMed

    Kim, Soo Ji; Koh, Iljoo

    2005-01-01

    The purpose of this study was to determine the effects of music therapy on pain perception of stroke patients during upper extremity joint exercises. Ten stroke patients (1 male and 9 females) ranging in age from 61 to 73 participated in the study. Music conditions used in the study consisted of: (a) song, (b) karaoke accompaniment (the same music as condition A but without the singers' voices), and (c) no music. Exercise movements in this study included hand, wrist, and shoulder joints. During the 8-week period of music therapy sessions, subjects repeated the 3 conditions in randomized order and rated their perceived pain on a scale immediately after each condition. The General Linear Model (GLM) Repeated Measures ANOVA revealed that there were no significant differences in pain rating across the three music conditions. However, positive affect and verbal responses while performing upper extremity exercises with both the song and the karaoke accompaniment music were observed in the video recordings.
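
    A one-way repeated-measures ANOVA of the type reported above can be sketched with statsmodels' AnovaRM; the pain ratings below are synthetic.

```python
# Repeated-measures ANOVA on pain ratings across three within-subject conditions.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
subjects = np.repeat(np.arange(10), 3)
condition = np.tile(["song", "karaoke", "no_music"], 10)
pain = rng.normal(5, 1, size=30) + (condition == "no_music") * 0.3  # synthetic ratings

df = pd.DataFrame({"subject": subjects, "condition": condition, "pain": pain})
res = AnovaRM(df, depvar="pain", subject="subject", within=["condition"]).fit()
print(res)
```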

  17. An overview of longitudinal data analysis methods for neurological research.

    PubMed

    Locascio, Joseph J; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
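
    A minimal sketch of the advocated mixed random- and fixed-effects approach, with a random intercept and a random slope per subject, using statsmodels; the longitudinal data are simulated.

```python
# Mixed-effects regression of a longitudinal outcome on time, with
# subject-specific random intercepts and slopes. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_visits = 30, 5
subj = np.repeat(np.arange(n_subj), n_visits)
time = np.tile(np.arange(n_visits), n_subj)
u0 = rng.normal(0, 1.0, n_subj)[subj]          # subject-specific intercepts
u1 = rng.normal(0, 0.3, n_subj)[subj]          # subject-specific slopes
y = 10 + 0.5 * time + u0 + u1 * time + rng.normal(0, 0.5, n_subj * n_visits)

df = pd.DataFrame({"subject": subj, "time": time, "y": y})
model = smf.mixedlm("y ~ time", df, groups=df["subject"], re_formula="~time")
print(model.fit().summary())
```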

  18. Repeatability of road pavement condition assessment based on three-dimensional analysis of linear accelerations of vehicles

    NASA Astrophysics Data System (ADS)

    Staniek, Marcin

    2018-05-01

    The article discusses a tool for road pavement condition assessment based on linear acceleration signals recorded at high sampling frequency from typical vehicles traversing the road network under real-life traffic conditions. Specific relationships have been established for the purpose of road pavement condition assessment, including the identification of road sections in poor technical condition. The acquired data have been verified with regard to the repeatability of the estimated road pavement assessment indices. The data make it possible to describe the condition of the road network across the area in which users of the system under development travel. Crucial to the assessment process is the scope of the data set, which is built up from multiple traverses of the road network.

  19. Comparison of current meters used for stream gaging

    USGS Publications Warehouse

    Fulford, Janice M.; Thibodeaux, Kirk G.; Kaehrle, William R.

    1994-01-01

    The U.S. Geological Survey (USGS) is field and laboratory testing the performance of several current meters used throughout the world for stream gaging. Meters tested include horizontal-axis current meters from Germany, the United Kingdom, and the People's Republic of China, and vertical-axis and electromagnetic current meters from the United States. Summarized are laboratory test results for meter repeatability, linearity, and response to oblique flow angles and preliminary field testing results. All current meters tested were found to under- and over-register velocities; errors usually increased as the velocity and angle of the flow increased. Repeatability and linearity of all meters tested were good. In the field tests, horizontal-axis meters, except for the two meters from the People's Republic of China, registered higher velocity than did the vertical-axis meters.

  20. Laboratory- and field-based testing as predictors of skating performance in competitive-level female ice hockey.

    PubMed

    Henriksson, Tommy; Vescovi, Jason D; Fjellman-Wiklund, Anncristine; Gilenstam, Kajsa

    2016-01-01

    The purpose of this study was to examine whether field-based and/or laboratory-based assessments are valid tools for predicting key performance characteristics of skating in competitive-level female hockey players. Cross-sectional study. Twenty-three female ice hockey players aged 15-25 years (body mass: 66.1±6.3 kg; height: 169.5±5.5 cm), with 10.6±3.2 years playing experience, volunteered to participate in the study. The field-based assessments included 20 m sprint, squat jump, countermovement jump, 30-second repeated jump test, standing long jump, single-leg standing long jump, 20 m shuttle run test, isometric leg pull, one-repetition maximum bench press, and one-repetition maximum squats. The laboratory-based assessments included body composition (dual energy X-ray absorptiometry), maximal aerobic power, and isokinetic strength (Biodex). The on-ice tests included agility cornering s-turn, cone agility skate, transition agility skate, and modified repeat skate sprint. Data were analyzed using stepwise multivariate linear regression analysis. Linear regression analysis was used to establish the relationship between key performance characteristics of skating and the predictor variables. Regression models (adjusted R²) for the on-ice variables ranged from 0.244 to 0.663 for the field-based assessments and from 0.136 to 0.420 for the laboratory-based assessments. Single-leg tests were the strongest predictors for key performance characteristics of skating. Single leg standing long jump alone explained 57.1%, 38.1%, and 29.1% of the variance in skating time during transition agility skate, agility cornering s-turn, and modified repeat skate sprint, respectively. Isokinetic peak torque in the quadriceps at 90° explained 42.0% and 32.2% of the variance in skating time during agility cornering s-turn and modified repeat skate sprint, respectively. Field-based assessments, particularly single-leg tests, are an adequate substitute to more expensive and time-consuming laboratory assessments if the purpose is to gain knowledge about key performance characteristics of skating.

  1. Laboratory- and field-based testing as predictors of skating performance in competitive-level female ice hockey

    PubMed Central

    Henriksson, Tommy; Vescovi, Jason D; Fjellman-Wiklund, Anncristine; Gilenstam, Kajsa

    2016-01-01

    Objectives The purpose of this study was to examine whether field-based and/or laboratory-based assessments are valid tools for predicting key performance characteristics of skating in competitive-level female hockey players. Design Cross-sectional study. Methods Twenty-three female ice hockey players aged 15–25 years (body mass: 66.1±6.3 kg; height: 169.5±5.5 cm), with 10.6±3.2 years playing experience volunteered to participate in the study. The field-based assessments included 20 m sprint, squat jump, countermovement jump, 30-second repeated jump test, standing long jump, single-leg standing long jump, 20 m shuttle run test, isometric leg pull, one-repetition maximum bench press, and one-repetition maximum squats. The laboratory-based assessments included body composition (dual energy X-ray absorptiometry), maximal aerobic power, and isokinetic strength (Biodex). The on-ice tests included agility cornering s-turn, cone agility skate, transition agility skate, and modified repeat skate sprint. Data were analyzed using stepwise multivariate linear regression analysis. Linear regression analysis was used to establish the relationship between key performance characteristics of skating and the predictor variables. Results Regression models (adj R2) for the on-ice variables ranged from 0.244 to 0.663 for the field-based assessments and from 0.136 to 0.420 for the laboratory-based assessments. Single-leg tests were the strongest predictors for key performance characteristics of skating. Single leg standing long jump alone explained 57.1%, 38.1%, and 29.1% of the variance in skating time during transition agility skate, agility cornering s-turn, and modified repeat skate sprint, respectively. Isokinetic peak torque in the quadriceps at 90° explained 42.0% and 32.2% of the variance in skating time during agility cornering s-turn and modified repeat skate sprint, respectively. Conclusion Field-based assessments, particularly single-leg tests, are an adequate substitute to more expensive and time-consuming laboratory assessments if the purpose is to gain knowledge about key performance characteristics of skating. PMID:27574474

  2. Bounded influence function based inference in joint modelling of ordinal partial linear model and accelerated failure time model.

    PubMed

    Chakraborty, Arindom

    2016-12-01

    A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and time-to-event data. The ordinal nature of the response and possible missing information on covariates add complications to the joint model. In such circumstances, some influential observations often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used to account for the repeated ordered response and the time-to-event data, respectively. Here, we propose an influence function-based robust estimation method. A Monte Carlo expectation-maximization algorithm is used for parameter estimation. A detailed simulation study has been done to evaluate the performance of the proposed method. As an application, data on muscular dystrophy in children are used. Robust estimates are then compared with classical maximum likelihood estimates. © The Author(s) 2014.

  3. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
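
    The agreement analysis mentioned above (mean difference plus Bland-Altman limits of agreement) reduces to a few lines; the paired measurements below are illustrative, not the study's values.

```python
# Mean difference (bias) and Bland-Altman 95% limits of agreement for paired
# linear measurements from two model types. Numbers are illustrative only.
import numpy as np

plaster = np.array([35.2, 28.4, 41.0, 36.8, 30.1, 44.3])   # mm
put     = np.array([35.4, 28.6, 41.3, 36.6, 30.4, 44.6])   # mm

diff = put - plaster
mean_diff = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {mean_diff:.2f} mm, limits of agreement = "
      f"[{mean_diff - loa:.2f}, {mean_diff + loa:.2f}] mm")
```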

  4. From Three-Photon Greenberger-Horne-Zeilinger States to Ballistic Universal Quantum Computation.

    PubMed

    Gimeno-Segovia, Mercedes; Shadbolt, Pete; Browne, Dan E; Rudolph, Terry

    2015-07-10

    Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. A series of increasingly efficient proposals have shown linear-optical quantum computing to be formally scalable. However, existing schemes typically require extensive adaptive switching, which is experimentally challenging and noisy, thousands of photon sources per renormalized qubit, and/or large quantum memories for repeat-until-success strategies. Our work overcomes all these problems. We present a scheme to construct a cluster state universal for quantum computation, which uses no adaptive switching, no large memories, and which is at least an order of magnitude more resource efficient than previous passive schemes. Unlike previous proposals, it is constructed entirely from loss-detecting gates and offers robustness to photon loss. Even without the use of an active loss-tolerant encoding, our scheme naturally tolerates a total loss rate of ∼1.6% in the photons detected in the gates. This scheme uses only three-photon Greenberger-Horne-Zeilinger states as a resource, together with a passive linear-optical network. We fully describe and model the iterative process of cluster generation, including photon loss and gate failure. This demonstrates that building a linear-optical quantum computer need be less challenging than previously thought.

  5. Application of Linear Mixed-Effects Models in Human Neuroscience Research: A Comparison with Pearson Correlation in Two Auditory Electrophysiology Studies

    PubMed Central

    Koerner, Tess K.; Zhang, Yang

    2017-01-01

    Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages of, as well as the necessity for, applying mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers. PMID:28264422

  6. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iterate is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iterate generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and that may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
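
    A minimal hit-and-run sketch for a convex near-optimal region defined only by a feasibility test is shown below; it uses a simple shrinking bracket along each random direction rather than the authors' null-space transformation and slice-sampling machinery, and the toy region is assumed for illustration.

```python
# Basic hit-and-run sampling of a convex feasible region given a feasibility test.
import numpy as np

rng = np.random.default_rng(0)

def feasible(x):
    # toy near-optimal region: unit ball intersected with x[0] + x[1] <= 1
    return np.dot(x, x) <= 1.0 and x[0] + x[1] <= 1.0

def hit_and_run(x, n_samples, step=2.0):
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)                 # random direction
        lo, hi = -step, step                   # initial bracket along the line
        while True:
            t = rng.uniform(lo, hi)
            if feasible(x + t * d):
                x = x + t * d                  # accept the new hit point
                break
            # shrink the bracket toward 0 (valid for a convex region, since x is feasible)
            if t > 0:
                hi = t
            else:
                lo = t
        samples.append(x.copy())
    return np.array(samples)

pts = hit_and_run(np.zeros(3), 500)
print(pts.mean(axis=0))
```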

  7. Effects of tidal volume and methacholine on low-frequency total respiratory impedance in dogs.

    PubMed

    Lutchen, K R; Jackson, A C

    1990-05-01

    The frequency dependence of respiratory impedance (Zrs) from 0.125 to 4 Hz (Hantos et al., J. Appl. Physiol. 60: 123-132, 1986) may reflect inhomogeneous parallel time constants or the inherent viscoelastic properties of the respiratory tissues. However, studies on the lung alone or chest wall alone indicate that their impedance features are also dependent on the tidal volumes (VT) of the forced oscillations. The goals of this study were 1) to identify how total Zrs at lower frequencies measured with random noise (RN) compared with that measured with larger VT, 2) to identify how Zrs measured with RN is affected by bronchoconstriction, and 3) to identify the impact of using linear models for analyzing such data. We measured Zrs in six healthy dogs by use of an RN technique from 0.125 to 4 Hz or with a ventilator from 0.125 to 0.75 Hz with VT from 50 to 250 ml. Then methacholine was administered and the RN measurement was repeated. Two linear models were fit to each separate set of data. Both models assume uniform airways leading to viscoelastic tissues. For healthy dogs, the respiratory resistance (Rrs) decreased with frequency, with most of the decrease occurring from 0.125 to 0.375 Hz. Significant VT dependence of Rrs was seen only at these lower frequencies, with Rrs higher as VT decreased. The respiratory compliance (Crs) was dependent on VT in a similar fashion at all frequencies, with Crs decreasing as VT decreased. Both linear models fit the data well at all VT, but the viscoelastic parameters of each model were very sensitive to VT. After methacholine, the minimum Rrs increased, as did the total drop with frequency. Nevertheless, the same models fit the data well, and both the airway and tissue parameters were altered after methacholine. We conclude that inferences based only on low-frequency Zrs data are problematic because of the effects of VT on such data (and subsequent linear modeling of it) and the apparent inability of such data to differentiate parallel inhomogeneities from normal viscoelastic properties of the respiratory tissues.

  8. Measurement of compartment elasticity using pressure related ultrasound: a method to identify patients with potential compartment syndrome.

    PubMed

    Sellei, R M; Hingmann, S J; Kobbe, P; Weber, C; Grice, J E; Zimmerman, F; Jeromin, S; Gansslen, A; Hildebrand, F; Pape, H C

    2015-01-01

    PURPOSE OF THE STUDY Decision-making in treatment of an acute compartment syndrome is based on clinical assessment, supported by invasive monitoring. Thus, evolving compartment syndrome may require repeated pressure measurements. In suspected cases of potential compartment syndromes clinical assessment alone seems to be unreliable. The objective of this study was to investigate the feasibility of a non-invasive application estimating whole compartmental elasticity by ultrasound, which may improve accuracy of diagnostics. MATERIAL AND METHODS In an in-vitro model, using an artificial container simulating dimensions of the human anterior tibial compartment, intracompartmental pressures (p) were raised successively up to 80 mm Hg by infusion of saline solution. The compartmental depth (mm) in the cross-section view was measured before and after manual probe compression (100 mm Hg) upon the surface, resulting in a linear compartmental displacement (Δd). This was repeated at rising compartmental pressures. The resulting displacements were related to the corresponding intra-compartmental pressures simulated in our model. A hypothesized relationship between pressure-related compartmental displacement and the elasticity at elevated compartment pressures was investigated. RESULTS With rising compartmental pressures, a non-linear, reciprocal proportional relation between the displacement (mm) and the intra-compartmental pressure (mm Hg) occurred. The Pearson coefficient showed a high correlation (r = -0.960). The intraobserver reliability value kappa resulted in a statistically high reliability (κ = 0.840). The inter-observer value indicated a fair reliability (κ = 0.640). CONCLUSIONS Our model reveals that a strong correlation between compartmental strain displacements assessed by ultrasound and the intra-compartmental pressure changes occurs. Further studies are required to prove whether this assessment is transferable to human muscle tissue. Determining the complete compartmental elasticity by ultrasound enhancement, this application may improve detection of early signs of potential compartment syndrome. Key words: compartment syndrome, intra-compartmental pressure, non-invasive diagnostic, elasticity measurement, elastography.
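
    The reported analysis pattern, correlating compression-induced displacement with intra-compartmental pressure and fitting the reciprocal relation d ≈ a/p + b, can be sketched as follows; the pressures and displacements are made up.

```python
# Correlate displacement with pressure and fit the reciprocal relation d ~ a/p + b.
import numpy as np
from scipy.stats import pearsonr, linregress

pressure = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)   # mm Hg
displacement = np.array([6.1, 3.2, 2.3, 1.7, 1.4, 1.2, 1.0, 0.9])    # mm

r, _ = pearsonr(displacement, pressure)          # negative, non-linear association
fit = linregress(1.0 / pressure, displacement)   # linear in 1/p captures the shape
print(round(r, 3), round(fit.rvalue, 3))
```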

  9. Compartment elasticity measured by pressure-related ultrasound to determine patients "at risk" for compartment syndrome: an experimental in vitro study.

    PubMed

    Sellei, Richard Martin; Hingmann, Simon Johannes; Kobbe, Philipp; Weber, Christian; Grice, John Edward; Zimmerman, Frauke; Jeromin, Sabine; Hildebrand, Frank; Pape, Hans-Christoph

    2015-01-01

    Decision-making in treatment of an acute compartment syndrome is based on clinical assessment, supported by invasive monitoring. Thus, evolving compartment syndrome may require repeated pressure measurements. In suspected cases of potential compartment syndromes clinical assessment alone seems to be unreliable. The objective of this study was to investigate the feasibility of a non-invasive application estimating whole compartmental elasticity by ultrasound, which may improve accuracy of diagnostics. In an in vitro model, using an artificial container simulating dimensions of the human anterior tibial compartment, intra-compartmental pressures (p) were raised successively up to 80 mmHg by infusion of saline solution. The compartmental depth (mm) in the cross-section view was measured before and after manual probe compression (100 mmHg) upon the surface, resulting in a linear compartmental displacement (∆d). This was repeated at rising compartmental pressures. The resulting displacements were related to the corresponding intra-compartmental pressures simulated in our model. A hypothesized relationship between pressure-related compartmental displacement and the elasticity at elevated compartment pressures was investigated. With rising compartmental pressures, a non-linear, reciprocal proportional relation between the displacement (mm) and the intra-compartmental pressure (mmHg) occurred. The Pearson coefficient showed a high correlation (r = -0.960). The intra-observer reliability value kappa resulted in a statistically high reliability (κ = 0.840). The inter-observer value indicated a fair reliability (κ = 0.640). Our model reveals that a strong correlation between compartmental strain displacements assessed by ultrasound and the intra-compartmental pressure changes occurs. Further studies are required to prove whether this assessment is transferable to human muscle tissue. Determining the complete compartmental elasticity by ultrasound enhancement, this application may improve detection of early signs of potential compartment syndrome.

  10. Can the Palatability of Healthy, Satiety-Promoting Foods Increase with Repeated Exposure during Weight Loss?

    PubMed Central

    Anguah, Katherene O.-B.; Lovejoy, Jennifer C.; Craig, Bruce A.; Gehrke, Malinda M.; Palmer, Philip A.; Eichelsdoerfer, Petra E.; McCrory, Megan A.

    2017-01-01

    Repeated exposure to sugary, fatty, and salty foods often enhances their appeal. However, it is unknown if exposure influences learned palatability of foods typically promoted as part of a healthy diet. We tested whether the palatability of pulse-containing foods provided during a weight loss intervention, which were particularly high in fiber and low in energy density, would increase with repeated exposure. At weeks 0, 3, and 6, participants (n = 42; body mass index (BMI) 31.2 ± 4.3 kg/m²) were given a test battery of 28 foods, approximately half of which had been provided as part of the intervention, while the remaining half were not provided as part of the intervention. In addition, about half of the foods in each group (provided or not provided as part of the intervention) contained pulses. Participants rated the taste, appearance, odor, and texture pleasantness of each food, and an overall flavor pleasantness score was calculated as the mean of these four scores. Linear mixed model analyses showed an exposure type by week interaction effect for taste, texture, and overall flavor pleasantness, indicating statistically significant increases in ratings of provided foods in taste and texture from weeks 0 to 3 and 0 to 6, and in overall flavor from weeks 0 to 6. Repeated exposure to these foods, whether they contained pulses or not, resulted in a ~4% increase in pleasantness ratings. The long-term clinical relevance of this small increase requires further study. PMID:28231094

  11. Motor onset and diagnosis in Huntington disease using the diagnostic confidence level.

    PubMed

    Liu, Dawei; Long, Jeffrey D; Zhang, Ying; Raymond, Lynn A; Marder, Karen; Rosser, Anne; McCusker, Elizabeth A; Mills, James A; Paulsen, Jane S

    2015-12-01

    Huntington disease (HD) is a neurodegenerative disorder characterized by motor dysfunction, cognitive deterioration, and psychiatric symptoms, with progressive motor impairments being a prominent feature. The primary objectives of this study are to delineate the disease course of motor function in HD, to provide estimates of the onset of motor impairments and motor diagnosis, and to examine the effects of genetic and demographic variables on the progression of motor impairments. Data from an international multisite, longitudinal observational study of 905 prodromal HD participants with cytosine-adenine-guanine (CAG) repeats of at least 36 and with at least two visits during the follow-up period from 2001 to 2012 were examined for changes in the diagnostic confidence level from the Unified Huntington's Disease Rating Scale. HD progression from unimpaired to impaired motor function, as well as the progression from motor impairment to diagnosis, was associated with the linear effect of age and CAG repeat length. Specifically, for every 1-year increase in age, the risk of transition in diagnostic confidence level increased by 11% (95% CI 7-15%), and for every one-repeat increase in CAG length, the risk of transition in diagnostic confidence level increased by 47% (95% CI 27-69%). Findings show that CAG repeat length and age increased the likelihood of the first onset of motor impairment as well as the age at diagnosis. Results suggest that more accurate estimates of HD onset age can be obtained by incorporating the current status of diagnostic confidence level into predictive models.

  12. Pre-natal exposures to cocaine and alcohol and physical growth patterns to age 8 years

    PubMed Central

    Lumeng, Julie C.; Cabral, Howard J.; Gannon, Katherine; Heeren, Timothy; Frank, Deborah A.

    2007-01-01

    Two hundred and two primarily African American/Caribbean children (classified by maternal report and infant meconium as 38 heavier, 74 lighter and 89 not cocaine-exposed) were measured repeatedly from birth to age 8 years to assess whether there is an independent effect of prenatal cocaine exposure on physical growth patterns. Children with fetal alcohol syndrome identifiable at birth were excluded. At birth, cocaine and alcohol exposures were significantly and independently associated with lower weight, length and head circumference in cross-sectional multiple regression analyses. The relationship over time of pre-natal exposures to weight, height, and head circumference was then examined by multiple linear regression using mixed linear models including covariates: child's gestational age, gender, ethnicity, age at assessment, current caregiver, the birth mother's use of alcohol, marijuana and tobacco during the pregnancy, and her pre-pregnancy weight (for child's weight) and height (for child's height and head circumference). The cocaine effects did not persist beyond infancy in piecewise linear mixed models, but a significant and independent negative effect of pre-natal alcohol exposure persisted for weight, height, and head circumference. Catch-up growth in cocaine-exposed infants occurred primarily by 6 months of age for all growth parameters, with some small fluctuations in growth rates in the preschool age range but no detectable differences thereafter between heavier-exposed and unexposed or between lighter-exposed and unexposed children. PMID:17412558

  13. Deployment Testing of Flexible Composite Hinges in Bi-Material Beams

    NASA Technical Reports Server (NTRS)

    Sauder, Jonathan F.; Trease, Brian

    2016-01-01

    Composites have excellent properties for strength, thermal stability, and weight. However, they are traditionally highly rigid, and when used in deployable structures require hinges bonded to the composite material, which increases complexity and opportunities for failure. Recent research in composites has found that by adding an elastomeric soft matrix, often silicone instead of an epoxy, the composite becomes flexible. This work explores the deployment repeatability of silicone matrix composite hinges that join rigid composite beams. The hinges were found to have sub-millimeter linear deployment repeatability and sub-degree angular deployment repeatability. An interesting relaxation effect was also discovered: a hinge's deployment error would decrease with time.

  14. Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, X.; Li, M.; Xing, L.; Liu, Y.

    2018-04-01

    We design a calibration target that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photographs of the same target. A method for jointly calibrating the 3D laser scanner and the digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. The method adds a digital camera distortion model to the traditional DLT algorithm; after repeated iteration, it solves for the interior and exterior orientation elements of the camera and thereby achieves the joint calibration of the 3D laser scanner and digital camera. Experiments show that the method is reliable.
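
    The abstract's method extends DLT with a camera distortion model and iterates; as a point of reference only, a minimal sketch of the plain DLT step (estimating the 3x4 projection matrix from at least six non-coplanar 3D-2D correspondences, no distortion terms; the function and variable names are ours) might look like this:

        import numpy as np

        def dlt_projection_matrix(points_3d, points_2d):
            """Classical Direct Linear Transformation: estimate the 3x4 projection
            matrix P mapping homogeneous 3D points to image points (u, v)."""
            rows = []
            for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
                rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
                rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
            A = np.asarray(rows, dtype=float)
            # The solution is the right singular vector with the smallest singular value.
            _, _, vt = np.linalg.svd(A)
            return vt[-1].reshape(3, 4)

    The interior and exterior orientation elements can then be recovered by decomposing P, and the distortion model described in the abstract would be estimated in the subsequent iterative refinement.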

  15. On the origin of the photocurrent of electrochemically passivated p-InP(100) photoelectrodes.

    PubMed

    Goryachev, Andrey; Gao, Lu; van Veldhoven, René P J; Haverkort, Jos E M; Hofmann, Jan P; Hensen, Emiel J M

    2018-05-15

    III-V semiconductors such as InP are highly efficient light absorbers for photoelectrochemical (PEC) water splitting devices. Yet, their cathodic stability is limited due to photocorrosion and the measured photocurrents do not necessarily originate from H2 evolution only. We evaluated the PEC stability and activation of model p-InP(100) photocathodes upon photoelectrochemical passivation (i.e. repeated surface oxidation/reduction). The electrode was subjected to a sequence of linear potential scans with or without intermittent passivation steps (repeated passivation and continuous reduction, respectively). The evolution of H2 and PH3 gases was monitored by online electrochemical mass spectrometry (OLEMS) and the Faradaic efficiencies of these processes were determined. Repeated passivation led to an increase of the photocurrent in 0.5 M H2SO4, while continuous reduction did not affect the photocurrent of p-InP(100). Neither H2 nor PH3 formation increased to the same extent as the photocurrent during the repeated passivation treatment. Surface analysis of the spent electrodes revealed substantial roughening of the electrode surface by repeated passivation, while continuous reduction left the surface unaltered. On the other hand, photocathodic conditioning performed in 0.5 M HCl led to the expected correlation between photocurrent increase and H2 formation. Ultimately, the H2 evolution rates of the photoelectrodes in H2SO4 and HCl are comparable. The much higher photocurrent in H2SO4 is due to competing side-reactions. The results emphasize the need for a detailed evaluation of the Faradaic efficiencies of all the involved processes using a chemical-specific technique like OLEMS. Photo-OLEMS can be beneficial in the study of photoelectrochemical reactions enabling the instantaneous detection of small amounts of reaction by-products.

  16. Linear mixed-effects modeling approach to FMRI group analysis

    PubMed Central

    Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.

    2013-01-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that LME modeling strikes a balance between the control of false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. PMID:23376789
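
    The implementation described here is R-based (AFNI's 3dLME); purely as an illustration of a random-intercept LME and the resulting ICC in Python (statsmodels; the file and column names below are hypothetical, and crossed random effects would require variance components or lme4 in R), one might write:

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical long-format table: one effect estimate per subject and session.
        df = pd.read_csv("group_effects.csv")  # columns assumed: subject, session, beta

        # Random intercept per subject; session enters as a fixed effect.
        fit = smf.mixedlm("beta ~ session", data=df, groups=df["subject"]).fit(reml=True)
        print(fit.summary())

        # Intraclass correlation: between-subject variance over total variance.
        var_between = fit.cov_re.iloc[0, 0]
        var_within = fit.scale
        print("ICC =", round(var_between / (var_between + var_within), 3))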

  17. Linear mixed-effects modeling approach to FMRI group analysis.

    PubMed

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that LME modeling strikes a balance between the control of false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. Published by Elsevier Inc.

  18. Full three-body problem in effective-field-theory models of gravity

    NASA Astrophysics Data System (ADS)

    Battista, Emmanuele; Esposito, Giampiero

    2014-10-01

    Recent work in the literature has studied the restricted three-body problem within the framework of effective-field-theory models of gravity. This paper extends such a program by considering the full three-body problem, when the Newtonian potential is replaced by a more general central potential which depends on the mutual separations of the three bodies. The general form of the equations of motion is written down, and they are studied when the interaction potential reduces to the quantum-corrected central potential considered recently in the literature. A recursive algorithm is found for solving the associated variational equations, which describe small departures from given periodic solutions of the equations of motion. Our scheme involves repeated application of a 2×2 matrix of first-order linear differential operators.

  19. Production of plasmas by long-wavelength lasers

    DOEpatents

    Dawson, J.M.

    1973-10-01

    A long-wavelength laser system for heating low-density plasma to high temperatures is described. In one embodiment, means are provided for repeatedly receiving and transmitting long-wavelength laser light in successive stages to form a laser-light beam path that repeatedly intersects with the equilibrium axis of a magnetically confined toroidal plasma column for interacting the laser light with the plasma for providing controlled thermonuclear fusion. Embodiments for heating specific linear plasmas are also provided. (Official Gazette)

  20. Structural and electron diffraction scaling of twisted graphene bilayers

    NASA Astrophysics Data System (ADS)

    Zhang, Kuan; Tadmor, Ellad B.

    2018-03-01

    Multiscale simulations are used to study the structural relaxation in twisted graphene bilayers and the associated electron diffraction patterns. The initial twist forms an incommensurate moiré pattern that relaxes to a commensurate microstructure comprised of a repeating pattern of alternating low-energy AB and BA domains surrounding a high-energy AA domain. The simulations show that the relaxation mechanism involves a localized rotation and shrinking of the AA domains that scales in two regimes with the imposed twist. For small twisting angles, the localized rotation tends to a constant; for large twist, the rotation scales linearly with it. This behavior is tied to the inverse scaling of the moiré pattern size with twist angle and is explained theoretically using a linear elasticity model. The results are validated experimentally through a simulated electron diffraction analysis of the relaxed structures. A complex electron diffraction pattern involving the appearance of weak satellite peaks is predicted for the small twist regime. This new diffraction pattern is explained using an analytical model in which the relaxation kinematics are described as an exponentially-decaying (Gaussian) rotation field centered on the AA domains. Both the angle-dependent scaling and diffraction patterns are in quantitative agreement with experimental observations. A Matlab program for extracting the Gaussian model parameters accompanies this paper.

  1. Precision linear ramp function generator

    DOEpatents

    Jatko, W.B.; McNeilly, D.R.; Thacker, L.H.

    1984-08-01

    A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.

  2. Precision linear ramp function generator

    DOEpatents

    Jatko, W. Bruce; McNeilly, David R.; Thacker, Louis H.

    1986-01-01

    A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.

  3. Strain accumulation in bituminous binders under repeated creep-recovery loading predicted from small-amplitude oscillatory shear (SAOS) experiments

    NASA Astrophysics Data System (ADS)

    Laukkanen, Olli-Ville; Winter, H. Henning

    2017-11-01

    The creep-recovery (CR) test starts out with a period of shearing at constant stress (creep) and is followed by a period of zero-shear stress where some of the accumulated shear strain gets reversed. Linear viscoelasticity (LVE) allows one to predict the strain response to repeated creep-recovery (RCR) loading from measured small-amplitude oscillatory shear (SAOS) data. Only the relaxation and retardation time spectra of a material need to be known and these can be determined from SAOS data. In an application of the Boltzmann superposition principle (BSP), the strain response to RCR loading can be obtained as a linear superposition of the strain response to many single creep-recovery tests. SAOS and RCR data were collected for several unmodified and modified bituminous binders, and the measured and predicted RCR responses were compared. Generally good agreement was found between the measured and predicted strain accumulation under RCR loading. However, in the case of modified binders, the strain accumulation was slightly overestimated (≤20% relative error) due to the insufficient SAOS information at long relaxation times. Our analysis also demonstrates that the evolution in the strain response under RCR loading, caused by incomplete recovery, can be reasonably well predicted by the presented methodology. It was also shown that the outlined modeling framework can be used, as a first approximation, to estimate the rutting resistance of bituminous binders by predicting the values of the Multiple Stress Creep Recovery (MSCR) test parameters.
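
    The prediction step rests on the Boltzmann superposition principle; in standard LVE notation (not quoted from the paper), with J(t) the creep compliance obtained from the retardation spectrum, the strain under stepwise repeated creep-recovery loading is

        \gamma(t) = \int_{0}^{t} J(t - t')\,\frac{\mathrm{d}\sigma(t')}{\mathrm{d}t'}\,\mathrm{d}t'
        \quad\Longrightarrow\quad
        \gamma_{\mathrm{RCR}}(t) = \sigma_0 \sum_{i} \left[ J(t - t_i^{\mathrm{on}}) - J(t - t_i^{\mathrm{off}}) \right],

    where cycle i applies the creep stress sigma_0 at t_i^on and removes it at t_i^off, and terms with negative arguments are taken as zero.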

  4. Selecting a Separable Parametric Spatiotemporal Covariance Structure for Longitudinal Imaging Data

    PubMed Central

    George, Brandon; Aban, Inmaculada

    2014-01-01

    Longitudinal imaging studies allow great insight into how the structure and function of a subject’s internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures, and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on Type I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the Type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be done in practice, as well as how covariance structure choice can change inferences about fixed effects. PMID:25293361
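
    For concreteness, a separable parametric spatiotemporal error structure factors the covariance into temporal and spatial parts (standard definitions; the symbols are generic rather than the authors' notation):

        \mathrm{Cov}(\boldsymbol{\varepsilon}) = \sigma^2\, R_{\mathrm{time}} \otimes R_{\mathrm{space}},
        \qquad
        [R_{\mathrm{space}}]_{kl} = \exp(-d_{kl}/\phi)\ \text{(exponential)},
        \qquad
        [R_{\mathrm{time}}]_{st} = \rho^{|s-t|}\ \text{(AR-1)},

    where d_kl is the distance between imaging locations k and l and the Kronecker product couples the two pieces; swapping in spherical or Matérn spatial and compound-symmetric, Toeplitz, or unstructured temporal correlations gives the other candidate structures compared in the study.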

  5. Extracting harmonic signal from a chaotic background with local linear model

    NASA Astrophysics Data System (ADS)

    Li, Chenlong; Su, Liyun

    2017-02-01

    In this paper, the problems of blind detection and estimation of a harmonic signal in a strong chaotic background are analyzed, and new methods based on the local linear (LL) model are put forward. The LL model has been extensively researched and successfully applied to fitting and forecasting chaotic signals in many fields. We substantially enlarge its modeling capacity. First, we predict the short-term chaotic signal and obtain the fitting error based on the LL model. We then detect the frequencies in the fitting error by periodogram; a previously unaddressed property of the fitting error is proposed, which ensures that the detected frequencies are close to those of the harmonic signal. Second, we establish a two-layer LL model to estimate the deterministic harmonic signal in the strong chaotic background. To perform this estimation simply and effectively, we develop an efficient backfitting algorithm to select and optimize the parameters that are difficult to search exhaustively. In this method, based on the sensitivity of chaotic motion to initial values, the minimum fitting error criterion is used as the objective function to estimate the parameters of the two-layer LL model. Simulations show that the two-layer LL model and its estimation technique have appreciable flexibility for modeling the deterministic harmonic signal in different chaotic backgrounds (Lorenz, Henon and Mackey-Glass (M-G) equations). Specifically, the harmonic signal can be extracted well at low SNR, and the backfitting algorithm converges within 3-5 iterations.

  6. Longitudinal changes in bone lead levels: the VA Normative Aging Study.

    PubMed

    Wilker, Elissa; Korrick, Susan; Nie, Linda H; Sparrow, David; Vokonas, Pantel; Coull, Brent; Wright, Robert O; Schwartz, Joel; Hu, Howard

    2011-08-01

    Bone lead is a cumulative measure of lead exposure that can also be remobilized. We examined repeated measures of bone lead over 11 years to characterize long-term changes and identify predictors of tibia and patella lead stores in an elderly male population. Lead was measured every 3 to 5 years by k-x-ray fluorescence, and mixed-effect models with random effects were used to evaluate change over time. A total of 554 participants provided up to four bone lead measurements. Final models predicted a -1.4% annual decline (95% CI: -2.2 to -0.7) for tibia lead, while a piecewise linear model for patella lead predicted an initial decline of 5.1% per year (95% CI: -6.2 to -3.9) during the first 4.6 years but no significant change thereafter (-0.4% [95% CI: -2.4 to 1.7]). These results suggest that bone lead half-life may be longer than previously reported.
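
    The patella result corresponds to a piecewise (linear-spline) mixed model with a knot near 4.6 years of follow-up; one generic way to write such a model (illustrative notation on the log scale, consistent with the percent-change results but not necessarily the authors' exact specification) is

        \log(\mathrm{Pb}_{ij}) = \beta_0 + \beta_1 t_{ij} + \beta_2 (t_{ij} - \kappa)_{+} + b_{0i} + \varepsilon_{ij},
        \qquad (x)_{+} = \max(x, 0),\ \ \kappa \approx 4.6\ \text{years},

    so that the annual percent change is roughly 100(e^{\beta_1} - 1) before the knot and 100(e^{\beta_1 + \beta_2} - 1) after it.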

  7. Automatic stage identification of Drosophila egg chamber based on DAPI images

    PubMed Central

    Jia, Dongyu; Xu, Qiuping; Xie, Qian; Mio, Washington; Deng, Wu-Min

    2016-01-01

    The Drosophila egg chamber, whose development is divided into 14 stages, is a well-established model for developmental biology. However, visual stage determination can be a tedious, subjective and time-consuming task prone to errors. Our study presents an objective, reliable and repeatable automated method for quantifying cell features and classifying egg chamber stages based on DAPI images. The proposed approach is composed of two steps: 1) a feature extraction step and 2) a statistical modeling step. The egg chamber features used are egg chamber size, oocyte size, egg chamber ratio and distribution of follicle cells. Methods for determining the onset of the polytene stage and centripetal migration are also discussed. The statistical model uses linear and ordinal regression to explore the stage-feature relationships and classify egg chamber stages. Combined with machine learning, our method has great potential to enable discovery of hidden developmental mechanisms. PMID:26732176

  8. An Overview of Longitudinal Data Analysis Methods for Neurological Research

    PubMed Central

    Locascio, Joseph J.; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825

  9. Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm

    NASA Astrophysics Data System (ADS)

    Foroutan, M.; Zimbelman, J. R.

    2017-09-01

    The increased use of high-resolution spatial data, such as high-resolution satellite or Unmanned Aerial Vehicle (UAV) images of Earth as well as High Resolution Imaging Science Experiment (HiRISE) images of Mars, increases the need for automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation with repeated images in environmental management studies (e.g. of climate-related change), together with increasing access to high-resolution satellite imagery, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self-Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints in satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high-resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis and improve the accuracy of the results. An overall accuracy of about 98% and a quantization error of 0.001 in the recognition of small linear-trending bedforms demonstrate a promising framework.
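
    The paper applies SOM inside a larger image-processing pipeline; as a minimal, self-contained sketch of the competitive-learning core itself (random stand-in data and hypothetical parameters, not the authors' configuration), the update rule can be written as:

        import numpy as np

        def train_som(data, grid=(10, 10), iters=5000, lr0=0.5, sigma0=3.0, seed=0):
            """Minimal Self-Organizing Map: competitive learning on a 2-D neuron grid."""
            rng = np.random.default_rng(seed)
            n_rows, n_cols = grid
            weights = rng.random((n_rows, n_cols, data.shape[1]))
            coords = np.stack(np.meshgrid(np.arange(n_rows), np.arange(n_cols),
                                          indexing="ij"), axis=-1)
            for t in range(iters):
                x = data[rng.integers(len(data))]
                # Best-matching unit: neuron whose weight vector is closest to the sample.
                bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)),
                                       (n_rows, n_cols))
                # Learning rate and Gaussian neighborhood radius decay over time.
                lr = lr0 * np.exp(-t / iters)
                sigma = sigma0 * np.exp(-t / iters)
                grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
                h = np.exp(-grid_dist ** 2 / (2 * sigma ** 2))[..., None]
                weights += lr * h * (x - weights)
            return weights

        # Example: organize 1000 random 3-band "pixels" (stand-in for image features).
        som_weights = train_som(np.random.default_rng(1).random((1000, 3)))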

  10. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE PAGES

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; ...

    2017-09-20

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  11. Nonlinear predictive control of a boiler-turbine unit: A state-space approach with successive on-line model linearisation and quadratic optimisation.

    PubMed

    Ławryńczuk, Maciej

    2017-03-01

    This paper details development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
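
    The algorithm itself couples on-line linearisation with a constrained quadratic programme and an extended Kalman filter, none of which is reproduced here; purely to illustrate the "successive linearisation" idea, a finite-difference linearisation of a generic discrete-time model x_next = f(x, u), followed by an unconstrained one-step least-squares input correction (our own simplified stand-in, not the paper's controller), could look like:

        import numpy as np

        def linearize(f, x, u, eps=1e-6):
            """Finite-difference Jacobians A = df/dx, B = df/du at the current operating point."""
            nx, nu = len(x), len(u)
            fx = f(x, u)
            A = np.zeros((nx, nx))
            B = np.zeros((nx, nu))
            for i in range(nx):
                dx = np.zeros(nx); dx[i] = eps
                A[:, i] = (f(x + dx, u) - fx) / eps
            for j in range(nu):
                du = np.zeros(nu); du[j] = eps
                B[:, j] = (f(x, u + du) - fx) / eps
            return A, B, fx

        def one_step_input(f, x, u_prev, x_ref):
            """Unconstrained stand-in for the QP: pick the input change that best drives
            the linearised one-step prediction fx + B @ du towards the set-point x_ref.
            (A would be needed for multi-step predictions over a real MPC horizon.)"""
            A, B, fx = linearize(f, x, u_prev)
            du, *_ = np.linalg.lstsq(B, x_ref - fx, rcond=None)
            return u_prev + du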

  12. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  13. Classification and regression tree analysis vs. multivariable linear and logistic regression methods as statistical tools for studying haemophilia.

    PubMed

    Henrard, S; Speybroeck, N; Hermans, C

    2015-11-01

    Haemophilia is a rare genetic haemorrhagic disease characterized by partial or complete deficiency of coagulation factor VIII, for haemophilia A, or IX, for haemophilia B. As in any other medical research domain, the field of haemophilia research is increasingly concerned with finding factors associated with binary or continuous outcomes through multivariable models. Traditional models include multiple logistic regressions, for binary outcomes, and multiple linear regressions for continuous outcomes. Yet these regression models are at times difficult to implement, especially for non-statisticians, and can be difficult to interpret. The present paper sought to didactically explain how, why, and when to use classification and regression tree (CART) analysis for haemophilia research. The CART method is non-parametric and non-linear, based on the repeated partitioning of a sample into subgroups based on a certain criterion. Breiman developed this method in 1984. Classification trees (CTs) are used to analyse categorical outcomes and regression trees (RTs) to analyse continuous ones. The CART methodology has become increasingly popular in the medical field, yet only a few examples of studies using this methodology specifically in haemophilia have to date been published. Two previously published examples using CART analysis in this field are explained didactically and in detail. There is increasing interest in using CART analysis in the health domain, primarily due to its ease of implementation, use, and interpretation, thus facilitating medical decision-making. This method should be promoted for analysing continuous or categorical outcomes in haemophilia, when applicable. © 2015 John Wiley & Sons Ltd.
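
    For readers who want to experiment with the approach, a minimal classification-tree fit in Python with scikit-learn (the published examples used other software; the file and column names below are hypothetical and all predictors are assumed numeric) might be:

        import pandas as pd
        from sklearn.tree import DecisionTreeClassifier, export_text

        # Hypothetical haemophilia data set: binary outcome plus candidate predictors.
        df = pd.read_csv("haemophilia.csv")
        X = df[["age", "baseline_factor_level", "bmi", "annual_bleed_rate"]]
        y = df["inhibitor_present"]

        # CART-style classification tree; depth and leaf-size limits curb overfitting.
        tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, random_state=0)
        tree.fit(X, y)
        print(export_text(tree, feature_names=list(X.columns)))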

  14. Reliable quantification of BOLD fMRI cerebrovascular reactivity despite poor breath-hold performance.

    PubMed

    Bright, Molly G; Murphy, Kevin

    2013-12-01

    Cerebrovascular reactivity (CVR) can be mapped using BOLD fMRI to provide a clinical insight into vascular health that can be used to diagnose cerebrovascular disease. Breath-holds are a readily accessible method for producing the required arterial CO2 increases but their implementation into clinical studies is limited by concerns that patients will demonstrate highly variable performance of breath-hold challenges. This study assesses the repeatability of CVR measurements despite poor task performance, to determine if and how robust results could be achieved with breath-holds in patients. Twelve healthy volunteers were scanned at 3 T. Six functional scans were acquired, each consisting of 6 breath-hold challenges (10, 15, or 20 s duration) interleaved with periods of paced breathing. These scans simulated the varying breath-hold consistency and ability levels that may occur in patient data. Uniform ramps, time-scaled ramps, and end-tidal CO2 data were used as regressors in a general linear model in order to measure CVR at the grey matter, regional, and voxelwise level. The intraclass correlation coefficient (ICC) quantified the repeatability of the CVR measurement for each breath-hold regressor type and scale of interest across the variable task performances. The ramp regressors did not fully account for variability in breath-hold performance and did not achieve acceptable repeatability (ICC<0.4) in several regions analysed. In contrast, the end-tidal CO2 regressors resulted in "excellent" repeatability (ICC=0.82) in the average grey matter data, and resulted in acceptable repeatability in all smaller regions tested (ICC>0.4). Further analysis of intra-subject CVR variability across the brain (ICCspatial and voxelwise correlation) supported the use of end-tidal CO2 data to extract robust whole-brain CVR maps, despite variability in breath-hold performance. We conclude that the incorporation of end-tidal CO2 monitoring into scanning enables robust, repeatable measurement of CVR that makes breath-hold challenges suitable for routine clinical practice. © 2013.
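
    As a sketch of the voxelwise regression underlying such a CVR estimate (a plain GLM with the end-tidal CO2 trace as the regressor of interest; this is an illustration, not the authors' full pipeline, and the array names are ours):

        import numpy as np

        def cvr_map(bold, petco2):
            """Voxelwise CVR as the slope of the BOLD time series against end-tidal CO2.
            bold: (n_timepoints, n_voxels) array; petco2: (n_timepoints,) regressor."""
            X = np.column_stack([np.ones(len(petco2)), petco2])  # intercept + CO2 regressor
            beta, *_ = np.linalg.lstsq(X, bold, rcond=None)      # least-squares GLM fit
            return beta[1]                                       # signal change per mmHg CO2

        # Synthetic example: a 5 mmHg CO2 oscillation driving 500 noisy voxels.
        rng = np.random.default_rng(0)
        petco2 = 40 + 5 * np.sin(np.linspace(0, 6 * np.pi, 300))
        bold = 0.3 * petco2[:, None] + rng.normal(0, 1, size=(300, 500))
        print(cvr_map(bold, petco2).mean())  # close to the simulated slope of 0.3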

  15. Linear Array Ultrasonic Testing Of A Thick Concrete Specimens For Non-Destructive Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clayton, Dwight A.; Khazanovich, Lev; Zammerachi, Mattia

    The University of Minnesota and Oak Ridge National Laboratory are collaborating on the design and construction of a concrete specimen with sufficient reinforcement density and cross-sectional size to represent a light water reactor (LWR) containment wall with various defects. The preliminary analysis of the collected data using extended synthetic aperture focusing technique (SAFT) reconstruction indicated a great potential of the ultrasound array technology for locating relatively shallow distresses. However, the resolution and reliability of the analysis is inversely proportional to the defect depth and the amount of reinforcement between the measurement point and the defect location. The objective of this round of testing is to evaluate repeatability of the obtained reconstructions from measurements with different frequencies as well as to examine the effect of the duration of the transmitted ultrasound signal on the resulting reconstructions. Two series of testing are performed in this study. The objective of the first series is to evaluate repeatability of the measurements and resulting reconstructed images. The measurements use three center frequencies. Five measurements are performed at each location with and without lifting the device. The analysis of the collected data suggested that a linear array ultrasound system can produce reliably repeatable reconstructions using 50 kHz signals for relatively shallow depths (less than 0.5 m). However, for reconstructions at greater depths the use of lower frequency and/or signal filtering to reduce the effect of signal noise may be required. The objective of the second series of testing is to obtain measurements with various impulse signal durations. The entire grid on the smooth surface is tested with four different impulse signal durations. An analysis of the resulting extended SAFT reconstructions suggested that Kirchhoff-based migration leads to reconstructions that are easier to interpret when a shorter-duration impulse is used. Longer-duration impulses may provide useful information for model-based reconstructions.

  16. 40 CFR 1066.20 - Units of measure and overview of calculations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Applicability and General Provisions § 1066.20 Units of..., repeatability, linearity, or noise specification. See 40 CFR 1065.1001 for the definition of tolerance. In this...

  17. Newborn length predicts early infant linear growth retardation and disproportionately high weight gain in a low-income population.

    PubMed

    Berngard, Samuel Clark; Berngard, Jennifer Bishop; Krebs, Nancy F; Garcés, Ana; Miller, Leland V; Westcott, Jamie; Wright, Linda L; Kindem, Mark; Hambidge, K Michael

    2013-12-01

    Stunting is prevalent by the age of 6 months in the indigenous population of the Western Highlands of Guatemala. The objective of this study was to determine the time course and predictors of linear growth failure and weight-for-age in early infancy. One hundred and forty-eight term newborns had measurements of length and weight in their homes, repeated at 3 and 6 months. Maternal measurements were also obtained. Mean ± SD length-for-age Z-score (LAZ) declined from newborn -1.0 ± 1.01 to -2.20 ± 1.05 and -2.26 ± 1.01 at 3 and 6 months respectively. Stunting rates for newborn, 3 and 6 months were 47%, 53% and 56% respectively. A multiple regression model (R² = 0.64) demonstrated that the major predictor of LAZ at 3 months was newborn LAZ, with the other predictors being newborn weight-for-age Z-score (WAZ), gender, and a maternal education × maternal age interaction. Because WAZ remained essentially constant and LAZ declined during the same period, weight-for-length Z-score (WLZ) increased from -0.44 to +1.28 from birth to 3 months. The more severe the linear growth failure, the greater WAZ was in proportion to the LAZ. The primary conclusion is that impaired fetal linear growth is the major predictor of early infant linear growth failure, indicating that prevention needs to start with maternal interventions. © 2013.

  18. Enhanced Sensitivity of Wireless Chemical Sensor Based on Love Wave Mode

    NASA Astrophysics Data System (ADS)

    Wang, Wen; Oh, Haekwan; Lee, Keekeun; Yang, Sangsik

    2008-09-01

    A 440 MHz wireless and passive Love-wave-based chemical sensor was developed for CO2 detection. The developed device was composed of a reflective delay line patterned on 41° YX LiNbO3 piezoelectric substrate, a poly(methyl methacrylate) (PMMA) waveguide layer, and Teflon AF 2400 sensitive film. A theoretical model is presented to describe wave propagation in Love wave devices with large piezoelectricity and to allow the design of an optimized structure. In wireless device testing using a network analyzer, infusion of CO2 into the testing chamber induced large phase shifts of the reflection peaks owing to the interaction between the sensing film and the test gas (CO2). Good linearity and repeatability were observed at CO2 concentrations of 0-350 ppm. The obtained sensitivity from the Love wave device was approximately 7.07° ppm-1. The gas response properties of the fabricated Love-wave sensor in terms of linearity and sensitivity were provided, and a comparison to surface acoustic wave devices was also discussed.

  19. Coherent Change Detection: Theoretical Description and Experimental Results

    DTIC Science & Technology

    2006-08-01

    Only disjoint text fragments of this report were extracted: a citation of Elementary Linear Algebra With Applications (John Wiley and Sons, 1987); a citation of J. Lee, K. W. Hoppel, and A. R. Miller, "Intensity and phase statistics of..."; and passages noting that the nature of the image recovered by the PFA may be ascertained by considering a scene consisting of an elementary point scatterer (kx, ky, kz = 0), and that, for a registered image pair, any dominant relative linear phase term between the primary image and the resampled repeat-pass image is estimated and removed.

  20. WFPC2 CYCLE 15 Intflat Linearity Check and Filter Rotation Anomaly Monitor

    NASA Astrophysics Data System (ADS)

    Gonzaga, Shireen

    2006-07-01

    Intflat observations will be taken to provide a linearity check: the linearity test consists of a series of intflats in F555W, in each gain and each shutter. A combination of intflats, visflats, and earthflats will be used to check the repeatability of filter wheel motions. {Intflat sequences tied to decons, visits 1-18 in prop 10363, have been moved to the cycle 15 decon proposal xxxx for easier scheduling.} Note: long-exposure WFPC2 intflats must be scheduled during ACS anneals to prevent stray light from the WFPC2 lamps from contaminating long ACS external exposures.

  1. Mechanical unfolding of an ankyrin repeat protein.

    PubMed

    Serquera, David; Lee, Whasil; Settanni, Giovanni; Marszalek, Piotr E; Paci, Emanuele; Itzhaki, Laura S

    2010-04-07

    Ankyrin repeat proteins comprise tandem arrays of a 33-residue, predominantly alpha-helical motif that stacks roughly linearly to produce elongated and superhelical structures. They function as scaffolds mediating a diverse range of protein-protein interactions, and some have been proposed to play a role in mechanical signal transduction processes in the cell. Here we use atomic force microscopy and molecular-dynamics simulations to investigate the natural 7-ankyrin repeat protein gankyrin. We find that gankyrin unfolds under force via multiple distinct pathways. The reactions do not proceed in a cooperative manner, nor do they always involve fully stepwise unfolding of one repeat at a time. The peeling away of half an ankyrin repeat, or one or more ankyrin repeats, occurs at low forces; however, intermediate species are formed that are resistant to high forces, and the simulations indicate that in some instances they are stabilized by nonnative interactions. The unfolding of individual ankyrin repeats generates a refolding force, a feature that may be more easily detected in these proteins than in globular proteins because the refolding of a repeat involves a short contraction distance and incurs a low entropic cost. We discuss the origins of the differences between the force- and chemical-induced unfolding pathways of ankyrin repeat proteins, as well as the differences between the mechanics of naturally occurring ankyrin repeat proteins and those of designed consensus ankyrin repeat and globular proteins. Copyright (c) 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  2. Linearized traveling wave amplifier with hard limiter characteristics

    NASA Technical Reports Server (NTRS)

    Kosmahl, H. G. (Inventor)

    1986-01-01

    A dynamic velocity taper is provided for a traveling wave tube with increased linearity to avoid intermodulation of the signals being amplified. In a traveling wave tube, the slow wave structure is a helix including a sever. A dynamic velocity taper is provided by gradually reducing the spacing between the repeating elements of the slow wave structure, which are the windings of the helix. The reduction coincides with the output point of the helix. The spacing between the repeating elements of the slow wave structure is ideally reduced at an exponential rate, because such a taper raises the point of maximum efficiency and power. A coupled-cavity traveling wave tube is also shown, in which the space between apertured discs is gradually reduced from 0.1% to 5% at an exponential rate. Output power (or efficiency) versus input power for a commercial tube is shown.

  3. Retinal nerve fiber layer measurements by scanning laser polarimetry with enhanced corneal compensation in healthy subjects.

    PubMed

    Rao, Harsha L; Venkatesh, Chirravuri R; Vidyasagar, Kelli; Yadav, Ravi K; Addepalli, Uday K; Jude, Aarthi; Senthil, Sirisha; Garudadri, Chandra S

    2014-12-01

    To evaluate the (i) effects of biological (age and axial length) and instrument-related [typical scan score (TSS) and corneal birefringence] parameters on the retinal nerve fiber layer (RNFL) measurements and (ii) repeatability of RNFL measurements with the enhanced corneal compensation (ECC) protocol of scanning laser polarimetry (SLP) in healthy subjects. In a cross-sectional study, 140 eyes of 73 healthy subjects underwent RNFL imaging with the ECC protocol of SLP. Linear mixed modeling methods were used to evaluate the effects of age, axial length, TSS, and corneal birefringence on RNFL measurements. One randomly selected eye of 48 subjects from the cohort underwent 3 serial scans during the same session to determine the repeatability. Age significantly influenced all RNFL measurements. RNFL measurements decreased by 1 µm for every decade increase in age. TSS affected the overall average RNFL measurement (β=-0.62, P=0.003), whereas residual anterior segment retardance affected the superior quadrant measurement (β=1.14, P=0.01). Axial length and corneal birefringence measurements did not influence RNFL measurements. Repeatability, as assessed by the coefficient of variation, ranged between 1.7% for the overall average RNFL measurement and 11.4% for the nerve fiber indicator. Age significantly affected all RNFL measurements with the ECC protocol of SLP, whereas TSS and residual anterior segment retardance affected the overall average and the superior average RNFL measurements, respectively. Axial length and corneal birefringence measurements did not influence any RNFL measurements. RNFL measurements had good intrasession repeatability. These results are important while evaluating the change in structural measurements over time in glaucoma patients.

  4. Prolonged Repeated Acupuncture Stimulation Induces Habituation Effects in Pain-Related Brain Areas: An fMRI Study

    PubMed Central

    Li, Chuanfu; Yang, Jun; Park, Kyungmo; Wu, Hongli; Hu, Sheng; Zhang, Wei; Bu, Junjie; Xu, Chunsheng; Qiu, Bensheng; Zhang, Xiaochu

    2014-01-01

    Most previous studies of brain responses to acupuncture were designed to investigate the acupuncture instant effect while the cumulative effect that should be more important in clinical practice has seldom been discussed. In this study, the neural basis of the acupuncture cumulative effect was analyzed. For this experiment, forty healthy volunteers were recruited, in which more than 40 minutes of repeated acupuncture stimulation was implemented at acupoint Zhusanli (ST36). Three runs of acupuncture fMRI datasets were acquired, with each run consisting of two blocks of acupuncture stimulation. Besides general linear model (GLM) analysis, the cumulative effects of acupuncture were analyzed with analysis of covariance (ANCOVA) to find the association between the brain response and the cumulative duration of acupuncture stimulation in each stimulation block. The experimental results showed that the brain response in the initial stage was the strongest although the brain response to acupuncture was time-variant. In particular, the brain areas that were activated in the first block and the brain areas that demonstrated cumulative effects in the course of repeated acupuncture stimulation overlapped in the pain-related areas, including the bilateral middle cingulate cortex, the bilateral paracentral lobule, the SII, and the right thalamus. Furthermore, the cumulative effects demonstrated bimodal characteristics, i.e. the brain response was positive at the beginning, and became negative at the end. It was suggested that the cumulative effect of repeated acupuncture stimulation was consistent with the characteristic of habituation effects. This finding may explain the neurophysiologic mechanism underlying acupuncture analgesia. PMID:24821143

  5. Repeat HIV-testing is associated with an increase in behavioral risk among men who have sex with men: a cohort study.

    PubMed

    Hoenigl, Martin; Anderson, Christy M; Green, Nella; Mehta, Sanjay R; Smith, Davey M; Little, Susan J

    2015-09-11

    The Centers for Disease Control and Prevention recommends that high-risk groups, like sexually active men who have sex with men (MSM), receive HIV testing and counseling at least annually. The objective of this study was to investigate the relationship between voluntary repeat HIV testing and sexual risk behavior in MSM receiving rapid serologic and nucleic acid amplification testing. We performed a cohort study to analyze reported risk behavior among MSM receiving the "Early Test", a community-based, confidential acute and early HIV infection screening program in San Diego, California, between April 2008 and July 2014. The study included 8,935 MSM receiving 17,333 "Early Tests". A previously published risk behavior score for HIV acquisition in MSM (i.e. Menza score) was chosen as an outcome to assess associations between risk behaviors and number of repeated tests. At baseline, repeat-testers (n = 3,202) reported more male partners and more condomless receptive anal intercourse (CRAI) when compared to single-testers (n = 5,405, all P <0.001). In 2,457 repeat testers, a strong association was observed between the number of repeated HIV tests obtained and increased risk behavior, with number of male partners, CRAI with high-risk persons, non-injection stimulant drug use, and sexually transmitted infections all increasing between the first and last test. There was also a linear increase of risk (i.e. high Menza scores) with number of tests up to the 17th test. In the multivariable mixed effects model, more HIV tests (OR = 1.18 for each doubling of the number of tests, P <0.001) and younger age (OR = 0.95 per 5-year increase, P = 0.006) had significant associations with high Menza scores. This study found that the individuals at highest risk of acquiring HIV (e.g. candidates for antiretroviral pre-exposure prophylaxis) can be identified by their testing patterns. Future studies should delineate causation versus association to improve prevention messages delivered to repeat testers during HIV testing and counseling sessions.

  6. A refined methodology for modeling volume quantification performance in CT

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Wilson, Joshua; Samei, Ehsan

    2014-03-01

    The utility of the CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on the reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the non-linearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.

  7. Visuo‐manual tracking: does intermittent control with aperiodic sampling explain linear power and non‐linear remnant without sensorimotor noise?

    PubMed Central

    Gawthrop, Peter J.; Lakie, Martin; Loram, Ian D.

    2017-01-01

    Key points: A human controlling an external system is described most easily and conventionally as linearly and continuously translating sensory input to motor output, with the inevitable output remnant, non-linearly related to the input, attributed to sensorimotor noise. Recent experiments show sustained manual tracking involves repeated refractoriness (insensitivity to sensory information for a certain duration), with the temporary 200–500 ms periods of irresponsiveness to sensory input making the control process intrinsically non-linear. This evidence calls for re-examination of the extent to which random sensorimotor noise is required to explain the non-linear remnant. This investigation of manual tracking shows how the full motor output (linear component and remnant) can be explained mechanistically by aperiodic sampling triggered by prediction error thresholds. Whereas broadband physiological noise is general to all processes, aperiodic sampling is associated with sensorimotor decision making within specific frontal, striatal and parietal networks; we conclude that manual tracking utilises such slow serial decision making pathways up to several times per second. Abstract: The human operator is described adequately by linear translation of sensory input to motor output. Motor output also always includes a non-linear remnant resulting from random sensorimotor noise from multiple sources, and non-linear input transformations, for example thresholds or refractory periods. Recent evidence showed that manual tracking incurs substantial, serial, refractoriness (insensitivity to sensory information of 350 and 550 ms for 1st and 2nd order systems respectively). Our two questions are: (i) What are the comparative merits of explaining the non-linear remnant using noise or non-linear transformations? (ii) Can non-linear transformations represent serial motor decision making within the sensorimotor feedback loop intrinsic to tracking? Twelve participants (instructed to act in three prescribed ways) manually controlled two systems (1st and 2nd order) subject to a periodic multi-sine disturbance. Joystick power was analysed using three models: continuous linear control (CC), continuous linear control with calculated noise spectrum (CCN), and intermittent control with aperiodic sampling triggered by prediction error thresholds (IC). Unlike the linear mechanism, the intermittent control mechanism explained the majority of total power (linear and remnant) (77–87% vs. 8–48%, IC vs. CC). Between conditions, IC used thresholds and distributions of open loop intervals consistent with, respectively, instructions and previously measured, model-independent values, whereas CCN required changes in noise spectrum deviating from broadband, signal-dependent noise. We conclude that manual tracking uses open loop predictive control with aperiodic sampling. Because aperiodic sampling is inherent to serial decision making within previously identified, specific frontal, striatal and parietal networks, we suggest that these structures are intimately involved in visuo-manual tracking. PMID:28833126

  8. A prospective microstructure imaging study in mixed-martial artists using geometric measures and diffusion tensor imaging: methods and findings

    PubMed Central

    Mayer, Andrew R.; Ling, Josef M.; Dodd, Andrew B.; Meier, Timothy B.; Hanlon, Faith M.; Klimaj, Stefan D.

    2018-01-01

    Although diffusion magnetic resonance imaging (dMRI) has been widely used to characterize the effects of repetitive mild traumatic brain injury (rmTBI), to date no studies have investigated how novel geometric models of microstructure relate to more typical diffusion tensor imaging (DTI) sequences. Moreover, few studies have evaluated the sensitivity of different registration pipelines (non-linear, linear and tract-based spatial statistics) for detecting dMRI abnormalities in clinical populations. Results from single-subject analyses in healthy controls (HC) indicated a strong negative relationship between fractional anisotropy (FA) and orientation dispersion index (ODI) in both white and gray matter. Equally important, only moderate relationships existed between all other estimates of free/intracellular water volume fractions and more traditional DTI metrics (FA, mean, axial and radial diffusivity). These findings suggest that geometric measures provide differential information about the cellular microstructure relative to traditional DTI measures. Results also suggest greater sensitivity for non-linear registration pipelines that maximize the anatomical information available in T1-weighted images. Clinically, rmTBI resulted in a pattern of decreased FA and increased ODI, largely overlapping in space, in conjunction with increased intracellular and free water fractions, highlighting the potential role of edema following repeated head trauma. In summary, current results suggest that geometric models of diffusion can provide relatively unique information regarding potential mechanisms of pathology that contribute to long-term neurological damage. PMID:27071950

  9. A prospective microstructure imaging study in mixed-martial artists using geometric measures and diffusion tensor imaging: methods and findings.

    PubMed

    Mayer, Andrew R; Ling, Josef M; Dodd, Andrew B; Meier, Timothy B; Hanlon, Faith M; Klimaj, Stefan D

    2017-06-01

    Although diffusion magnetic resonance imaging (dMRI) has been widely used to characterize the effects of repetitive mild traumatic brain injury (rmTBI), to date no studies have investigated how novel geometric models of microstructure relate to more typical diffusion tensor imaging (DTI) sequences. Moreover, few studies have evaluated the sensitivity of different registration pipelines (non-linear, linear and tract-based spatial statistics) for detecting dMRI abnormalities in clinical populations. Results from single-subject analyses in healthy controls (HC) indicated a strong negative relationship between fractional anisotropy (FA) and orientation dispersion index (ODI) in both white and gray matter. Equally important, only moderate relationships existed between all other estimates of free/intracellular water volume fractions and more traditional DTI metrics (FA, mean, axial and radial diffusivity). These findings suggest that geometric measures provide differential information about the cellular microstructure relative to traditional DTI measures. Results also suggest greater sensitivity for non-linear registration pipelines that maximize the anatomical information available in T1-weighted images. Clinically, rmTBI resulted in a pattern of decreased FA and increased ODI, largely overlapping in space, in conjunction with increased intracellular and free water fractions, highlighting the potential role of edema following repeated head trauma. In summary, current results suggest that geometric models of diffusion can provide relatively unique information regarding potential mechanisms of pathology that contribute to long-term neurological damage.

  10. A comparison of three random effects approaches to analyze repeated bounded outcome scores with an application in a stroke revalidation study.

    PubMed

    Molas, Marek; Lesaffre, Emmanuel

    2008-12-30

    Discrete bounded outcome scores (BOS), i.e. discrete measurements that are restricted on a finite interval, often occur in practice. Examples are compliance measures, quality of life measures, etc. In this paper we examine three related random effects approaches to analyze longitudinal studies with a BOS as response: (1) a linear mixed effects (LM) model applied to a logistic transformed modified BOS; (2) a model assuming that the discrete BOS is a coarsened version of a latent random variable, which after a logistic-normal transformation, satisfies an LM model; and (3) a random effects probit model. We consider also the extension whereby the variability of the BOS is allowed to depend on covariates. The methods are contrasted using a simulation study and on a longitudinal project, which documents stroke rehabilitation in four European countries using measures of motor and functional recovery. Copyright 2008 John Wiley & Sons, Ltd.
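
    The first of the three approaches is straightforward to sketch: logistic-transform a modified bounded outcome score and fit a linear mixed model. The snippet below is a minimal, hypothetical illustration (not the authors' code) using statsmodels; the score range K, the 0.5 boundary offset and the synthetic data are assumptions made for the example.

```python
# Approach (1) from the abstract, in outline: logistic transform of a modified
# bounded outcome score (BOS), then a linear mixed model with a random
# intercept per subject. Data and the 0.5 offset are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
K = 20                                    # hypothetical maximum score
n_subj, n_visits = 50, 4
data = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_visits),
    "visit": np.tile(np.arange(n_visits), n_subj),
})
data["score"] = rng.integers(0, K + 1, size=len(data))   # placeholder BOS values

# Logistic transform of the modified score (finite at the boundaries 0 and K)
data["z"] = np.log((data["score"] + 0.5) / (K - data["score"] + 0.5))

# Linear mixed model with a random intercept per subject
model = smf.mixedlm("z ~ visit", data, groups=data["subject"])
fit = model.fit()
print(fit.summary())
```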

  11. Investigating the Metallicity–Mixing-length Relation

    NASA Astrophysics Data System (ADS)

    Viani, Lucas S.; Basu, Sarbani; Joel Ong J., M.; Bonaca, Ana; Chaplin, William J.

    2018-05-01

    Stellar models typically use the mixing-length approximation as a way to implement convection in a simplified manner. While conventionally the value of the mixing-length parameter, α, used is the solar-calibrated value, many studies have shown that other values of α are needed to properly model stars. This uncertainty in the value of the mixing-length parameter is a major source of error in stellar models and isochrones. Using asteroseismic data, we determine the value of the mixing-length parameter required to properly model a set of about 450 stars ranging in log g, T_eff, and [Fe/H]. The relationship between the value of α required and the properties of the star is then investigated. For Eddington atmosphere, non-diffusion models, we find that the value of α can be approximated by a linear model, in the form of α/α_⊙ = 5.426 - 0.101 log(g) - 1.071 log(T_eff) + 0.437 [Fe/H]. This process is repeated using a variety of model physics, as well as compared with previous studies and results from 3D convective simulations.
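
    The quoted linear fit can be evaluated directly; the helper below simply implements the equation given in the abstract, and the example call uses roughly solar reference values (an assumption for illustration, expected to return a ratio near unity).

```python
# Evaluate the linear mixing-length relation quoted in the abstract
# (Eddington-atmosphere, non-diffusion models):
#   alpha / alpha_sun = 5.426 - 0.101*log10(g) - 1.071*log10(Teff) + 0.437*[Fe/H]
import math

def alpha_ratio(log_g, t_eff, fe_h):
    """Mixing-length parameter relative to the solar-calibrated value."""
    return 5.426 - 0.101 * log_g - 1.071 * math.log10(t_eff) + 0.437 * fe_h

# Roughly solar parameters (assumed for the example) give a value near 1
print(alpha_ratio(log_g=4.44, t_eff=5772.0, fe_h=0.0))
```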

  12. Temporal patterns of variable relationships in person-oriented research: longitudinal models of configural frequency analysis.

    PubMed

    von Eye, Alexander; Mun, Eun Young; Bogat, G Anne

    2008-03-01

    This article reviews the premises of configural frequency analysis (CFA), including methods of choosing significance tests and base models, as well as protecting alpha, and discusses why CFA is a useful approach when conducting longitudinal person-oriented research. CFA operates at the manifest variable level. Longitudinal CFA seeks to identify those temporal patterns that stand out as more frequent (CFA types) or less frequent (CFA antitypes) than expected with reference to a base model. A base model that has been used frequently in CFA applications, prediction CFA, and a new base model, auto-association CFA, are discussed for analysis of cross-classifications of longitudinal data. The former base model takes the associations among predictors and among criteria into account. The latter takes the auto-associations among repeatedly observed variables into account. Application examples of each are given using data from a longitudinal study of domestic violence. It is demonstrated that CFA results are not redundant with results from log-linear modeling or multinomial regression and that, of these approaches, CFA shows particular utility when conducting person-oriented research.
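
    A stripped-down sketch of the CFA idea is shown below: compare observed cell counts in a cross-classification with expectations from a base model and flag cells as types or antitypes with alpha-protected binomial tests. For brevity it uses a simple 2x2 independence base model and invented counts; the article's base models (prediction CFA, auto-association CFA) are more elaborate.

```python
# Minimal CFA-style sketch: expected frequencies from an independence base
# model, Bonferroni-protected binomial tests, and type/antitype labels.
import numpy as np
from scipy.stats import binomtest

observed = np.array([[30, 5],
                     [10, 55]])                # hypothetical 2x2 cross-classification
n = observed.sum()
row_p = observed.sum(axis=1) / n
col_p = observed.sum(axis=0) / n
expected = np.outer(row_p, col_p) * n          # base-model expected frequencies

alpha = 0.05 / observed.size                   # protected alpha (Bonferroni)
for (i, j), obs in np.ndenumerate(observed):
    p_cell = expected[i, j] / n
    p_value = binomtest(int(obs), n=int(n), p=p_cell).pvalue
    label = "type" if obs > expected[i, j] else "antitype"
    if p_value < alpha:
        print(f"cell ({i},{j}): observed={obs}, expected={expected[i, j]:.1f} -> {label}")
```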

  13. Slipped-strand mispairing at noncontiguous repeats in Poecilia reticulata: a model for minisatellite birth.

    PubMed Central

    Taylor, J S; Breden, F

    2000-01-01

    The standard slipped-strand mispairing (SSM) model for the formation of variable number tandem repeats (VNTRs) proposes that a few tandem repeats, produced by chance mutations, provide the "raw material" for VNTR expansion. However, this model is unlikely to explain the formation of VNTRs with long motifs (e.g., minisatellites), because the likelihood of a tandem repeat forming by chance decreases rapidly as the length of the repeat motif increases. Phylogenetic reconstruction of the birth of a mitochondrial (mt) DNA minisatellite in guppies suggests that VNTRs with long motifs can form as a consequence of SSM at noncontiguous repeats. VNTRs formed in this manner have motifs longer than the noncontiguous repeat originally formed by chance and are flanked by one unit of the original, noncontiguous repeat. SSM at noncontiguous repeats can therefore explain the birth of VNTRs with long motifs and the "imperfect" or "short direct" repeats frequently observed adjacent to both mtDNA and nuclear VNTRs. PMID:10880490

  14. Improved statistical analysis of moclobemide dose effects on panic disorder treatment.

    PubMed

    Ross, Donald C; Klein, Donald F; Uhlenhuth, E H

    2010-04-01

    Clinical trials with several measurement occasions are frequently analyzed using only the last available observation as the dependent variable [last observation carried forward (LOCF)]. This ignores intermediate observations. We reanalyze, with complete data methods, a clinical trial previously reported using LOCF, comparing placebo and five dosage levels of moclobemide in the treatment of outpatients with panic disorder to illustrate the superiority of methods using repeated observations. We initially analyzed unprovoked and situational, major and minor attacks as the four dependent variables, by repeated measures maximum likelihood methods. The model included parameters for linear and curvilinear time trends and regression of measures during treatment on baseline measures. Significance tests using this method take into account the structure of the error covariance matrix. This makes the sphericity assumption irrelevant. Missingness is assumed to be unrelated to eventual outcome and the residuals are assumed to have a multivariate normal distribution. No differential treatment effects for limited attacks were found. Since similar results were obtained for both types of major attack, data for the two types of major attack were combined. Overall downward linear and negatively accelerated downward curvilinear time trends were found. There were highly significant treatment differences in the regression slopes of scores during treatment on baseline observations. For major attacks, all treatment groups improved over time. The flatter regression slopes, obtained with higher doses, indicated that higher doses result in uniformly lower attack rates regardless of initial severity. Lower doses do not lower the attack rate of severely ill patients to those achieved in the less severely ill. The clinical implication is that more severe patients require higher doses to attain best benefit. Further, the significance levels obtained by LOCF analyses were only in the 0.05-0.01 range, while significance levels of <0.00001 were obtained by these repeated measures analyses indicating increased power. The greater sensitivity to treatment effect of this complete data method is illustrated. To increase power, it is often recommended to increase sample size. However, this is often impractical since a major proportion of the cost per subject is due to the initial evaluation. Increasing the number of repeated observations increases power economically and also allows detailed longitudinal trajectory analyses.

  15. Verification of intensity modulated profiles using a pixel segmented liquid-filled linear array.

    PubMed

    Pardo, J; Roselló, J V; Sánchez-Doblado, F; Gómez, F

    2006-06-07

    A liquid isooctane (C8H18) filled ionization chamber linear array developed for radiotherapy quality assurance, consisting of 128 pixels (each of them with a 1.7 mm pitch), has been used to acquire profiles of several intensity modulated fields. The results were compared with film measurements using the gamma test. The comparisons show a very good matching, even in high gradient dose regions. The volume-averaging effect of the pixels is negligible and the spatial resolution is enough to verify these regions. However, some mismatches between the detectors have been found in regions where low-energy scattered photons significantly contribute to the total dose. These differences are not very important (in fact, the measurements of both detectors are in agreement using the gamma test with tolerances of 3% and 3 mm in most of those regions), and may be associated with the film energy dependence. In addition, the linear array repeatability (0.27% one standard deviation) is much better than the film one (approximately 3%). The good repeatability, small pixel size and high spatial resolution make the detector ideal for the real time profile verification of high gradient beam profiles like those present in intensity modulated radiation therapy and radiosurgery.

  16. Bayesian inference for two-part mixed-effects model using skew distributions, with application to longitudinal semicontinuous alcohol data.

    PubMed

    Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie

    2017-08-01

    Semicontinuous data featured with an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be the substance abuse/dependence symptoms data for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by the correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions including skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
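
    A rough frequentist sketch of the two-part structure is shown below, without the skew distributions, random effects or Bayesian estimation used in the paper; the synthetic data and the single covariate are purely illustrative.

```python
# Two-part model in outline: Part I models the probability of a positive
# outcome (logistic regression); Part II models the intensity among positives
# (here a simple log-normal regression). Data below are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({"age": rng.normal(16, 1.5, n)})
is_pos = rng.random(n) < 0.4
df["symptoms"] = np.where(is_pos, rng.lognormal(1.0, 0.8, n), 0.0)   # semicontinuous outcome

# Part I: occurrence of a positive value
df["any_pos"] = (df["symptoms"] > 0).astype(int)
part1 = smf.logit("any_pos ~ age", df).fit(disp=False)

# Part II: intensity of the positive values
pos = df[df["symptoms"] > 0].copy()
pos["log_sym"] = np.log(pos["symptoms"])
part2 = smf.ols("log_sym ~ age", pos).fit()

print(part1.params, part2.params, sep="\n")
```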

  17. Rapid and simultaneous analysis of five alkaloids in four parts of Coptidis Rhizoma by near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Jintao, Xue; Yufei, Liu; Liming, Ye; Chunyan, Li; Quanwei, Yang; Weiying, Wang; Yun, Jing; Minxiang, Zhang; Peng, Li

    2018-01-01

    Near-Infrared Spectroscopy (NIRS) was first used to develop a method for rapid and simultaneous determination of 5 active alkaloids (berberine, coptisine, palmatine, epiberberine and jatrorrhizine) in 4 parts (rhizome, fibrous root, stem and leaf) of Coptidis Rhizoma. A total of 100 samples from 4 main places of origin were collected and studied. With HPLC analysis values as the calibration reference, the quantitative analysis of the 5 marker components was performed by two different modeling methods: partial least-squares (PLS) regression as linear regression and artificial neural networks (ANN) as non-linear regression. The results indicated that the 2 types of models established were robust, accurate and repeatable for the five active alkaloids; the ANN models were more suitable for the determination of berberine, coptisine and palmatine, while the PLS models were more suitable for the analysis of epiberberine and jatrorrhizine. The performance of the optimal models was as follows: the correlation coefficient (R) for berberine, coptisine, palmatine, epiberberine and jatrorrhizine was 0.9958, 0.9956, 0.9959, 0.9963 and 0.9923, respectively; the root mean square error of validation (RMSEP) was 0.5093, 0.0578, 0.0443, 0.0563 and 0.0090, respectively. Furthermore, for the comprehensive exploitation and utilization of the plant resource of Coptidis Rhizoma, the established NIR models were used to analyse the content of the 5 active alkaloids in the 4 parts of Coptidis Rhizoma and from the 4 main places of origin. This work demonstrated that NIRS may be a promising method for routine screening, off-line fast analysis or on-line quality assessment of traditional Chinese medicine (TCM).
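
    For readers who want to reproduce the linear-versus-non-linear comparison in outline, a minimal scikit-learn sketch is given below; the spectra and reference values are synthetic stand-ins, and the component count and network size are arbitrary choices, not the settings used in the study.

```python
# Compare a linear (PLS) and a non-linear (neural network) calibration of
# spectra against reference values, in the spirit of the abstract.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 100, 200
X = rng.normal(size=(n_samples, n_wavelengths))               # stand-in NIR spectra
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n_samples)   # stand-in HPLC reference values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

for name, model in [("PLS", pls), ("ANN", ann)]:
    pred = np.ravel(model.predict(X_te))
    rmsep = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: R2 = {r2_score(y_te, pred):.3f}, RMSEP = {rmsep:.3f}")
```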

  18. Retuning the DARHT Axis-II Linear Induction Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Carl August Jr.; Schulze, Martin E.; Carlson, Carl A.

    2015-03-31

    The Dual-Axis Radiographic Hydrodynamic Test (DARHT) facility uses bremsstrahlung radiation source spots produced by the focused electron beams from two linear induction accelerators (LIAs) to radiograph large hydrodynamic experiments driven by high explosives. The Axis-II 1.7-kA, 1600-ns beam pulse is transported through the LIA by the magnetic field from 91 solenoids as it is accelerated to ~16.5 MeV. The magnetic field produced by the solenoids and 80 steering dipole pairs for a given set of magnet currents is known as the “tune” of the accelerator [1]. From June, 2013 through September, 2014 a single tune was used. This tune was based on measurements of LIA element positions made over several years [2], and models of solenoidal fields derived from actual field measurements [3] [4]. Based on the focus scan technique, changing the tune of the accelerator and downstream transport had no effect on the beam emittance, to within the uncertainties of the measurement. Beam sizes appear to have been overestimated in all prior measurements because of the low magnification of the imaging system. This has resulted in overestimates of emittance by ~50%. The high magnification imaging should be repeated with the old tune for direct comparison with the new tune. High magnification imaging with the new accelerator tune should be repeated after retuning the downstream to produce a much more symmetric beam to reduce the uncertainty of this measurement. Thus, these results should be considered preliminary until we can effect a new tune to produce symmetric spots at our imaging station, for high magnification images.

  19. Properties of axially loaded implant-abutment assemblies using digital holographic interferometry analysis.

    PubMed

    Brozović, Juraj; Demoli, Nazif; Farkaš, Nina; Sušić, Mato; Alar, Zeljko; Gabrić Pandurić, Dragana

    2014-03-01

    The aim of this study was to (i) obtain the force-related interferometric patterns of loaded dental implant-abutment assemblies differing in diameter and brand using digital holographic interferometry (DHI) and (ii) determine the influence of implant diameter on the extent of load-induced implant deformation by quantifying and comparing the obtained interferometric data. Experiments included five implant brands (Ankylos, Astra Tech, blueSKY, MIS and Straumann), each represented by a narrow and a wide diameter implant connected to a corresponding abutment. A quasi-Fourier setup with a 25mW helium-neon laser was used for interferometric measurements in the cervical 5mm of the implants. Holograms were recorded in two conditions per measurement: a 10N preloaded and a measuring-force loaded assembly, resulting with an interferogram. This procedure was repeated throughout the whole process of incremental axial loading, from 20N to 120N. Each measurement series was repeated three times for each assembly, with complete dismantling of the implant-loading device in between. Additional software analyses calculated deformation data. Deformations were presented as mean values±standard deviations. Statistical analysis was performed using linear mixed effects modeling in R's lme4 package. Implants exhibited linear deformation patterns. The wide diameter group had lower mean deformation values than the narrow diameter group. The diameter significantly affected the deformation throughout loading sessions. This study gained in vitro implant performance data, compared the deformations in implant bodies and numerically stated the biomechanical benefits of wider diameter implants. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  20. Validating a new device for measuring tear evaporation rates.

    PubMed

    Rohit, Athira; Ehrmann, Klaus; Naduvilath, Thomas; Willcox, Mark; Stapleton, Fiona

    2014-01-01

    To calibrate and validate a commercially available dermatology instrument to measure tear evaporation rate of contact lens wearers. A dermatology instrument was modified by attaching a swim goggle cup such that the cup sealed around the eye socket. Results for the unmodified instrument are dependent on probe area and enclosed volume. Calibration curves were established using a model eye, to account for individual variations in chamber volume and exposed area. Fifteen participants were recruited and the study included a contact lens wear and a no contact lens wear stage. Day and diurnal variation of the measurements were assessed by taking the measurement three times a day over 2 days. The coefficient of repeatability of the measurement was calculated and a linear mixed model assessed the influence of humidity, temperature, contact lens wear, day and diurnal variations on tear evaporation rate. The associations between variables were assessed using Pearson correlation coefficient. Absolute evaporation rates with and without contact lens wear were calculated based on the new calibration. The measurements were most repeatable during the evening with no lens wear (COR = 49 g m⁻² h) and least repeatable during the evening with contact lens wear (COR = 93 g m⁻² h). Humidity (p = 0.007), and contact lens wear (p < 0.01), significantly affected the tear evaporation rate. However, temperature (p = 0.54) diurnal variation (p = 0.85) and different days (p = 0.65) had no significant effect after controlling for humidity. Tear evaporation rates can be measured using a modified dermatology instrument. Measurements were higher and more variable with lens wear consistent with previous literature. Control of environmental conditions is important as a higher humidity results in a reduced evaporation rate. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
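
    The coefficient of repeatability reported above can be computed from paired repeat measurements in a few lines. The sketch below uses one common definition (1.96 times the standard deviation of the within-subject differences) on invented data, which may differ in detail from the formula used in the study.

```python
# Coefficient of repeatability (COR) for paired repeated measurements.
import numpy as np

rng = np.random.default_rng(2)
visit1 = rng.normal(60, 15, size=15)           # hypothetical evaporation rates, g m^-2 h^-1
visit2 = visit1 + rng.normal(0, 10, size=15)   # repeat measurements on the same participants

differences = visit2 - visit1
cor = 1.96 * np.std(differences, ddof=1)       # one common COR definition
print(f"COR = {cor:.1f} g m^-2 h^-1")
```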

  1. SU-D-207B-07: Development of a CT-Radiomics Based Early Response Prediction Model During Delivery of Chemoradiation Therapy for Pancreatic Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klawikowski, S; Christian, J; Schott, D

    Purpose: Pilot study developing a CT-texture based model for early assessment of treatment response during the delivery of chemoradiation therapy (CRT) for pancreatic cancer. Methods: Daily CT data acquired for 24 pancreatic head cancer patients using CT-on-rails, during the routine CT-guided CRT delivery with a radiation dose of 50.4 Gy in 28 fractions, were analyzed. The pancreas head was contoured on each daily CT. Texture analysis was performed within the pancreas head contour using a research tool (IBEX). Over 1300 texture metrics including: grey level co-occurrence, run-length, histogram, neighborhood intensity difference, and geometrical shape features were calculated for each daily CT. Metric-trend information was established by finding the best fit of either a linear, quadratic, or exponential function for each metric value versus accumulated dose. Thus all the daily CT texture information was consolidated into a best-fit trend type for a given patient and texture metric. Linear correlation was performed between the patient histological response vector (good, medium, poor) and all combinations of 23 patient subgroups (statistical jackknife) determining which metrics were most correlated to response and repeatedly reliable across most patients. Control correlations against CT scanner, reconstruction kernel, and gated/nongated CT images were also calculated. Euclidean distance measure was used to group/sort patient vectors based on the data of these trend-response metrics. Results: We found four specific trend-metrics (Gray Level Coocurence Matrix311-1InverseDiffMomentNorm, Gray Level Coocurence Matrix311-1InverseDiffNorm, Gray Level Coocurence Matrix311-1 Homogeneity2, and Intensity Direct Local StdMean) that were highly correlated with patient response and repeatedly reliable. Our four trend-metric model successfully ordered our pilot response dataset (p=0.00070). We found no significant correlation to our control parameters: gating (p=0.7717), scanner (p=0.9741), and kernel (p=0.8586). Conclusion: We have successfully created a CT-texture based early treatment response prediction model using the CTs acquired during the delivery of chemoradiation therapy for pancreatic cancer. Future testing is required to validate the model with more patient data.
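
    The "best-fit trend type" step can be illustrated with a small curve-fitting sketch; the metric values, starting parameters and dose grid below are placeholders, not IBEX output or the study's actual fractionation bookkeeping.

```python
# Fit linear, quadratic and exponential functions of accumulated dose to a
# daily texture metric and keep the form with the smallest residual sum of squares.
import numpy as np
from scipy.optimize import curve_fit

dose = np.linspace(1.8, 50.4, 28)                                  # Gy, one fraction per day
metric = 1.0 + 0.02 * dose + np.random.default_rng(3).normal(0, 0.05, dose.size)

models = {
    "linear":      (lambda d, a, b:    a + b * d,            (1.0, 0.0)),
    "quadratic":   (lambda d, a, b, c: a + b * d + c * d**2, (1.0, 0.0, 0.0)),
    "exponential": (lambda d, a, b:    a * np.exp(b * d),    (1.0, 0.01)),
}

best = None
for name, (f, p0) in models.items():
    params, _ = curve_fit(f, dose, metric, p0=p0, maxfev=10000)
    rss = np.sum((metric - f(dose, *params)) ** 2)
    if best is None or rss < best[1]:
        best = (name, rss)
print("best-fitting trend type:", best[0])
```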

  2. Predicting within-herd prevalence of infection with bovine leukemia virus using bulk-tank milk antibody levels.

    PubMed

    Nekouei, Omid; Stryhn, Henrik; VanLeeuwen, John; Kelton, David; Hanna, Paul; Keefe, Greg

    2015-11-01

    Enzootic bovine leukosis (EBL) is an economically important infection of dairy cattle caused by bovine leukemia virus (BLV). Estimating the prevalence of BLV within dairy herds is a fundamental step towards pursuing efficient control programs. The objectives of this study were: (1) to determine the prevalence of BLV infection at the herd level using a bulk-tank milk (BTM) antibody ELISA in the Maritime region of Canada (3 provinces); and (2) to develop appropriate statistical models for predicting within-herd prevalence of BLV infection using BTM antibody ELISA titers. During 2013, three monthly BTM samples were collected from all dairy farms in the Maritime region of Canada (n=623) and tested for BLV milk antibodies using a commercial indirect ELISA. Based on the mean of the 3 BTM titers, 15 strata of herds (5 per province) were defined. From each stratum, 6 herds were randomly selected for a total of 90 farms. Within every selected herd, an additional BTM sample was taken (round 4), approximately 2 months after the third round. On the same day of BTM sampling, all cows that contributed milk to the fourth BTM sample were individually tested for BLV milk antibodies (n=6111) to estimate the true within-herd prevalence for the 90 herds. The association between true within-herd prevalence of BLV and means of various combinations of the BTM titers was assessed using linear regression models, adjusting for the stratified random sampling design. Herd level prevalence of BLV in the region was 90.8%. In the individual testing, 30.4% of cows were positive. True within-herd prevalences ranged from 0 to 94%. All linear regression models were able to predict the true within-herd prevalence of BLV reasonably well (R(2)>0.69). Predictions from the models were particularly accurate for low-to-medium spectrums of the BTM titers. In general, as a greater number of the four repeated BTM titers were incorporated in the models, narrower confidence intervals around the prediction lines were achieved. The model including all 4 BTM tests as the predictor had the best fit, although the models using 2 and 3 BTM tests provided similar results to 4 repeated tests. Therefore, testing two or three BTM samples with approximately two-month intervals would provide relatively precise estimates for the potential number of infected cows in a herd. The developed models in this study could be applied to control and eradication programs for BLV as cost-effective tools. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Population pharmacokinetics-pharmacodynamics of vedolizumab in patients with ulcerative colitis and Crohn's disease.

    PubMed

    Rosario, M; Dirks, N L; Gastonguay, M R; Fasanmade, A A; Wyant, T; Parikh, A; Sandborn, W J; Feagan, B G; Reinisch, W; Fox, I

    2015-07-01

    Vedolizumab, an anti-α(4)β(7) integrin monoclonal antibody (mAb), is indicated for treating patients with moderately to severely active ulcerative colitis (UC) and Crohn's disease (CD). As higher therapeutic mAb concentrations have been associated with greater efficacy in inflammatory bowel disease, understanding determinants of vedolizumab clearance may help to optimise dosing. To characterise vedolizumab pharmacokinetics in patients with UC and CD, to identify clinically relevant determinants of vedolizumab clearance, and to describe the pharmacokinetic-pharmacodynamic relationship using population modelling. Data from a phase 1 healthy volunteer study, a phase 2 UC study, and 3 phase 3 UC/CD studies were included. Population pharmacokinetic analysis for repeated measures was conducted using nonlinear mixed effects modelling. Results from the base model, developed using extensive phase 1 and 2 data, were used to develop the full covariate model, which was fit to sparse phase 3 data. Vedolizumab pharmacokinetics was described by a 2-compartment model with parallel linear and nonlinear elimination. Using reference covariate values, linear elimination half-life of vedolizumab was 25.5 days; linear clearance (CL(L)) was 0.159 L/day for UC and 0.155 L/day for CD; central compartment volume of distribution (V(c)) was 3.19 L; and peripheral compartment volume of distribution was 1.66 L. Interindividual variabilities (%CV) were 35% for CLL and 19% for V(c); residual variance was 24%. Only extreme albumin and body weight values were identified as potential clinically important predictors of CL(L). Population pharmacokinetic parameters were similar in patients with moderately to severely active UC and CD. This analysis supports use of vedolizumab fixed dosing in these patients. Clinicaltrials.gov Identifiers: NCT01177228; NCT00783718 (GEMINI 1); NCT00783692 (GEMINI 2); NCT01224171 (GEMINI 3). © 2015 Takeda Pharmaceuticals International Co published by John Wiley & Sons Ltd.
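
    A structural sketch of the two-compartment model with parallel linear and nonlinear (Michaelis-Menten-type) elimination is shown below. The linear clearance and the two volumes are the values quoted in the abstract (UC); the intercompartmental clearance and the Michaelis-Menten terms are invented placeholders, so the output is illustrative only and not the published model.

```python
# Two-compartment disposition with parallel linear + Michaelis-Menten elimination.
import numpy as np
from scipy.integrate import solve_ivp

CL_L, Vc, Vp = 0.159, 3.19, 1.66    # L/day, L, L (quoted in the abstract for UC)
Q = 0.1                              # L/day, intercompartmental clearance (assumed)
Vmax, Km = 0.2, 1.0                  # mg/day, mg/L (assumed nonlinear-elimination terms)

def two_cmt(t, y):
    Ac, Ap = y                                    # drug amounts (mg) in central/peripheral
    Cc, Cp = Ac / Vc, Ap / Vp
    elim = CL_L * Cc + Vmax * Cc / (Km + Cc)      # parallel linear + nonlinear elimination
    dAc = -elim - Q * Cc + Q * Cp
    dAp = Q * Cc - Q * Cp
    return [dAc, dAp]

sol = solve_ivp(two_cmt, (0, 56), y0=[300.0, 0.0], dense_output=True)   # 300 mg IV dose
print("central concentration at day 28:", sol.sol(28)[0] / Vc, "mg/L")
```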

  4. Contrast Enhanced Maximum Intensity Projection Ultrasound Imaging for Assessing Angiogenesis in Murine Glioma and Breast Tumor Models: A Comparative Study

    PubMed Central

    Forsberg, Flemming; Ro, Raymond J.; Fox, Traci B; Liu, Ji-Bin; Chiou, See-Ying; Potoczek, Magdalena; Goldberg, Barry B

    2010-01-01

    The purpose of this study was to prospectively compare noninvasive, quantitative measures of vascularity obtained from 4 contrast enhanced ultrasound (US) techniques to 4 invasive immunohistochemical markers of tumor angiogenesis in a large group of murine xenografts. Glioma (C6) or breast cancer (NMU) cells were implanted in 144 rats. The contrast agent Optison (GE Healthcare, Princeton, NJ) was injected in a tail vein (dose: 0.4ml/kg). Power Doppler imaging (PDI), pulse-subtraction harmonic imaging (PSHI), flash-echo imaging (FEI), and Microflow imaging (MFI; a technique creating maximum intensity projection images over time) was performed with an Aplio scanner (Toshiba America Medical Systems, Tustin, CA) and a 7.5 MHz linear array. Fractional tumor neovascularity was calculated from digital clips of contrast US, while the relative area stained was calculated from specimens. Results were compared using a factorial, repeated measures ANOVA, linear regression and z-tests. The tortuous morphology of tumor neovessels was visualized better with MFI than with the other US modes. Cell line, implantation method and contrast US imaging technique were significant parameters in the ANOVA model (p<0.05). The strongest correlation determined by linear regression in the C6 model was between PSHI and percent area stained with CD31 (r=0.37, p<0.0001). In the NMU model the strongest correlation was between FEI and COX-2 (r=0.46, p<0.0001). There were no statistically significant differences between correlations obtained with the various US methods (p>0.05). In conclusion, the largest study of contrast US of murine xenografts to date has been conducted and quantitative contrast enhanced US measures of tumor neovascularity in glioma and breast cancer xenograft models appear to provide a noninvasive marker for angiogenesis; although the best method for monitoring angiogenesis was not conclusively established. PMID:21144542

  5. Axial displacement of external and internal implant-abutment connection evaluated by linear mixed model analysis.

    PubMed

    Seol, Hyon-Woo; Heo, Seong-Joo; Koak, Jai-Young; Kim, Seong-Kyun; Kim, Shin-Koo

    2015-01-01

    To analyze the axial displacement of external and internal implant-abutment connection after cyclic loading. Three groups of external abutments (Ext group), an internal tapered one-piece-type abutment (Int-1 group), and an internal tapered two-piece-type abutment (Int-2 group) were prepared. Cyclic loading was applied to implant-abutment assemblies at 150 N with a frequency of 3 Hz. The amount of axial displacement, the Periotest values (PTVs), and the removal torque values(RTVs) were measured. Both a repeated measures analysis of variance and pattern analysis based on the linear mixed model were used for statistical analysis. Scanning electron microscopy (SEM) was used to evaluate the surface of the implant-abutment connection. The mean axial displacements after 1,000,000 cycles were 0.6 μm in the Ext group, 3.7 μm in the Int-1 group, and 9.0 μm in the Int-2 group. Pattern analysis revealed a breakpoint at 171 cycles. The Ext group showed no declining pattern, and the Int-1 group showed no declining pattern after the breakpoint (171 cycles). However, the Int-2 group experienced continuous axial displacement. After cyclic loading, the PTV decreased in the Int-2 group, and the RTV decreased in all groups. SEM imaging revealed surface wear in all groups. Axial displacement and surface wear occurred in all groups. The PTVs remained stable, but the RTVs decreased after cyclic loading. Based on linear mixed model analysis, the Ext and Int-1 groups' axial displacements plateaued after little cyclic loading. The Int-2 group's rate of axial displacement slowed after 100,000 cycles.

  6. Mathematical model of alternative mechanism of telomere length maintenance

    NASA Astrophysics Data System (ADS)

    Kollár, Richard; Bod'ová, Katarína; Nosek, Jozef; Tomáška, L'ubomír

    2014-03-01

    Biopolymer length regulation is a complex process that involves a large number of biological, chemical, and physical subprocesses acting simultaneously across multiple spatial and temporal scales. An illustrative example important for genomic stability is the length regulation of telomeres—nucleoprotein structures at the ends of linear chromosomes consisting of tandemly repeated DNA sequences and a specialized set of proteins. Maintenance of telomeres is often facilitated by the enzyme telomerase but, particularly in telomerase-free systems, the maintenance of chromosomal termini depends on alternative lengthening of telomeres (ALT) mechanisms mediated by recombination. Various linear and circular DNA structures were identified to participate in ALT, however, dynamics of the whole process is still poorly understood. We propose a chemical kinetics model of ALT with kinetic rates systematically derived from the biophysics of DNA diffusion and looping. The reaction system is reduced to a coagulation-fragmentation system by quasi-steady-state approximation. The detailed treatment of kinetic rates yields explicit formulas for expected size distributions of telomeres that demonstrate the key role played by the J factor, a quantitative measure of bending of polymers. The results are in agreement with experimental data and point out interesting phenomena: an appearance of very long telomeric circles if the total telomere density exceeds a critical value (excess mass) and a nonlinear response of the telomere size distributions to the amount of telomeric DNA in the system. The results can be of general importance for understanding dynamics of telomeres in telomerase-independent systems as this mode of telomere maintenance is similar to the situation in tumor cells lacking telomerase activity. Furthermore, due to its universality, the model may also serve as a prototype of an interaction between linear and circular DNA structures in various settings.

  7. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
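
    The core correction is a one-liner: divide the observed slope by an estimate of the reliability ratio obtained from the repeated measurements. The simulation sketch below illustrates the idea with invented data; it is not the authors' supplied software tools.

```python
# Regression dilution: the slope estimated from an error-prone risk factor is
# attenuated; dividing by the reliability ratio (estimated here from two
# replicate measurements) recovers the slope on the error-free scale.
import numpy as np

rng = np.random.default_rng(4)
true_x = rng.normal(0, 1, 500)                 # error-free risk factor (unobserved)
x_main = true_x + rng.normal(0, 0.7, 500)      # main-study measurement
x_repeat = true_x + rng.normal(0, 0.7, 500)    # repeat measurement (reliability study)
y = 2.0 * true_x + rng.normal(0, 1, 500)       # continuous outcome; true slope is 2.0

observed_slope = np.polyfit(x_main, y, 1)[0]                               # attenuated
reliability = np.cov(x_main, x_repeat)[0, 1] / np.var(x_main, ddof=1)      # moment estimate
corrected_slope = observed_slope / reliability
print(f"observed {observed_slope:.2f}, corrected {corrected_slope:.2f}")
```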

  8. Parallel But Not Equivalent: Challenges and Solutions for Repeated Assessment of Cognition over Time

    PubMed Central

    Gross, Alden L.; Inouye, Sharon K.; Rebok, George W.; Brandt, Jason; Crane, Paul K.; Parisi, Jeanine M.; Tommet, Doug; Bandeen-Roche, Karen; Carlson, Michelle C.; Jones, Richard N.

    2013-01-01

    Objective Analyses of individual differences in change may be unintentionally biased when versions of a neuropsychological test used at different follow-ups are not of equivalent difficulty. This study’s objective was to compare mean, linear, and equipercentile equating methods and demonstrate their utility in longitudinal research. Study Design and Setting The Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE, N=1,401) study is a longitudinal randomized trial of cognitive training. The Alzheimer’s Disease Neuroimaging Initiative (ADNI, n=819) is an observational cohort study. Nonequivalent alternate versions of the Auditory Verbal Learning Test (AVLT) were administered in both studies. Results Using visual displays, raw and mean-equated AVLT scores in both studies showed obvious nonlinear trajectories in reference groups that should show minimal change, poor equivalence over time (ps≤0.001), and raw scores demonstrated poor fits in models of within-person change (RMSEAs>0.12). Linear and equipercentile equating produced more similar means in reference groups (ps≥0.09) and performed better in growth models (RMSEAs<0.05). Conclusion Equipercentile equating is the preferred equating method because it accommodates tests more difficult than a reference test at different percentiles of performance and performs well in models of within-person trajectory. The method has broad applications in both clinical and research settings to enhance the ability to use nonequivalent test forms. PMID:22540849
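
    Of the three methods compared, equipercentile equating is the least obvious to implement; a bare-bones, unsmoothed sketch on synthetic score distributions is shown below (real applications typically add presmoothing and interpolation, and the AVLT forms here are stand-ins).

```python
# Equipercentile equating: map scores on an alternate form onto the scale of a
# reference form by matching empirical percentiles.
import numpy as np

rng = np.random.default_rng(5)
form_a = rng.normal(50, 10, 1000)        # scores on the reference form
form_b = rng.normal(45, 12, 1000)        # scores on a (harder) alternate form

def equate_equipercentile(scores_b, ref_a, ref_b):
    """Convert form-B scores to form-A equivalents via percentile matching."""
    pct = np.searchsorted(np.sort(ref_b), scores_b) / len(ref_b) * 100
    return np.percentile(ref_a, pct)

print(equate_equipercentile(np.array([45.0, 57.0]), form_a, form_b))
```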

  9. Visuo-manual tracking: does intermittent control with aperiodic sampling explain linear power and non-linear remnant without sensorimotor noise?

    PubMed

    Gollee, Henrik; Gawthrop, Peter J; Lakie, Martin; Loram, Ian D

    2017-11-01

    A human controlling an external system is described most easily and conventionally as linearly and continuously translating sensory input to motor output, with the inevitable output remnant, non-linearly related to the input, attributed to sensorimotor noise. Recent experiments show sustained manual tracking involves repeated refractoriness (insensitivity to sensory information for a certain duration), with the temporary 200-500 ms periods of irresponsiveness to sensory input making the control process intrinsically non-linear. This evidence calls for re-examination of the extent to which random sensorimotor noise is required to explain the non-linear remnant. This investigation of manual tracking shows how the full motor output (linear component and remnant) can be explained mechanistically by aperiodic sampling triggered by prediction error thresholds. Whereas broadband physiological noise is general to all processes, aperiodic sampling is associated with sensorimotor decision making within specific frontal, striatal and parietal networks; we conclude that manual tracking utilises such slow serial decision making pathways up to several times per second. The human operator is described adequately by linear translation of sensory input to motor output. Motor output also always includes a non-linear remnant resulting from random sensorimotor noise from multiple sources, and non-linear input transformations, for example thresholds or refractory periods. Recent evidence showed that manual tracking incurs substantial, serial, refractoriness (insensitivity to sensory information of 350 and 550 ms for 1st and 2nd order systems respectively). Our two questions are: (i) What are the comparative merits of explaining the non-linear remnant using noise or non-linear transformations? (ii) Can non-linear transformations represent serial motor decision making within the sensorimotor feedback loop intrinsic to tracking? Twelve participants (instructed to act in three prescribed ways) manually controlled two systems (1st and 2nd order) subject to a periodic multi-sine disturbance. Joystick power was analysed using three models, continuous-linear-control (CC), continuous-linear-control with calculated noise spectrum (CCN), and intermittent control with aperiodic sampling triggered by prediction error thresholds (IC). Unlike the linear mechanism, the intermittent control mechanism explained the majority of total power (linear and remnant) (77-87% vs. 8-48%, IC vs. CC). Between conditions, IC used thresholds and distributions of open loop intervals consistent with, respectively, instructions and previously measured, model independent values; whereas CCN required changes in noise spectrum deviating from broadband, signal dependent noise. We conclude that manual tracking uses open loop predictive control with aperiodic sampling. Because aperiodic sampling is inherent to serial decision making within previously identified, specific frontal, striatal and parietal networks we suggest that these structures are intimately involved in visuo-manual tracking. © 2017 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.

  10. Selecting a separable parametric spatiotemporal covariance structure for longitudinal imaging data.

    PubMed

    George, Brandon; Aban, Inmaculada

    2015-01-15

    Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on types I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be performed in practice, as well as how covariance structure choice can change inferences about fixed effects. Copyright © 2014 John Wiley & Sons, Ltd.
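
    A separable structure is simply the Kronecker product of a temporal and a spatial correlation matrix. The sketch below builds one with an AR(1) temporal part and an exponential spatial part (two of the structures named in the abstract); the locations, range parameter and variance are invented for illustration.

```python
# Separable spatiotemporal covariance: V = sigma^2 * (R_time kron R_space).
import numpy as np

def ar1_corr(n_times, rho):
    idx = np.arange(n_times)
    return rho ** np.abs(np.subtract.outer(idx, idx))

def exponential_corr(coords, range_param):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.exp(-d / range_param)

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])   # three spatial locations
R_time = ar1_corr(n_times=4, rho=0.6)                     # temporal correlation (AR(1))
R_space = exponential_corr(coords, range_param=1.5)       # spatial correlation (exponential)

sigma2 = 2.0                                              # overall variance
V = sigma2 * np.kron(R_time, R_space)                     # 12 x 12 separable covariance
print(V.shape)
```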

  11. Effects of long-term low-level radiation exposure after the Chernobyl catastrophe on immunoglobulins in children residing in contaminated areas: prospective and cross-sectional studies

    PubMed Central

    2014-01-01

    Background After the Chernobyl nuclear incident in 1986, children in the Narodichesky region, located 80 km west of the Chernobyl Power Plant, were exposed to 137Cesium (137Cs). Little is known about the effects of chronic low-level radiation on humoral immune responses in children residing in contaminated areas. Methods In four different approaches we investigated the effect of residential 137Cs exposure on immunoglobulins A, G, M, and specific immunoglobulin E in children. In a dynamic cohort (1993–1998) we included 617 children providing 2,407 repeated measurements; 421 and 523 children in two cross-sectional samples (1997–1998 and 2008–2010, respectively); and 25 participants in a small longitudinal cohort (1997–2010). All medical exams, blood collections, and analyses were conducted by the same team. We used mixed linear models to analyze repeated measurements in cohorts and general linear regression models for cross-sectional studies. Results Residential soil contamination in 2008 was highly correlated with the individual body burden of 137Cs. Serum IgG and IgM concentrations increased between 1993 and 1998. Children with higher 137Cs soil exposure had lower serum IgG levels, which, however, increased in the small cohort assessed between 1997 and 2010. Children within the fourth quintile of 137Cs soil exposure (266–310 kBq/m2) had higher IgM serum concentrations between 1993 and 1998 but these declined between 1997 and 2010. IgA remained stable with median 137Cs exposures related to higher IgA levels, which was corroborated in the cross-sectional study of 2008–2010. Specific IgE against indoor allergens was detected less often in children with higher 137Cs exposure. Conclusions Our findings show radiation-related alterations of immunoglobulins which by themselves do not constitute adverse health effects. Further investigations are necessary to understand how these changes affect health status. PMID:24886042

  12. SU-E-T-638: Evaluation and Comparison of Landauer Microstar (OSLD) Readers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souri, S; Ahmed, Y; Cao, Y

    2014-06-15

    Purpose: To evaluate and compare characteristic performance of a new Landauer nanodot Reader with the previous model. Methods: In order to calibrate and test the reader, a set of nanodots were irradiated using a Varian Truebeam Linac. Solid water slabs and bolus were used in the process of irradiation. Calibration sets of nanodots were irradiated for radiation dose ranges: 0 to 10 and 20 to 1000 cGy, using 6MV photons. Additionally, three sets of nanodots were each irradiated using 6MV, 10MV and 15MV beams. For each beam energy, and selected dose in the range of 3 to 1000 cGy, a pair of nanodots was irradiated and three readings were obtained with both readers. Results: The analysis shows that for 3 photon beam energies and selected ranges of dose, the calculated absorbed dose agrees well with the expected value. The results illustrate that the new Microstar II reader is a highly consistent system and that the repeated readings provide results with a reasonably small standard deviation. For all practical purposes, the response of the system is linear for all radiation beam energies. Conclusion: The Microstar II nanodot reader is consistent, accurate, and reliable. The new hardware design and corresponding software contain several advantages over the previous model. The automatic repeat reading mechanism, which helps improve reproducibility and reduce processing time, and the smaller unit size, which renders ease of transport, are two such features. The present study shows that for high dose ranges a polynomial calibration equation provides more consistent results. A 3rd order polynomial calibration curve was used to analyze the readings of dosimeters exposed to high dose range radiation. It was observed that the results show less error compared to those calculated by using linear calibration curves, as provided by Landauer system software for all dose ranges.
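
    The third-order polynomial calibration suggested in the conclusion can be sketched with a simple fit of dose against reader signal; the reader response curve and noise level below are synthetic, not Microstar data.

```python
# Third-order polynomial calibration: map reader signal back to delivered dose.
import numpy as np

dose = np.array([20, 50, 100, 200, 400, 700, 1000], dtype=float)     # cGy
signal = 1.0 * dose - 1.5e-4 * dose**2 + 4.0e-8 * dose**3            # stand-in reader response
signal *= 1 + np.random.default_rng(6).normal(0, 0.01, dose.size)    # 1% reading noise

coeffs = np.polyfit(signal, dose, deg=3)        # calibration curve: signal -> dose
recovered = np.polyval(coeffs, signal)
print(np.max(np.abs(recovered - dose)))         # worst-case calibration error, cGy
```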

  13. Study of resonant modes of the harbour of Siracusa, Italy, and of the effects of breakwaters in case of a tsunami event.

    NASA Astrophysics Data System (ADS)

    Pagnoni, Gianluca; Tinti, Stefano

    2016-04-01

    The eastern coast of Sicily has been hit by many historical tsunamis of local and remote origin. This zone and in particular Siracusa, as test site, was selected in the FP7 European project ASTARTE (Assessment, Strategy And Risk Reduction for Tsunamis in Europe - FP7-ENV2013 6.4-3, Grant 603839). According to the project goals, in this work oscillations modes of the Siracusa harbour were analysed with focus on the typical tsunami periods range, and on the protecting effects of breakwaters by using linear and non-linear simulation models. The city of Siracusa is located north of the homonymous gulf and has two harbours, called "Piccolo" (small) and "Grande" (grand) that are connected through a narrow channel. The harbour "Piccolo" is the object of this work. It is located at the end of a bay facing east and bordered on the south by the peninsula of Ortigia and on the north by the mainland. The basin has an area of approximately 100,000 m2 and is very shallow with an average depth of 2.5 m. It is protected by two breakwaters reducing its mouth to only 40 m width. This study was carried out using the numerical code UBO-TSUFD that solves linear and non-linear shallow-water equations on a high-resolution 2m x 2m regular grid. Resonant modes were searched by sinusoidal forcing on the open boundary with periods in a range from about 60 s to 1600 s covering the typical tsunami spectrum. The work was divided into three phases. First we studied the natural resonance frequencies, and in particular the Helmholtz resonance mode by using a linear fixed-geometry model and assuming that the connecting channel between the two Siracusa ports is closed. Second, we repeated the analysis by using a non-linear simulation model accounting for flooding and for an open connection channel. Eventually, we forced the harbour by means of synthetic signals with amplitude, period and duration of the main historical tsunamis attacking Siracusa, namely the AD 365, the 1693 and the 1908 tsunami events. In this last case our attention was also focused on quantifying the role of the existing breakwaters in mitigating the incoming tsunami.

  14. Optimizing financial effects of HIE: a multi-party linear programming approach.

    PubMed

    Sridhar, Srikrishna; Brennan, Patricia Flatley; Wright, Stephen J; Robinson, Stephen M

    2012-01-01

    To describe an analytical framework for quantifying the societal savings and financial consequences of a health information exchange (HIE), and to demonstrate its use in designing pricing policies for sustainable HIEs. We developed a linear programming model to (1) quantify the financial worth of HIE information to each of its participating institutions and (2) evaluate three HIE pricing policies: fixed-rate annual, charge per visit, and charge per look-up. We considered three desired outcomes of HIE-related emergency care (modeled as parameters): preventing unrequired hospitalizations, reducing duplicate tests, and avoiding emergency department (ED) visits. We applied this framework to 4639 ED encounters over a 12-month period in three large EDs in Milwaukee, Wisconsin, using Medicare/Medicaid claims data, public reports of hospital admissions, published payer mix data, and use data from a not-for-profit regional HIE. For this HIE, data accesses produced net financial gains for all providers and payers. Gains, due to HIE, were more significant for providers with more health maintenance organizations patients. Reducing unrequired hospitalizations and avoiding repeat ED visits were responsible for more than 70% of the savings. The results showed that fixed annual subscriptions can sustain this HIE, while ensuring financial gains to all participants. Sensitivity analysis revealed that the results were robust to uncertainties in modeling parameters. Our specific HIE pricing recommendations depend on the unique characteristics of this study population. However, our main contribution is the modeling approach, which is broadly applicable to other populations.
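
    The abstract's framework is a linear program. The toy problem below, with entirely invented numbers and a much-simplified objective, shows the general shape of such a model using scipy.optimize.linprog; it is not the authors' multi-party formulation.

```python
# Toy LP: choose how many HIE look-ups to perform at each ED to maximize net
# savings subject to a budget (linprog minimizes, so the objective is negated).
import numpy as np
from scipy.optimize import linprog

savings_per_lookup = np.array([120.0, 80.0, 95.0])   # $ saved per look-up at 3 EDs (invented)
cost_per_lookup = np.array([5.0, 5.0, 5.0])          # $ charged per look-up (invented)
max_lookups = np.array([1000, 1500, 800])            # eligible encounters per ED (invented)
budget = 12000.0                                     # total look-up budget, $

res = linprog(
    c=-(savings_per_lookup - cost_per_lookup),       # maximize net gain
    A_ub=[cost_per_lookup], b_ub=[budget],
    bounds=list(zip([0, 0, 0], max_lookups)),
)
print("look-ups per ED:", res.x, "net gain: $", -res.fun)
```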

  15. Seasonal control skylight glazing panel with passive solar energy switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, J.V.

    1983-10-25

    A substantially transparent one-piece glazing panel is provided for generally horizontal mounting in a skylight. The panel is comprised of an repeated pattern of two alternating and contiguous linear optical elements; a first optical element being an upstanding generally right-triangular linear prism, and the second optical element being an upward-facing plano-cylindrical lens in which the planar surface is reflectively opaque and is generally in the same plane as the base of the triangular prism.

  16. Matrix-Free Polynomial-Based Nonlinear Least Squares Optimized Preconditioning and its Application to Discontinuous Galerkin Discretizations of the Euler Equations

    DTIC Science & Technology

    2015-06-01

    …efficient parallel code for applying the operator. Our method constructs a polynomial preconditioner using a nonlinear least squares (NLLS) algorithm. We show … apply the underlying operator. Such a preconditioner can be very attractive in scenarios where one has a highly efficient parallel code for applying … repeatedly solve a large system of linear equations where one has an extremely fast parallel code for applying an underlying fixed linear operator …

  17. Repeatability of dose painting by numbers treatment planning in prostate cancer radiotherapy based on multiparametric magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    van Schie, Marcel A.; Steenbergen, Peter; Viet Dinh, Cuong; Ghobadi, Ghazaleh; van Houdt, Petra J.; Pos, Floris J.; Heijmink, Stijn W. T. J. P.; van der Poel, Henk G.; Renisch, Steffen; Vik, Torbjørn; van der Heide, Uulke A.

    2017-07-01

    Dose painting by numbers (DPBN) refers to a voxel-wise prescription of radiation dose modelled from functional image characteristics, in contrast to dose painting by contours which requires delineations to define the target for dose escalation. The direct relation between functional imaging characteristics and DPBN implies that random variations in images may propagate into the dose distribution. The stability of MR-only prostate cancer treatment planning based on DPBN with respect to these variations is as yet unknown. We conducted a test-retest study to investigate the stability of DPBN for prostate cancer in a semi-automated MR-only treatment planning workflow. Twelve patients received a multiparametric MRI on two separate days prior to prostatectomy. The tumor probability (TP) within the prostate was derived from image features with a logistic regression model. Dose mapping functions were applied to acquire a DPBN prescription map that served to generate an intensity modulated radiation therapy (IMRT) treatment plan. Dose calculations were done on a pseudo-CT derived from the MRI. The TP and DPBN map and the IMRT dose distribution were compared between both MRI sessions, using the intraclass correlation coefficient (ICC) to quantify repeatability of the planning pipeline. The quality of each treatment plan was measured with a quality factor (QF). Median ICC values for the TP and DPBN map and the IMRT dose distribution were 0.82, 0.82 and 0.88, respectively, for linear dose mapping and 0.82, 0.84 and 0.94 for square root dose mapping. A median QF of 3.4% was found among all treatment plans. We demonstrated the stability of DPBN radiotherapy treatment planning in prostate cancer, with excellent overall repeatability and acceptable treatment plan quality. Using validated tumor probability modelling and simple dose mapping techniques, it was shown that consistent treatment plans were obtained despite day-to-day variations in imaging data.
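
    For reference, a one-way random-effects ICC can be computed directly from paired test-retest maps. The sketch below uses synthetic voxel values and the ICC(1,1) form, which may differ from the exact ICC variant used in the study.

```python
# One-way random-effects ICC for test-retest data: rows are voxels, columns sessions.
import numpy as np

rng = np.random.default_rng(7)
true_map = rng.normal(size=5000)
maps = np.column_stack([true_map + rng.normal(0, 0.4, 5000) for _ in range(2)])

n, k = maps.shape
grand = maps.mean()
ms_between = k * np.sum((maps.mean(axis=1) - grand) ** 2) / (n - 1)
ms_within = np.sum((maps - maps.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1,1) = {icc:.2f}")
```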

  18. Isolating the cow-specific part of residual energy intake in lactating dairy cows using random regressions.

    PubMed

    Fischer, A; Friggens, N C; Berry, D P; Faverdin, P

    2018-07-01

    The ability to properly assess and accurately phenotype true differences in feed efficiency among dairy cows is key to the development of breeding programs for improving feed efficiency. The variability among individuals in feed efficiency is commonly characterised by the residual intake approach. Residual feed intake is represented by the residuals of a linear regression of intake on the corresponding quantities of the biological functions that consume (or release) energy. However, the residuals include both, model fitting and measurement errors as well as any variability in cow efficiency. The objective of this study was to isolate the individual animal variability in feed efficiency from the residual component. Two separate models were fitted, in one the standard residual energy intake (REI) was calculated as the residual of a multiple linear regression of lactation average net energy intake (NEI) on lactation average milk energy output, average metabolic BW, as well as lactation loss and gain of body condition score. In the other, a linear mixed model was used to simultaneously fit fixed linear regressions and random cow levels on the biological traits and intercept using fortnight repeated measures for the variables. This method split the predicted NEI in two parts: one quantifying the population mean intercept and coefficients, and one quantifying cow-specific deviations in the intercept and coefficients. The cow-specific part of predicted NEI was assumed to isolate true differences in feed efficiency among cows. NEI and associated energy expenditure phenotypes were available for the first 17 fortnights of lactation from 119 Holstein cows; all fed a constant energy-rich diet. Mixed models fitting cow-specific intercept and coefficients to different combinations of the aforementioned energy expenditure traits, calculated on a fortnightly basis, were compared. The variance of REI estimated with the lactation average model represented only 8% of the variance of measured NEI. Among all compared mixed models, the variance of the cow-specific part of predicted NEI represented between 53% and 59% of the variance of REI estimated from the lactation average model or between 4% and 5% of the variance of measured NEI. The remaining 41% to 47% of the variance of REI estimated with the lactation average model may therefore reflect model fitting errors or measurement errors. In conclusion, the use of a mixed model framework with cow-specific random regressions seems to be a promising method to isolate the cow-specific component of REI in dairy cows.
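
    A reduced analogue of the random-regression step can be written with statsmodels: one energy sink, a random intercept and slope per cow, and synthetic fortnightly records. The real analysis fits several energy expenditure traits simultaneously; this is only an outline of the model form.

```python
# Population-level fixed regression plus cow-specific random intercept and slope.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
cows, fortnights = 40, 17
df = pd.DataFrame({
    "cow": np.repeat(np.arange(cows), fortnights),
    "milk_energy": rng.normal(100, 15, cows * fortnights),   # MJ/day, synthetic
})
cow_eff = rng.normal(0, 5, cows)[df["cow"]]                  # cow-specific intake deviation
df["nei"] = 60 + 1.1 * df["milk_energy"] + cow_eff + rng.normal(0, 8, len(df))

model = smf.mixedlm("nei ~ milk_energy", df, groups=df["cow"],
                    re_formula="~milk_energy")               # random intercept + slope
fit = model.fit()
print(fit.summary())
```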

  19. Sensitivity Gains, Linearity, and Spectral Reproducibility in Nonuniformly Sampled Multidimensional MAS NMR Spectra of High Dynamic Range.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suiter, Christopher L.; Paramasivam, Sivakumar; Hou, Guangjin

    Recently, we have demonstrated that considerable inherent sensitivity gains are attained in MAS NMR spectra acquired by nonuniform sampling (NUS) and introduced maximum entropy interpolation (MINT) processing that assures the linearity of transformation between the time and frequency domains. In this report, we examine the utility of the NUS/MINT approach in multidimensional datasets possessing high dynamic range, such as homonuclear 13C–13C correlation spectra. We demonstrate on model compounds and on 1–73-(U-13C,15N)/74–108-(U-15N) E. coli thioredoxin reassembly, that with appropriately constructed 50 % NUS schedules inherent sensitivity gains of 1.7–2.1-fold are readily reached in such datasets. We show that both linearity and line width are retained under these experimental conditions throughout the entire dynamic range of the signals. Furthermore, we demonstrate that the reproducibility of the peak intensities is excellent in the NUS/MINT approach when experiments are repeated multiple times and identical experimental and processing conditions are employed. Finally, we discuss the principles for design and implementation of random exponentially biased NUS sampling schedules for homonuclear 13C–13C MAS correlation experiments that yield high quality artifact-free datasets.

  20. Sensitivity gains, linearity, and spectral reproducibility in nonuniformly sampled multidimensional MAS NMR spectra of high dynamic range

    PubMed Central

    Suiter, Christopher L.; Paramasivam, Sivakumar; Hou, Guangjin; Sun, Shangjin; Rice, David; Hoch, Jeffrey C.; Rovnyak, David

    2014-01-01

    Recently, we have demonstrated that considerable inherent sensitivity gains are attained in MAS NMR spectra acquired by nonuniform sampling (NUS) and introduced maximum entropy interpolation (MINT) processing that assures the linearity of transformation between the time and frequency domains. In this report, we examine the utility of the NUS/MINT approach in multidimensional datasets possessing high dynamic range, such as homonuclear 13C–13C correlation spectra. We demonstrate on model compounds and on 1–73-(U-13C, 15N)/74–108-(U-15N) E. coli thioredoxin reassembly, that with appropriately constructed 50 % NUS schedules inherent sensitivity gains of 1.7–2.1-fold are readily reached in such datasets. We show that both linearity and line width are retained under these experimental conditions throughout the entire dynamic range of the signals. Furthermore, we demonstrate that the reproducibility of the peak intensities is excellent in the NUS/MINT approach when experiments are repeated multiple times and identical experimental and processing conditions are employed. Finally, we discuss the principles for design and implementation of random exponentially biased NUS sampling schedules for homonuclear 13C–13C MAS correlation experiments that yield high-quality artifact-free datasets. PMID:24752819

  1. Non-Linear Dynamics of Saturn's Rings

    NASA Astrophysics Data System (ADS)

    Esposito, L. W.

    2016-12-01

    Non-linear processes can explain why Saturn's rings are so active and dynamic. Ring systems differ from simple linear systems in two significant ways: 1. They are systems of granular material, where particle-to-particle collisions dominate; thus a kinetic, not a fluid, description is needed. Stresses are strikingly inhomogeneous and fluctuations are large compared to equilibrium. 2. They are strongly forced by resonances, which drive a non-linear response that pushes the system across thresholds leading to persistent states. Some of this non-linearity is captured in a simple Predator-Prey Model: periodic forcing from the moon causes streamline crowding, which damps the relative velocity. About a quarter phase later, the aggregates stir the system to higher relative velocity and the limit cycle repeats each orbit, with relative velocity ranging from nearly zero to a multiple of the orbit average. Summary of Halo Results: A predator-prey model for ring dynamics produces transient structures like `straw' that can explain the halo morphology and spectroscopy: cyclic velocity changes cause perturbed regions to reach higher collision speeds at some orbital phases, which preferentially removes small regolith particles; surrounding particles diffuse back too slowly to erase the effect, which gives the halo morphology; this requires energetic collisions (v ≈ 10 m/sec, with throw distances of about 200 km, implying objects of scale R ≈ 20 km). Transform to Duffing Equation: With the coordinate transformation z = M^(2/3), the Predator-Prey equations can be combined to form a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: Moon-triggered clumping explains both small and large particles at resonances. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain. Approximating it as an asymmetric random walk with reflecting boundaries determines the power law index, using results of numerical simulations in the tidal environment. Aggregates can explain many dynamic aspects of the rings and can renew rings by shielding and recycling the material within them, depending on how long the mass is sequestered. We can ask: Are Saturn's rings a chaotic non-linear driven system?
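
    The limit-cycle behaviour described here can be illustrated with a generic periodically forced predator-prey system. The sketch below integrates a textbook Lotka-Volterra model with a small sinusoidal forcing term standing in for the resonant moon forcing; the equations and parameters are illustrative stand-ins, not the authors' ring equations.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative periodically forced predator-prey system (textbook Lotka-Volterra
        # plus a forcing term); NOT the paper's exact ring equations or parameters.
        a, b, c, d = 1.0, 0.5, 0.5, 1.0      # growth/interaction rates (arbitrary)
        eps, omega = 0.2, 2.0 * np.pi        # forcing amplitude and frequency (per "orbit")

        def rhs(t, y):
            M, V = y                          # M: aggregate mass proxy, V: velocity dispersion
            dM = a * M - b * M * V + eps * np.cos(omega * t)
            dV = c * M * V - d * V
            return [dM, dV]

        sol = solve_ivp(rhs, (0.0, 40.0), [1.0, 1.0], max_step=0.01)
        print("V ranges between", sol.y[1].min().round(2), "and", sol.y[1].max().round(2))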

  2. Non-Linear Dynamics of Saturn’s Rings

    NASA Astrophysics Data System (ADS)

    Esposito, Larry W.

    2015-11-01

    Non-linear processes can explain why Saturn’s rings are so active and dynamic. Ring systems differ from simple linear systems in two significant ways: 1. They are systems of granular material, where particle-to-particle collisions dominate; thus a kinetic, not a fluid, description is needed. We find that stresses are strikingly inhomogeneous and fluctuations are large compared to equilibrium. 2. They are strongly forced by resonances, which drive a non-linear response, pushing the system across thresholds that lead to persistent states. Some of this non-linearity is captured in a simple Predator-Prey Model: periodic forcing from the moon causes streamline crowding, which damps the relative velocity and allows aggregates to grow. About a quarter phase later, the aggregates stir the system to higher relative velocity and the limit cycle repeats each orbit. Summary of Halo Results: A predator-prey model for ring dynamics produces transient structures like ‘straw’ that can explain the halo structure and spectroscopy; this requires energetic collisions (v ≈ 10 m/sec, with throw distances of about 200 km, implying objects of scale R ≈ 20 km). Transform to Duffing Equation: With the coordinate transformation z = M^(2/3), the Predator-Prey equations can be combined to form a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: Moon-triggered clumping at perturbed regions in Saturn’s rings creates both high velocity dispersion and large aggregates at these distances, explaining both the small and large particles observed there. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain. Approximating the Markov chain as an asymmetric random walk with reflecting boundaries allows us to determine the power law index from results of numerical simulations in the tidal environment surrounding Saturn. Aggregates can explain many dynamic aspects of the rings and can renew rings by shielding and recycling the material within them, depending on how long the mass is sequestered. We can ask: Are Saturn’s rings a chaotic non-linear driven system?

  3. Developmental changes rather than repeated administration drive paracetamol glucuronidation in neonates and infants.

    PubMed

    Krekels, Elke H J; van Ham, Saskia; Allegaert, Karel; de Hoon, Jan; Tibboel, Dick; Danhof, Meindert; Knibbe, Catherijne A J

    2015-09-01

    Based on recovered metabolite ratios in urine, it has been concluded that paracetamol glucuronidation may be up-regulated upon multiple dosing. This study investigates paracetamol clearance in neonates and infants after single and multiple dosing using a population modelling approach. A population pharmacokinetic model was developed in NONMEM VI, based on paracetamol plasma concentrations from 54 preterm and term neonates and infants, and on paracetamol, paracetamol-glucuronide and paracetamol-sulphate amounts in urine from 22 of these patients. Patients received either a single intravenous propacetamol dose or up to 12 repeated doses. Paracetamol and metabolite disposition was best described with one-compartment models. The formation clearance of paracetamol-sulphate was 1.46 mL/min/kg(1.4), which was about 5.5 times higher than the formation clearance of the glucuronide of 0.266 mL/min/kg. The renal excretion rate constants of both metabolites were estimated to be 11.4 times higher than the excretion rate constant of unchanged paracetamol, yielding values of 0.580 mL/min/kg. Developmental changes were best described by bodyweight in linear relationships on the distribution volumes, the formation of paracetamol-glucuronide and the unchanged excretion of paracetamol, and in an exponential relationship on the formation of paracetamol-sulphate. There was no evidence for up-regulation or other time-varying changes in any of the model parameters. Simulations with this model illustrate how paracetamol-glucuronide recovery in urine increases over time due to the slower formation of this metabolite and in the absence of up-regulation. Developmental changes, described by bodyweight-based functions, rather than up-regulation, explain developmental changes in paracetamol disposition in neonates and infants.
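
    The parent-plus-metabolites structure described above can be sketched as a small linear ODE system. The snippet below simulates a one-compartment parent with two metabolite formation clearances and renal excretion of parent and metabolites, then reports cumulative urinary recovery; the clearance values loosely echo the numbers quoted in the abstract, while the dose, body weight, volumes and the attribution of the 0.580 mL/min/kg value to the metabolites are simplifying assumptions for illustration only.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Minimal sketch of parent drug + two metabolites, each with one-compartment
        # disposition and renal excretion. Clearances loosely echo the abstract (mL/min/kg);
        # dose, volumes and body weight are invented for illustration.
        wt = 3.0                                   # kg, hypothetical neonate
        clf_g, clf_s = 0.266 * wt, 1.46 * wt       # formation clearances, mL/min
        clr_m = 0.580 * wt                         # assumed renal clearance of each metabolite
        clr_p = clr_m / 11.4                       # renal clearance of unchanged parent
        v_p = v_g = v_s = 1000.0 * wt              # distribution volumes, mL (invented)
        dose = 10.0 * wt                           # mg (invented)

        def rhs(t, y):
            ap, ag, as_, up, ug, us = y            # amounts in plasma and cumulative urine
            cp, cg, cs = ap / v_p, ag / v_g, as_ / v_s
            dap = -(clf_g + clf_s + clr_p) * cp
            dag = clf_g * cp - clr_m * cg
            das = clf_s * cp - clr_m * cs
            return [dap, dag, das, clr_p * cp, clr_m * cg, clr_m * cs]

        sol = solve_ivp(rhs, (0, 48 * 60.0), [dose, 0, 0, 0, 0, 0], max_step=1.0)
        recovered = sol.y[3:, -1]                  # urinary parent, glucuronide, sulphate
        frac = recovered / recovered.sum()
        print("fractions of recovered amount (parent, glucuronide, sulphate):", frac.round(2))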

  4. Using a dynamical advection to reconstruct a part of the SSH evolution in the context of SWOT, application to the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Rogé, Marine; Morrow, Rosemary; Ubelmann, Clément; Dibarboure, Gérald

    2017-08-01

    The main oceanographic objective of the future SWOT mission is to better characterize the ocean mesoscale and sub-mesoscale circulation, by observing a finer range of ocean topography dynamics down to 20 km wavelength. Despite the very high spatial resolution of the future satellite, it will not capture the time evolution of the shorter mesoscale signals, such as the formation and evolution of small eddies. SWOT will have an exact repeat cycle of 21 days, with near repeats around 5-10 days, depending on the latitude. Here, we investigate a technique to reconstruct the missing 2D SSH signal in the time between two satellite revisits. We use the dynamical interpolation (DI) technique developed by Ubelmann et al. (2015). Based on potential vorticity (hereafter PV) conservation using a one and a half layer quasi-geostrophic model, it features an active advection of the SSH field. This model has been tested in energetic open ocean regions such as the Gulf Stream and the Californian Current, and has given promising results. Here, we test this model in the Western Mediterranean Sea, a lower energy region with complex small scale physics, and compare the SSH reconstruction with the high-resolution Symphonie model. We investigate an extension of the simple dynamical model including a separated mean circulation. We find that the DI gives a 16-18% improvement in the reconstruction of the surface height and eddy kinetic energy fields, compared with a simple linear interpolation, and a 37% improvement in the Northern Current subregion. Reconstruction errors are higher during winter and autumn but statistically, the improvement from the DI is also better for these seasons.

  5. A Novel Marker Based Method to Teeth Alignment in MRI

    NASA Astrophysics Data System (ADS)

    Luukinen, Jean-Marc; Aalto, Daniel; Malinen, Jarmo; Niikuni, Naoko; Saunavaara, Jani; Jääsaari, Päivi; Ojalammi, Antti; Parkkola, Riitta; Soukka, Tero; Happonen, Risto-Pekka

    2018-04-01

    Magnetic resonance imaging (MRI) can precisely capture the anatomy of the vocal tract. However, the crowns of teeth are not visible in standard MRI scans. In this study, a marker-based teeth alignment method is presented and evaluated. Ten patients undergoing orthognathic surgery were enrolled. Supraglottal airways were imaged preoperatively using structural MRI. MRI-visible markers were developed, and they were attached to maxillary teeth and corresponding locations on the dental casts. Repeated measurements of intermarker distances in MRI and in a replica model were compared using linear regression analysis. Dental cast MRI and corresponding caliper measurements did not differ significantly. In contrast, the marker locations in vivo differed somewhat from the dental cast measurements, likely due to marker placement inaccuracies. The markers were clearly visible in MRI and allowed for dental models to be aligned to head and neck MRI scans.

  6. An open-population hierarchical distance sampling model

    USGS Publications Warehouse

    Sollmann, Rahel; Gardner, Beth; Chandler, Richard B.; Royle, J. Andrew; Sillett, T. Scott

    2015-01-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data-generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.
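
    The data-generating step of such a simulation study can be sketched compactly: Markovian abundance with a fixed population rate of change, and half-normal distance-sampling detections at each survey point. The values and function below are illustrative assumptions, not the authors' simulation settings or their hierarchical estimation model.

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate(n_points=100, n_surveys=6, lam0=8.0, trend=0.95,
                     sigma_det=40.0, width=100.0):
            """Generate abundance and distance-sampling detections for repeated surveys.
            Abundance is Markovian: N[t] ~ Poisson(trend * N[t-1]); detection follows a
            half-normal function of distance (all values are illustrative assumptions)."""
            n = np.zeros((n_points, n_surveys), dtype=int)
            n[:, 0] = rng.poisson(lam0, n_points)
            for t in range(1, n_surveys):
                n[:, t] = rng.poisson(trend * n[:, t - 1])
            detections = []
            for j in range(n_points):
                for t in range(n_surveys):
                    d = rng.uniform(0, width, n[j, t])           # true distances
                    p = np.exp(-d ** 2 / (2 * sigma_det ** 2))   # half-normal detection prob.
                    detections.append(d[rng.uniform(size=d.size) < p])
            return n, detections

        n, dets = simulate()
        print("mean abundance per survey:", n.mean(axis=0).round(2))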

  7. An open-population hierarchical distance sampling model.

    PubMed

    Sollmann, Rahel; Gardner, Beth; Chandler, Richard B; Royle, J Andrew; Sillett, T Scott

    2015-02-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for Island Scrub-Jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.

  8. Is it sufficient to repeat LINEAR accelerator stereotactic radiosurgery in choroidal melanoma?

    PubMed

    Furdova, A; Horkovicova, K; Justusova, P; Sramka, M

    One-day-session LINAC-based stereotactic radiosurgery (SRS) at a LINAC accelerator is a "conservative" approach to treating intraocular malignant uveal melanoma. We used a Clinac 600 C/D Varian (Aria system, Corvus planning system version 6.2, IMRT verification with OmniPro) with 6 MeV X-rays and rigid immobilization of the eye to the Leibinger frame. The stereotactic treatment planning after fusion of CT and MRI was optimized according to the critical structures (lens, optic nerve, the contralateral lens and optic nerve, and chiasm). The plans were compared and the best plan was applied for therapy at the LINAC accelerator. The planned therapeutic dose was 35.0 Gy at 99% of the DVH (dose-volume histogram). In our clinical study of 125 patients with posterior uveal melanoma treated with SRS, repeated SRS was indicated in 2 patients (1.6%). Patient age in the whole group ranged from 25 to 81 years with a median of 54 years; TD was 35.0 Gy. In 2 patients, at a 5-year interval after stereotactic radiosurgery for stage T1 uveal melanoma, the tumor volume had increased to 50% of the primary tumor volume and repeated SRS was necessary. To detect changes in melanoma characteristics over a long interval after irradiation, the patient must be followed up regularly by an ophthalmologist. One-step LINAC-based stereotactic radiosurgery with a single dose of 35.0 Gy is one treatment option for stage T1 to T3 posterior uveal melanoma that preserves the eye globe. In some cases it is possible to repeat the SRS after an interval of more than 5 years (Fig. 8, Ref. 23).

  9. Preoperative implant selection for unilateral breast reconstruction using 3D imaging with the Microsoft Kinect sensor.

    PubMed

    Pöhlmann, Stefanie T L; Harkness, Elaine; Taylor, Christopher J; Gandhi, Ashu; Astley, Susan M

    2017-08-01

    This study aimed to investigate whether breast volume measured preoperatively using a Kinect 3D sensor could be used to determine the most appropriate implant size for reconstruction. Ten patients underwent 3D imaging before and after unilateral implant-based reconstruction. Imaging used seven configurations, varying patient pose and Kinect location, which were compared regarding suitability for volume measurement. Four methods of defining the breast boundary for automated volume calculation were compared, and repeatability assessed over five repetitions. The most repeatable breast boundary annotation used an ellipse to track the inframammary fold and a plane describing the chest wall (coefficient of repeatability: 70 ml). The most reproducible imaging position comparing pre- and postoperative volume measurement of the healthy breast was achieved for the sitting patient with elevated arms and Kinect centrally positioned (coefficient of repeatability: 141 ml). Optimal implant volume was calculated by correcting used implant volume by the observed postoperative asymmetry. It was possible to predict implant size using a linear model derived from preoperative volume measurement of the healthy breast (coefficient of determination R(2) = 0.78, standard error of prediction 120 ml). Mastectomy specimen weight and experienced surgeons' choice showed similar predictive ability (both: R(2) = 0.74, standard error: 141/142 ml). A leave-one-out validation showed that in 61% of cases, 3D imaging could predict implant volume to within 10%; however, for 17% of cases the error was >30%. This technology has the potential to facilitate reconstruction surgery planning and implant procurement to maximise symmetry after unilateral reconstruction. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
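
    A leave-one-out evaluation of a one-predictor linear model, as described above, can be written in a few lines. The sketch below uses invented volumes and scikit-learn; it mirrors the idea of predicting implant volume from the preoperative healthy-breast volume, not the study's actual data or pipeline.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut

        # Hypothetical data: preoperative healthy-breast volume (ml) vs. "optimal" implant
        # volume (ml); numbers are invented for illustration only.
        breast_vol = np.array([250, 310, 420, 380, 290, 510, 460, 330, 275, 390], float)
        implant_vol = np.array([230, 300, 410, 350, 280, 480, 450, 320, 260, 370], float)
        X = breast_vol.reshape(-1, 1)

        errors = []
        for train, test in LeaveOneOut().split(X):
            model = LinearRegression().fit(X[train], implant_vol[train])
            pred = model.predict(X[test])[0]
            errors.append(100 * abs(pred - implant_vol[test][0]) / implant_vol[test][0])

        errors = np.array(errors)
        print("share of cases predicted within 10% of actual volume:", np.mean(errors <= 10).round(2))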

  10. Laboratory validation of a new gas-enhanced dentine liquid permeation evaluation system.

    PubMed

    Al-Jadaa, Anas; Attin, Thomas; Peltomäki, Timo; Heumann, Christian; Schmidlin, Patrick R

    2014-12-01

    To validate a new automated dentine permeability testing platform based on pressure change measurements. A split chamber was designed allowing for concomitant measurement of fluid permeation and pressure difference. In a first test, system reliability was assessed by interposing a solid metal disk, embedded composite resin disks, or teeth and measuring each consecutively eight times under standardized conditions. Secondly, the repeatability and applicability of the method were tested in a dentine wound model using intact third molars: a Class I (2 × 5 mm) and a full occlusal preparation, as well as a ceramic restoration, were consecutively performed and each repeatedly measured eight times. In the last test, the system detection limit as well as the correlation between gas pressure difference and liquid permeation were evaluated: again, third molars were used and occlusal preparations of increasing size (2 × 5, 3 × 5, 4 × 5, and 5 × 5 mm and full occlusal preparations, respectively) were made. Data were analyzed for linearity of measurement, and R(2) values were calculated. The embedding procedure allowed for perfect separation of the two chambers, and no significant variation was found in repeated measurements of the evaluated samples for the respective treatments (p = 0.05). The detection limit was 0.002 hPa/min for the pressure slope and 0.0225 μl/min for the fluid infiltration, respectively. The saline volume correlated highly with the gas pressure changes (R(2) = 0.996, p < 0.0001). The presented method is a reliable and exact tool to assess dentine permeability by nondestructive and repeatable measurements. This method is suitable for measuring and comparing the effectiveness of dentine wound sealing materials.

  11. Combined effects of repeated oral hygiene motivation and type of toothbrush on orthodontic patients: a blind randomized clinical trial.

    PubMed

    Marini, Ida; Bortolotti, Francesco; Parenti, Serena Incerti; Gatto, Maria Rosaria; Bonetti, Giulio Alessandri

    2014-09-01

    To investigate the effects on plaque index (PI) scores of a manual or electric toothbrush, with or without repeated oral hygiene instructions (OHI) and motivation, in patients wearing fixed orthodontic appliances. One month after orthodontic fixed appliances were bonded on both arches, 60 patients were randomly assigned to four groups; groups E1 (n = 15) and E2 (n = 15) received a powered rotating-oscillating toothbrush, and groups M1 (n = 15) and M2 (n = 15) received a manual toothbrush. Groups E1 and M1 received OHI and motivation at baseline (T0) and after 4, 8, 12, 16, and 20 weeks (T4, T8, T12, T16, and T20, respectively) from a Registered Dental Hygienist; groups E2 and M2 received OHI and motivation only at baseline. At each time point a blinded examiner scored plaque on all teeth using the modified Quigley-Hein PI. In all groups the PI score decreased significantly over time, and there were differences among groups at T8, T12, T16, and T20. At T8, PI scores of group E1 were lower than those of group E2, and at T12, T16, and T20, PI scores of groups M1 and E1 were lower than those of groups M2 and E2. A linear mixed model showed that the effect of repeated OHI and motivation over time was statistically significant, independently of the use of a manual or electric toothbrush. The present results show that repeated OHI and motivation are crucial in reducing PI scores in orthodontic patients, independent of the type of toothbrush used.

  12. Oxidative Stress Measures of Lipid and DNA Damage in Human Tears.

    PubMed

    Haworth, Kristina M; Chandler, Heather L

    2017-05-01

    We evaluated the feasibility and repeatability of measures of lipid peroxidation and DNA oxidation in human tears, as well as relationships between outcome variables, and compared our findings to previously reported methods of evaluating ocular sun exposure. A total of 50 volunteers were seen for 2 visits 14 ± 2 days apart. Tear samples were collected from the inferior tear meniscus using a glass microcapillary tube. Oxidative stress biomarkers were quantified using enzyme-linked immunosorbent assay (ELISA): lipid peroxidation by measurement of hexanoyl-lysine (HEL) expression; DNA oxidation by measurement of 8-oxo-2'-deoxyguanosine (8OHdG) expression. Descriptive statistics were generated. Repeatability was assessed using Bland-Altman plots; mean differences and 95% limits of agreement were calculated. Linear regression was conducted to evaluate relationships between measures. Mean (±SD) values for tear HEL and 8OHdG expression were 17368.02 (±9878.42) nmol/L and 66.13 (±19.99) ng/mL, respectively. Repeatability was found to be acceptable for both HEL and 8OHdG expression. Univariate linear regression supported tear 8OHdG expression and spring season of collection as predictors of higher tear HEL expression; tear HEL expression was confirmed as a predictor of higher tear 8OHdG expression. We demonstrate the feasibility and repeatability of estimating previously unreported tear 8OHdG expression. Seasonal temperature variation and other factors may influence tear lipid peroxidation. These findings suggest that lipid damage and DNA damage occur concurrently on the human ocular surface.
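
    Bland-Altman repeatability statistics of the kind reported here reduce to the mean within-subject difference and its 95% limits of agreement. The sketch below computes them for two simulated visits; the data and the HEL-like scale are invented for illustration.

        import numpy as np

        def bland_altman(visit1, visit2):
            """Mean difference and 95% limits of agreement between two repeated measures."""
            diff = np.asarray(visit1) - np.asarray(visit2)
            bias = diff.mean()
            loa = 1.96 * diff.std(ddof=1)
            return bias, bias - loa, bias + loa

        # Hypothetical tear HEL concentrations (nmol/L) at two visits, for illustration only.
        rng = np.random.default_rng(7)
        true_level = rng.normal(17000, 9000, 50).clip(min=1000)
        v1 = true_level + rng.normal(0, 2500, 50)
        v2 = true_level + rng.normal(0, 2500, 50)
        bias, lo, hi = bland_altman(v1, v2)
        print(f"bias {bias:.0f} nmol/L, 95% limits of agreement [{lo:.0f}, {hi:.0f}]")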

  13. Functional Angucycline-Like Antibiotic Gene Cluster in the Terminal Inverted Repeats of the Streptomyces ambofaciens Linear Chromosome

    PubMed Central

    Pang, Xiuhua; Aigle, Bertrand; Girardet, Jean-Michel; Mangenot, Sophie; Pernodet, Jean-Luc; Decaris, Bernard; Leblond, Pierre

    2004-01-01

    Streptomyces ambofaciens has an 8-Mb linear chromosome ending in 200-kb terminal inverted repeats. Analysis of the F6 cosmid overlapping the terminal inverted repeats revealed a locus similar to type II polyketide synthase (PKS) gene clusters. Sequence analysis identified 26 open reading frames, including genes encoding the β-ketoacyl synthase (KS), chain length factor (CLF), and acyl carrier protein (ACP) that make up the minimal PKS. These KS, CLF, and ACP subunits are highly homologous to minimal PKS subunits involved in the biosynthesis of angucycline antibiotics. The genes encoding the KS and ACP subunits are transcribed constitutively but show a remarkable increase in expression after entering transition phase. Five genes, including those encoding the minimal PKS, were replaced by resistance markers to generate single and double mutants (replacement in one and both terminal inverted repeats). Double mutants were unable to produce either diffusible orange pigment or antibacterial activity against Bacillus subtilis. Single mutants showed an intermediate phenotype, suggesting that each copy of the cluster was functional. Transformation of double mutants with a conjugative and integrative form of F6 partially restored both phenotypes. The pigmented and antibacterial compounds were shown to be two distinct molecules produced from the same biosynthetic pathway. High-pressure liquid chromatography analysis of culture extracts from wild-type and double mutants revealed a peak with an associated bioactivity that was absent from the mutants. Two additional genes encoding KS and CLF were present in the cluster. However, disruption of the second KS gene had no effect on either pigment or antibiotic production. PMID:14742212

  14. Application of Fuzzy-Logic Controller and Neural Networks Controller in Gas Turbine Speed Control and Overheating Control and Surge Control on Transient Performance

    NASA Astrophysics Data System (ADS)

    Torghabeh, A. A.; Tousi, A. M.

    2007-08-01

    This paper presents a Fuzzy Logic and Neural Networks approach to gas turbine fuel schedules. Modeling of a non-linear system using feed-forward artificial Neural Networks, with data generated by a simulated gas turbine program, is introduced. Two artificial Neural Networks are used, depicting the non-linear relationships between gas generator speed and fuel flow, and between turbine inlet temperature and fuel flow, respectively. Fast off-line simulations are used for engine controller design for a turbojet engine based on repeated simulation. The Mamdani and Sugeno models are used to express the Fuzzy system. The linguistic Fuzzy rules and membership functions are presented, and a Fuzzy controller is proposed to provide open-loop control for the gas turbine engine during acceleration and deceleration. MATLAB Simulink was used to apply the Fuzzy Logic and Neural Networks analysis. Both systems were able to approximate the functions characterizing the acceleration and deceleration schedules. Surge and flame-out avoidance during the acceleration and deceleration phases are then checked. Turbine inlet temperature is also checked and controlled by the Neural Networks controller. The output results of the Fuzzy Logic and Neural Network controllers are validated and evaluated with GSP software. The validation results are used to evaluate the generalization ability of these artificial Neural Networks and Fuzzy Logic controllers.

  15. Quasi-linear diffusion coefficients for highly oblique whistler mode waves

    NASA Astrophysics Data System (ADS)

    Albert, J. M.

    2017-05-01

    Quasi-linear diffusion coefficients are considered for highly oblique whistler mode waves, which exhibit a singular "resonance cone" in cold plasma theory. The refractive index becomes both very large and rapidly varying as a function of wave parameters, making the diffusion coefficients difficult to calculate and to characterize. Since such waves have been repeatedly observed both outside and inside the plasmasphere, this problem has received renewed attention. Here the diffusion equations are analytically treated in the limit of large refractive index μ. It is shown that a common approximation to the refractive index allows the associated "normalization integral" to be evaluated in closed form and that this can be exploited in the numerical evaluation of the exact expression. The overall diffusion coefficient formulas for large μ are then reduced to a very simple form, and the remaining integral and sum over resonances are approximated analytically. These formulas are typically written for a modeled distribution of wave magnetic field intensity, but this may not be appropriate for highly oblique whistlers, which become quasi-electrostatic. Thus, the analysis is also presented in terms of wave electric field intensity. The final results depend strongly on the maximum μ (or μ∥) used to model the wave distribution, so realistic determination of these limiting values becomes paramount.

  16. Linear Combinations of Multiple Outcome Measures to Improve the Power of Efficacy Analysis ---Application to Clinical Trials on Early Stage Alzheimer Disease

    PubMed Central

    Xiong, Chengjie; Luo, Jingqin; Morris, John C; Bateman, Randall

    2018-01-01

    Modern clinical trials on Alzheimer disease (AD) focus on the early symptomatic stage or even the preclinical stage. Subtle disease progression at the early stages, however, poses a major challenge in designing such clinical trials. We propose a multivariate mixed model on repeated measures to model the disease progression over time on multiple efficacy outcomes, and derive the optimum weights to combine multiple outcome measures by minimizing the sample sizes needed to adequately power the clinical trials. A cross-validation simulation study is conducted to assess the accuracy of the estimated weights as well as the improvement in reducing the sample sizes for such trials. The proposed methodology is applied to the multiple cognitive tests from the ongoing observational study of the Dominantly Inherited Alzheimer Network (DIAN) to power future clinical trials in the DIAN with a cognitive endpoint. Our results show that the optimum weights to combine multiple outcome measures can be accurately estimated, and that compared to the individual outcomes, the combined efficacy outcome with these weights significantly reduces the sample size required to adequately power clinical trials. When applied to the clinical trial in the DIAN, the estimated linear combination of six cognitive tests can adequately power the clinical trial. PMID:29546251
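
    One standard way to obtain such optimum weights: if the vector of mean treatment effects on the outcomes is δ with covariance Σ, weights w ∝ Σ⁻¹δ maximize the standardized effect δ'w/√(w'Σw) of the combined endpoint and therefore minimize the required sample size at fixed power. The sketch below illustrates this generic result with invented δ and Σ; it is not necessarily the exact derivation used by the authors.

        import numpy as np

        # Weights proportional to inv(Sigma) @ delta maximize the standardized effect of
        # the combined endpoint. delta and sigma below are invented for illustration.
        delta = np.array([0.20, 0.15, 0.10])               # mean treatment effects per outcome
        sigma = np.array([[1.0, 0.5, 0.3],
                          [0.5, 1.0, 0.4],
                          [0.3, 0.4, 1.0]])                # covariance of the outcomes

        w = np.linalg.solve(sigma, delta)
        w /= w.sum()                                       # scale is arbitrary; normalize

        def standardized_effect(weights):
            return weights @ delta / np.sqrt(weights @ sigma @ weights)

        print("optimal weights:", w.round(3))
        print("combined effect:", standardized_effect(w).round(3),
              "vs equal weights:", standardized_effect(np.ones(3) / 3).round(3))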

  17. Feature-based Alignment of Volumetric Multi-modal Images

    PubMed Central

    Toews, Matthew; Zöllei, Lilla; Wells, William M.

    2014-01-01

    This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to the poor repeatability of feature detection across image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm that iteratively alternates between estimating a feature-based model from feature data and realigning the feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955

  18. Method validation for control determination of mercury in fresh fish and shrimp samples by solid sampling thermal decomposition/amalgamation atomic absorption spectrometry.

    PubMed

    Torres, Daiane Placido; Martins-Teixeira, Maristela Braga; Cadore, Solange; Queiroz, Helena Müller

    2015-01-01

    A method for the determination of total mercury in fresh fish and shrimp samples by solid sampling thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS) has been validated following international foodstuff protocols in order to fulfill the Brazilian National Residue Control Plan. The experimental parameters have been previously studied and optimized according to specific legislation on validation and inorganic contaminants in foodstuff. Linearity, sensitivity, specificity, detection and quantification limits, precision (repeatability and within-laboratory reproducibility), robustness as well as accuracy of the method have been evaluated. Linearity of response was satisfactory for the two concentration ranges available on the TDA AAS equipment, between approximately 25.0 and 200.0 μg kg(-1) (square regression) and 250.0 and 2000.0 μg kg(-1) (linear regression) of mercury. The residues for both ranges were homoscedastic and independent, with normal distribution. Correlation coefficients obtained for these ranges were higher than 0.995. Limits of quantification (LOQ) and of detection of the method (LDM), based on signal standard deviation (SD) for a low-in-mercury sample, were 3.0 and 1.0 μg kg(-1), respectively. Repeatability of the method was better than 4%. Within-laboratory reproducibility achieved a relative SD better than 6%. Robustness of the current method was evaluated and pointed to sample mass as a significant factor. Accuracy (assessed as the analyte recovery) was calculated on the basis of the repeatability, and ranged from 89% to 99%. The obtained results showed the suitability of the present method for direct mercury measurement in fresh fish and shrimp samples and the importance of monitoring the analysis conditions for food control purposes. Additionally, the competence of this method was recognized by accreditation under the standard ISO/IEC 17025.

  19. In vitro validation of a new respiratory ultrasonic plethysmograph.

    PubMed

    Schramel, Johannes; van den Hoven, René; Moens, Yves

    2012-07-01

    This study describes the in vitro validation of a novel Respiratory Ultrasonic Plethysmography (RUP) system designed to detect circumference changes of rib cage and abdominal compartments in large and small animals. Experimental in vitro study. The experimental system includes two compliant fluid-filled rubber tubes functioning as ultrasonic waveguides. Each has an ultrasonic transmitter and a detector at the opposing ends. Sensor length can be individually adapted in the range of 0.15-2 m. Data are downloaded to a computer at a sampling rate of 10 or 100 Hz. Measurements have a resolution of 0.3 mm. Baseline stability, linearity and repeatability were investigated with dedicated experiments. The baseline drift was tested by measuring a fixed distance continuously for 2 hours and then again 18 hours later. A hand-operated horse thorax dummy (elliptically shaped, circumference 1.73 m) was used to compare waveforms of RUP with a respiratory inductive plethysmograph (RIP). Electromagnetic interference was tested by approaching metallic objects. Baseline drift and repeatability (10 repeated steps of 1.6% and 6.6% elongations and contractions) were within ± 0.3 mm. The response of the system to tube stretching up to 11% of total length was linear, with a coefficient of determination of 0.998. In contrast to RIP, electromagnetic interference was not observed with RUP. The low baseline drift and the lack of electromagnetic interference favour the use of RUP over an RIP device when studying breathing patterns and end-expiratory lung volume changes in conscious and anaesthetized animals. © 2012 The Authors. Veterinary Anaesthesia and Analgesia. © 2012 Association of Veterinary Anaesthetists and the American College of Veterinary Anesthesiologists.

  20. [Computer optical topography: a study of the repeatability of the results of human body model examination].

    PubMed

    Sarnadskiĭ, V N

    2007-01-01

    The problem of repeatability of the results of examination of a plastic human body model is considered. The model was examined in 7 positions using an optical topograph for kyphosis diagnosis. The examination was performed under television camera monitoring. It was shown that variation of the model position in the camera view affected the repeatability of the results of topographic examination, especially if the model-to-camera distance was changed. A study of the repeatability of the results of optical topographic examination can help to increase the reliability of the topographic method, which is widely used for medical screening of children and adolescents.

  1. A predictive model for early mortality after surgical treatment of heart valve or prosthesis infective endocarditis. The EndoSCORE.

    PubMed

    Di Mauro, Michele; Dato, Guglielmo Mario Actis; Barili, Fabio; Gelsomino, Sandro; Santè, Pasquale; Corte, Alessandro Della; Carrozza, Antonio; Ratta, Ester Della; Cugola, Diego; Galletti, Lorenzo; Devotini, Roger; Casabona, Riccardo; Santini, Francesco; Salsano, Antonio; Scrofani, Roberto; Antona, Carlo; Botta, Luca; Russo, Claudio; Mancuso, Samuel; Rinaldi, Mauro; De Vincentiis, Carlo; Biondi, Andrea; Beghi, Cesare; Cappabianca, Giangiuseppe; Tarzia, Vincenzo; Gerosa, Gino; De Bonis, Michele; Pozzoli, Alberto; Nicolini, Francesco; Benassi, Filippo; Rosato, Francesco; Grasso, Elena; Livi, Ugolino; Sponga, Sandro; Pacini, Davide; Di Bartolomeo, Roberto; De Martino, Andrea; Bortolotti, Uberto; Onorati, Francesco; Faggian, Giuseppe; Lorusso, Roberto; Vizzardi, Enrico; Di Giammarco, Gabriele; Marinelli, Daniele; Villa, Emmanuel; Troise, Giovanni; Picichè, Marco; Musumeci, Francesco; Paparella, Domenico; Margari, Vito; Tritto, Francesco; Damiani, Girolamo; Scrascia, Giuseppe; Zaccaria, Salvatore; Renzulli, Attilio; Serraino, Giuseppe; Mariscalco, Giovanni; Maselli, Daniele; Foschi, Massimiliano; Parolari, Alessandro; Nappi, Giannantonio

    2017-08-15

    The aim of this large retrospective study was to provide a logistic risk model along with an additive score to predict early mortality after surgical treatment of patients with heart valve or prosthesis infective endocarditis (IE). From 2000 to 2015, 2715 patients with native valve endocarditis (NVE) or prosthetic valve endocarditis (PVE) were operated on in 26 Italian Cardiac Surgery Centers. The relationship between early mortality and covariates was evaluated with logistic mixed effect models. Fixed effects are parameters associated with the entire population or with certain repeatable levels of experimental factors, while random effects are associated with individual experimental units (centers). Early mortality was 11.0% (298/2715). At mixed effect logistic regression, the following variables were found to be associated with early mortality: age class, female gender, LVEF, preoperative shock, COPD, creatinine value above 2 mg/dl, presence of abscess, number of treated valves/prostheses (with respect to one treated valve/prosthesis), and the isolation of Staphylococcus aureus, Fungus spp., Pseudomonas aeruginosa and other micro-organisms, whereas Streptococcus spp., Enterococcus spp. and other Staphylococci did not affect early mortality, nor did the absence of micro-organism isolation. LVEF was found to be linearly associated with outcome, while a non-linear association between mortality and age was tested and the best model was obtained with a categorization into four classes (AUC = 0.851). This study provides a logistic risk model to predict early mortality in patients with heart valve or prosthesis infective endocarditis undergoing surgical treatment, called "The EndoSCORE". Copyright © 2017. Published by Elsevier B.V.

  2. A Combined Pharmacokinetic and Radiologic Assessment of Dynamic Contrast-Enhanced Magnetic Resonance Imaging Predicts Response to Chemoradiation in Locally Advanced Cervical Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Semple, Scott; Harry, Vanessa N. MRCOG.; Parkin, David E.

    2009-10-01

    Purpose: To investigate the combination of pharmacokinetic and radiologic assessment of dynamic contrast-enhanced magnetic resonance imaging (MRI) as an early response indicator in women receiving chemoradiation for advanced cervical cancer. Methods and Materials: Twenty women with locally advanced cervical cancer were included in a prospective cohort study. Dynamic contrast-enhanced MRI was carried out before chemoradiation, after 2 weeks of therapy, and at the conclusion of therapy using a 1.5-T MRI scanner. Radiologic assessment of uptake parameters was obtained from resultant intensity curves. Pharmacokinetic analysis using a multicompartment model was also performed. General linear modeling was used to combine radiologic and pharmacokinetic parameters and correlated with eventual response as determined by change in MRI tumor size and conventional clinical response. A subgroup of 11 women underwent repeat pretherapy MRI to test pharmacokinetic reproducibility. Results: Pretherapy radiologic parameters and pharmacokinetic K(trans) correlated with response (p < 0.01). General linear modeling demonstrated that a combination of radiologic and pharmacokinetic assessments before therapy was able to predict more than 88% of variance of response. Reproducibility of pharmacokinetic modeling was confirmed. Conclusions: A combination of radiologic assessment with pharmacokinetic modeling applied to dynamic MRI before the start of chemoradiation improves the predictive power of either by more than 20%. The potential improvements in therapy response prediction using this type of combined analysis of dynamic contrast-enhanced MRI may aid in the development of more individualized, effective therapy regimens for this patient group.

  3. Chemometric brand differentiation of commercial spices using direct analysis in real time mass spectrometry.

    PubMed

    Pavlovich, Matthew J; Dunn, Emily E; Hall, Adam B

    2016-05-15

    Commercial spices represent an emerging class of fuels for improvised explosives. Being able to classify such spices not only by type but also by brand would represent an important step in developing methods to analytically investigate these explosive compositions. Therefore, a combined ambient mass spectrometric/chemometric approach was developed to quickly and accurately classify commercial spices by brand. Direct analysis in real time mass spectrometry (DART-MS) was used to generate mass spectra for samples of black pepper, cayenne pepper, and turmeric, along with four different brands of cinnamon, all dissolved in methanol. Unsupervised learning techniques showed that the cinnamon samples clustered according to brand. Then, we used supervised machine learning algorithms to build chemometric models with a known training set and classified the brands of an unknown testing set of cinnamon samples. Ten independent runs of five-fold cross-validation showed that the training set error for the best-performing models (i.e., the linear discriminant and neural network models) was lower than 2%. The false-positive percentages for these models were 3% or lower, and the false-negative percentages were lower than 10%. In particular, the linear discriminant model perfectly classified the testing set with 0% error. Repeated iterations of training and testing gave similar results, demonstrating the reproducibility of these models. Chemometric models were able to classify the DART mass spectra of commercial cinnamon samples according to brand, with high specificity and low classification error. This method could easily be generalized to other classes of spices, and it could be applied to authenticating questioned commercial samples of spices or to examining evidence from improvised explosives. Copyright © 2016 John Wiley & Sons, Ltd.
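
    The classification workflow described above (train a linear discriminant model, then repeat five-fold cross-validation ten times) can be sketched with scikit-learn. The feature matrix below is simulated to stand in for binned DART-MS spectra; the dimensions, labels and noise level are assumptions for illustration only.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        # Hypothetical feature matrix: rows are DART-MS spectra binned to 50 m/z features,
        # labels are four cinnamon brands. Data are simulated for illustration only.
        rng = np.random.default_rng(3)
        n_per_brand, n_features = 30, 50
        brand_means = rng.normal(0, 1, (4, n_features))
        X = np.vstack([m + rng.normal(0, 0.7, (n_per_brand, n_features)) for m in brand_means])
        y = np.repeat(np.arange(4), n_per_brand)

        # Ten independent runs of five-fold cross-validation, mirroring the study design.
        errors = []
        for run in range(10):
            cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=run)
            scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv)
            errors.append(1.0 - scores.mean())
        print("mean cross-validation error over 10 runs:", round(float(np.mean(errors)), 3))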

  4. Generalized cable equation model for myelinated nerve fiber.

    PubMed

    Einziger, Pinchas D; Livshitz, Leonid M; Mizrahi, Joseph

    2005-10-01

    Herein, the well-known cable equation for nonmyelinated axon model is extended analytically for myelinated axon formulation. The myelinated membrane conductivity is represented via the Fourier series expansion. The classical cable equation is thereby modified into a linear second order ordinary differential equation with periodic coefficients, known as Hill's equation. The general internal source response, expressed via repeated convolutions, uniformly converges provided that the entire periodic membrane is passive. The solution can be interpreted as an extended source response in an equivalent nonmyelinated axon (i.e., the response is governed by the classical cable equation). The extended source consists of the original source and a novel activation function, replacing the periodic membrane in the myelinated axon model. Hill's equation is explicitly integrated for the specific choice of piecewise constant membrane conductivity profile, thereby resulting in an explicit closed form expression for the transmembrane potential in terms of trigonometric functions. The Floquet's modes are recognized as the nerve fiber activation modes, which are conventionally associated with the nonlinear Hodgkin-Huxley formulation. They can also be incorporated in our linear model, provided that the periodic membrane point-wise passivity constraint is properly modified. Indeed, the modified condition, enforcing the periodic membrane passivity constraint on the average conductivity only leads, for the first time, to the inclusion of the nerve fiber activation modes in our novel model. The validity of the generalized transmission-line and cable equation models for a myelinated nerve fiber, is verified herein through a rigorous Green's function formulation and numerical simulations for transmembrane potential induced in three-dimensional myelinated cylindrical cell. It is shown that the dominant pole contribution of the exact modal expansion is the transmembrane potential solution of our generalized model.
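
    A compact way to state this step, in notation of my own choosing rather than the paper's: at steady state, the cable equation with axial resistance r_a per unit length, a spatially periodic membrane conductance g_m(x) (the Fourier-series representation of the myelinated membrane) and a source current i_s(x) reads

        \frac{d^{2}V}{dx^{2}} - r_a\, g_m(x)\, V(x) = -\, r_a\, i_s(x), \qquad g_m(x + L) = g_m(x),

    which is a linear second-order ordinary differential equation with periodic coefficients, i.e. Hill's equation, with the source current playing the role of the forcing term.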

  5. New methodology for mechanical characterization of human superficial facial tissue anisotropic behaviour in vivo.

    PubMed

    Then, C; Stassen, B; Depta, K; Silber, G

    2017-07-01

    Mechanical characterization of human superficial facial tissue has important applications in biomedical science, computer assisted forensics, graphics, and consumer goods development. Specifically, the latter may include facial hair removal devices. Predictive accuracy of numerical models and their ability to elucidate biomechanically relevant questions depends on the acquisition of experimental data and mechanical tissue behavior representation. Anisotropic viscoelastic behavioral characterization of human facial tissue, deformed in vivo with finite strain, however, is sparse. Employing an experimental-numerical approach, a procedure is presented to evaluate multidirectional tensile properties of superficial tissue layers of the face in vivo. Specifically, in addition to stress relaxation, displacement-controlled multi-step ramp-and-hold protocols were performed to separate elastic from inelastic properties. For numerical representation, an anisotropic hyperelastic material model in conjunction with a time domain linear viscoelasticity formulation with Prony series was employed. Model parameters were inversely derived, employing finite element models, using multi-criteria optimization. The methodology provides insight into mechanical superficial facial tissue properties. Experimental data shows pronounced anisotropy, especially with large strain. The stress relaxation rate does not depend on the loading direction, but is strain-dependent. Preconditioning eliminates equilibrium hysteresis effects and leads to stress-strain repeatability. In the preconditioned state tissue stiffness and hysteresis insensitivity to strain rate in the applied range is evident. The employed material model fits the nonlinear anisotropic elastic results and the viscoelasticity model reasonably reproduces time-dependent results. Inversely deduced maximum anisotropic long-term shear modulus of linear elasticity is G_∞,max^aniso = 2.43 kPa and instantaneous initial shear modulus at an applied rate of ramp loading is G_0,max^aniso = 15.38 kPa. Derived mechanical model parameters constitute a basis for complex skin interaction simulation. Copyright © 2017. Published by Elsevier Ltd.

  6. Longitudinal changes in bone lead levels: the VA Normative Aging Study

    PubMed Central

    Wilker, Elissa; Korrick, Susan; Nie, Linda H; Sparrow, David; Vokonas, Pantel; Coull, Brent; Wright, Robert O.; Schwartz, Joel; Hu, Howard

    2011-01-01

    Objective: Bone lead is a cumulative measure of lead exposure that can also be remobilized. We examined repeated measures of bone lead over 11 years to characterize long-term changes and identify predictors of tibia and patella lead stores in an elderly male population. Methods: Lead was measured every 3–5 years by K-x-ray fluorescence, and mixed-effect models with random effects were used to evaluate change over time. Results: 554 participants provided up to 4 bone lead measurements. Final models predicted a −1.4% annual decline (95% CI: −2.2, −0.7) for tibia lead, while a piecewise linear model for patella lead showed an initial decline of 5.1% per year (95% CI: −6.2, −3.9) during the first 4.6 years but no significant change thereafter (−0.4%; 95% CI: −2.4, 1.7). Conclusions: These results suggest that bone lead half-life may be longer than previously reported. PMID:21788910

  7. Dissipative N-point-vortex Models in the Plane

    NASA Astrophysics Data System (ADS)

    Shashikanth, Banavara N.

    2010-02-01

    A method is presented for constructing point vortex models in the plane that dissipate the Hamiltonian function at any prescribed rate and yet conserve the level sets of the invariants of the Hamiltonian model arising from the SE(2) symmetries. The method is purely geometric in that it uses the level sets of the Hamiltonian and the invariants to construct the dissipative field and is based on elementary classical geometry in ℝ³. Extension to higher-dimensional spaces, such as the point vortex phase space, is done using exterior algebra. The method is in fact general enough to apply to any smooth finite-dimensional system with conserved quantities, and, for certain special cases, the dissipative vector field constructed can be associated with an appropriately defined double Nambu-Poisson bracket. The most interesting feature of this method is that it allows for an infinite sequence of such dissipative vector fields to be constructed by repeated application of a symmetric linear operator (matrix) at each point of the intersection of the level sets.

  8. Reconstruction and Applications of Collective Storylines from Web Photo Collections

    DTIC Science & Technology

    2013-09-01

    a random surfer model as follows: α = min( [π_G(s*) q(s*, s_{t−1})] / [π_G(s_{t−1}) q(s_{t−1}, s*)], 1 ), where q(i, j) = λ w̃_{ij} + (1 − λ) π_G(j). (3.3) In Eq. (3.3), the … probability α in Eq. (3.3), where w̃_{ij} is the element (i, j) of G̃. We repeat this process until the desired number of training samples is selected. For … exponential of a linear summation of the functions f_j^l of the covariates x_j with a parameter vector θ_l = (θ_{l1}, …, θ_{lJ}): log λ_l(t_i | θ_l) = Σ_{j=1}^{J} θ_{lj} f_j^l

  9. CMC-modified cellulose biointerface for antibody conjugation.

    PubMed

    Orelma, Hannes; Teerinen, Tuija; Johansson, Leena-Sisko; Holappa, Susanna; Laine, Janne

    2012-04-09

    In this Article, we present a new strategy for preparing an antihemoglobin biointerface on cellulose. The preparation method is based on functionalization of the cellulose surface by the irreversible adsorption of CMC, followed by covalent linking of antibodies to CMC. This would provide the means for affordable and stable cellulose-based biointerfaces for immunoassays. The preparation and characterization of the biointerface were studied on Langmuir-Schaefer cellulose model surfaces in real time using the quartz crystal microbalance with dissipation and surface plasmon resonance techniques. The stable attachment of antihemoglobin to adsorbed CMC was achieved, and a linear calibration of hemoglobin was obtained. CMC modification was also observed to prevent nonspecific protein adsorption. The antihemoglobin-CMC surface regenerated well, enabling repeated immunodetection cycles of hemoglobin on the same surface.

  10. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    NASA Astrophysics Data System (ADS)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
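
    The pattern-generating (pricing) step of such a column-generation scheme is an unbounded knapsack problem: given the current dual price of each demanded item length, find the cutting pattern for one stock length that maximizes total dual value. The sketch below implements only that single step by dynamic programming, assuming integer lengths; it is not the full PSG algorithm, and the data are invented.

        def best_pattern(stock_len, item_lens, duals):
            """Unbounded-knapsack pricing step: maximize the total dual value of items cut
            from one stock object of length stock_len. Lengths are assumed integral."""
            best = [0.0] * (stock_len + 1)        # best[v]: max dual value with capacity v
            choice = [None] * (stock_len + 1)     # item chosen last to reach best[v]
            for cap in range(1, stock_len + 1):
                for i, (length, price) in enumerate(zip(item_lens, duals)):
                    if length <= cap and best[cap - length] + price > best[cap]:
                        best[cap] = best[cap - length] + price
                        choice[cap] = i
            # Reconstruct the pattern (number of pieces of each item length).
            pattern, cap = [0] * len(item_lens), stock_len
            while choice[cap] is not None:
                pattern[choice[cap]] += 1
                cap -= item_lens[choice[cap]]
            return pattern, best[stock_len]

        # Illustrative data: item lengths with current dual prices, one stock length.
        print(best_pattern(100, [45, 36, 31, 14], [0.45, 0.36, 0.31, 0.14]))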

  11. pSLA2-M of Streptomyces rochei is a composite linear plasmid characterized by self-defense genes and homology with pSLA2-L.

    PubMed

    Yang, Yingjie; Kurokawa, Toru; Takahama, Yoshifumi; Nindita, Yosi; Mochizuki, Susumu; Arakawa, Kenji; Endo, Satoru; Kinashi, Haruyasu

    2011-01-01

    The 113,463-bp nucleotide sequence of the linear plasmid pSLA2-M of Streptomyces rochei 7434AN4 was determined. pSLA2-M had a 69.7% overall GC content, 352-bp terminal inverted repeats with 91% (321/352) identity at both ends, and 121 open reading frames. The rightmost 14.6-kb sequence was almost (14,550/14,555) identical to that of the coexisting 211-kb linear plasmid pSLA2-L. Adjacent to this homologous region an 11.8-kb CRISPR cluster was identified, which is known to function against phage infection in prokaryotes. This cluster region as well as another one containing two large membrane protein genes (orf78 and orf79) were flanked by direct repeats of 194 and 566 bp respectively. Hence the insertion of circular DNAs containing each cluster by homologous recombination was suggested. In addition, the orf71 encoded a Ku70/Ku80-like protein, known to function in the repair of double-strand DNA breaks in eukaryotes, but disruption of it did not affect the radiation sensitivity of the mutant. A pair of replication initiation genes (orf1-orf2) were identified at the extreme left end. Thus, pSLA2-M proved to be a composite linear plasmid characterized by self-defense genes and homology with pSLA2-L that might have been generated by multiple recombination events.

  12. Implementation and validation of an improved allele specific stutter filtering method for electropherogram interpretation.

    PubMed

    Kalafut, Tim; Schuerman, Curt; Sutton, Joel; Faris, Tom; Armogida, Luigi; Bright, Jo-Anne; Buckleton, John; Taylor, Duncan

    2018-03-31

    Modern probabilistic genotyping (PG) software is capable of modeling stutter as part of the profile weighting statistic. This allows peaks in stutter positions to be considered as allelic, stutter, or both. However, prior to running any sample through a PG calculator, the examiner must first interpret the sample, considering such things as artifacts and the number of contributors (NOC or N). Stutter can play a major role both during the assignment of the number of contributors and during the assessment of inclusion and exclusion. If stutter peaks are not filtered when they should be, an additional contributor may be assigned, causing N contributors to be assigned as N + 1. If peaks in the stutter position of a major contributor are filtered using a threshold that is too high, true alleles of minor contributors can be lost. Until now, the stutter filters applied in the software used to view electropherograms have been based on a locus-specific model. Combined stutter peaks occur when a peak could be the result of both back stutter (stutter one repeat unit shorter than the allele) and forward stutter (stutter one repeat unit larger than the allele). This can challenge existing filters. We present here a novel stutter filter model in the ArmedXpert™ software package that uses a linear model based on allele for back stutter and applies an additive filter for combined stutter. We term this the allele specific stutter model (AM). We compared AM with a traditional model based on locus specific stutter filters (termed LM). The improved stutter model has two benefits: instances of over-filtering were reduced by 78%, from 101 for the traditional model (LM) to 22 for the allele specific model (AM), when the models were scored against each other; and instances of under-filtering were reduced by 80%, from 85 (LM) to 17 (AM), when scored against ground truth mixtures. Published by Elsevier B.V.
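
    The allele-specific idea can be illustrated with a small sketch: the expected back-stutter ratio grows linearly with the parent allele's repeat number, and a peak in a position that can receive both back and forward stutter is filtered against the sum of the two expected contributions. The slope, intercept and forward-stutter ratio below are hypothetical placeholders, not values from ArmedXpert or any validated kit.

```python
def expected_back_stutter_ratio(parent_allele, slope=0.012, intercept=-0.05):
    """Hypothetical allele-specific back-stutter ratio: longer alleles stutter more."""
    return max(0.0, slope * parent_allele + intercept)

def filter_stutter(peaks, forward_ratio=0.02):
    """Keep peaks whose height exceeds the additive back + forward stutter filter.

    peaks: dict mapping allele (repeat number) -> peak height in RFU.
    A peak at allele a can receive back stutter from a parent at a+1 and
    forward stutter from a parent at a-1; the filter is the sum of both.
    """
    kept = {}
    for a, h in peaks.items():
        back_parent = peaks.get(a + 1, 0.0)
        fwd_parent = peaks.get(a - 1, 0.0)
        threshold = (expected_back_stutter_ratio(a + 1) * back_parent
                     + forward_ratio * fwd_parent)
        if h > threshold:
            kept[a] = h
    return kept

peaks = {12: 150.0, 13: 2200.0, 14: 35.0}   # hypothetical single-locus peak heights
print(filter_stutter(peaks))                 # only the 2200 RFU parent at allele 13 survives
```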

  13. Determination of three steroidal saponins from Ophiopogon japonicus (Liliaceae) via high-performance liquid chromatography with mass spectrometry.

    PubMed

    Wang, Yongyi; Xu, Jinzhong; Qu, Haibin

    2013-01-01

    A simple and accurate analytical method was developed for the simultaneous quantification of three steroidal saponins in the roots of Ophiopogon japonicus via high-performance liquid chromatography (HPLC) with mass spectrometry (MS). Separation was performed on a Tigerkin C(18) column and detection was performed by mass spectrometry. A mobile phase consisting of 0.02% formic acid in water (v/v) and 0.02% formic acid in acetonitrile (v/v) was used at a flow rate of 0.5 mL min(-1). The quantitative HPLC-MS method was validated for linearity, precision, repeatability, stability, recovery, and limits of detection and quantification. The developed method provides good linearity (r >0.9993), intra- and inter-day precision (RSD <4.18%), repeatability (RSD <5.05%), stability (RSD <2.08%) and recovery (93.82-102.84%) for the three steroidal saponins. It could be considered a suitable quality control method for O. japonicus.

  14. Fitting ordinary differential equations to short time course data.

    PubMed

    Brewer, Daniel; Barenco, Martino; Callard, Robin; Hubank, Michael; Stark, Jaroslav

    2008-02-28

    Ordinary differential equations (ODEs) are widely used to model many systems in physics, chemistry, engineering and biology. Often one wants to compare such equations with observed time course data, and use this to estimate parameters. Surprisingly, practical algorithms for doing this are relatively poorly developed, particularly in comparison with the sophistication of numerical methods for solving both initial and boundary value problems for differential equations, and for locating and analysing bifurcations. A lack of good numerical fitting methods is particularly problematic in the context of systems biology where only a handful of time points may be available. In this paper, we present a survey of existing algorithms and describe the main approaches. We also introduce and evaluate a new efficient technique for estimating ODEs linear in parameters particularly suited to situations where noise levels are high and the number of data points is low. It employs a spline-based collocation scheme and alternates linear least squares minimization steps with repeated estimates of the noise-free values of the variables. This is reminiscent of expectation-maximization methods widely used for problems with nuisance parameters or missing data.
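
    For an ODE that is linear in its parameters, the collocation idea reduces to a spline smoothing step followed by ordinary linear least squares on the spline derivatives. The sketch below assumes a toy production-decay model dx/dt = a - b*x with invented data; the paper's additional alternation between least-squares steps and re-estimation of the noise-free state values is only noted in a comment.

```python
# Collocation-style fit of an ODE linear in its parameters (toy data and model).
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
a_true, b_true, x0 = 2.0, 0.5, 0.2
t = np.linspace(0, 6, 8)                              # short, sparse time course
x_true = a_true / b_true + (x0 - a_true / b_true) * np.exp(-b_true * t)
x_obs = x_true + rng.normal(0, 0.15, t.size)          # noisy observations

# 1) smooth the sparse, noisy data with a cubic smoothing spline
spl = UnivariateSpline(t, x_obs, k=3, s=0.05 * t.size)
tc = np.linspace(t[0], t[-1], 50)                     # collocation points
X, dX = spl(tc), spl.derivative()(tc)

# 2) the model dx/dt = a - b*x is linear in (a, b): one least-squares solve
A = np.column_stack([np.ones_like(X), -X])
(a_est, b_est), *_ = np.linalg.lstsq(A, dX, rcond=None)
print(f"a ≈ {a_est:.2f}, b ≈ {b_est:.2f} (true 2.0, 0.5)")
# The full method in the paper additionally alternates such linear LS steps with
# re-estimation of the noise-free state values; that refinement is omitted here.
```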

  15. Structural Analysis and Testing of an Erectable Truss for Precision Segmented Reflector Application

    NASA Technical Reports Server (NTRS)

    Collins, Timothy J.; Fichter, W. B.; Adams, Richard R.; Javeed, Mehzad

    1995-01-01

    This paper describes analysis and test results obtained at Langley Research Center (LaRC) on a doubly curved testbed support truss for precision reflector applications. Descriptions of test procedures and experimental results that expand upon previous investigations are presented. A brief description of the truss is given, and finite-element-analysis models are described. Static-load and vibration test procedures are discussed, and experimental results are shown to be repeatable and in generally good agreement with linear finite-element predictions. Truss structural performance (as determined by static deflection and vibration testing) is shown to be predictable and very close to linear. Vibration test results presented herein confirm that an anomalous mode observed during initial testing was due to the flexibility of the truss support system. Photogrammetric surveys with two 131-in. reference scales show that the root-mean-square (rms) truss-surface accuracy is about 0.0025 in. Photogrammetric measurements also indicate that the truss coefficient of thermal expansion (CTE) is in good agreement with that predicted by analysis. A detailed description of the photogrammetric procedures is included as an appendix.

  16. No increase in small-solute transport in peritoneal dialysis patients treated without hypertonic glucose for fifty-four months.

    PubMed

    Pagniez, Dominique; Duhamel, Alain; Boulanger, Eric; Lessore de Sainte Foy, Celia; Beuscart, Jean-Baptiste

    2017-08-31

    Glucose is widely used as an osmotic agent in peritoneal dialysis (PD), but exerts untoward effects on the peritoneum. The potential protective effect of a reduced exposure to hypertonic glucose has never been investigated. The cohort of PD patients attending our center, which adopted a restricted use of hypertonic glucose solutions, has been prospectively followed since 1992. Small-solute transport was assessed using an equivalent of the glucose peritoneal equilibration test after 6 months, and then every year. The study was stopped on July 1st, 2008, before the use of biocompatible solutions. Repeated measures in patients treated with PD for 54 months were analyzed by using (1) the slopes of the linear regression of D4/D0 ratios over time, computed for each individual, and (2) a linear mixed model. In the study period, 44 patients were treated for a total of 2376 months, 2058 without hypertonic glucose. There was one episode of peritoneal infection every 18 patient-months. The mean of the slopes of the individual linear regressions of D4/D0 ratios was significantly positive (Student's test, p < .001), and the mixed model showed a similar significant increase of D4/D0 ratios over time. These results reflect a significant decrease in small-solute transport. In this large series, minimizing the use of hypertonic glucose solutions in patients on long-term PD was associated with an overall decrease of small-solute transport within 54 months, despite a high rate of peritoneal infection.
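
    The two analyses described above (individual regression slopes tested with Student's test, and a linear mixed model) can be sketched as follows on simulated D4/D0 data; the patient number, effect sizes and noise are invented, and statsmodels' MixedLM with a random intercept per patient stands in for whatever mixed-model specification the authors used.

```python
# Hypothetical repeated D4/D0 data analyzed two ways (simulated, not study data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
times = np.arange(0.5, 5.0, 0.5)                     # years on PD with a PET-type test
rows = []
for pid in range(44):                                # same patient count as the study
    base = rng.normal(0.55, 0.05)                    # baseline D4/D0
    slope = rng.normal(0.01, 0.01)                   # slowly rising ratio (falling transport)
    for tt in times:
        rows.append((pid, tt, base + slope * tt + rng.normal(0, 0.03)))
df = pd.DataFrame(rows, columns=["patient", "years", "d4d0"])

# (1) slope of D4/D0 vs time per patient, then one-sample Student's t-test
slopes = np.array([np.polyfit(g["years"], g["d4d0"], 1)[0]
                   for _, g in df.groupby("patient")])
print("mean slope: %.4f" % slopes.mean(), stats.ttest_1samp(slopes, 0.0))

# (2) linear mixed model: fixed effect of time, random intercept per patient
mixed = smf.mixedlm("d4d0 ~ years", df, groups=df["patient"]).fit()
print(mixed.summary())
```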

  17. Variations in respiratory excretion of carbon dioxide can be used to calculate pulmonary blood flow.

    PubMed

    Preiss, David A; Azami, Takafumi; Urman, Richard D

    2015-02-01

    A non-invasive means of measuring pulmonary blood flow (PBF) would have numerous benefits in medicine. Traditionally, respiratory-based methods require breathing maneuvers, partial rebreathing, or foreign gas mixing because exhaled CO2 volume on a per-breath basis does not accurately represent alveolar exchange of CO2. We hypothesized that if the dilutional effect of the functional residual capacity was accounted for, the relationship between the calculated volume of CO2 removed per breath and the alveolar partial pressure of CO2 would be inversely linear. A computer model was developed that uses variable tidal breathing to calculate CO2 removal per breath at the level of the alveoli. We iterated estimates of functional residual capacity to create the best linear fit of alveolar CO2 pressure and CO2 elimination for 10 minutes of breathing and incorporated the volume of CO2 elimination into the Fick equation to calculate PBF. The relationship between alveolar pressure of CO2 and CO2 elimination produced an R² = 0.83. The optimal functional residual capacity differed from the "actual" capacity by 0.25 L (8.3%). The repeatability coefficient leveled off at 0.09 at 10 breaths, and the difference between the PBF calculated by the model and the preset blood flow was 0.62 ± 0.53 L/minute. With variations in tidal breathing, a linear relationship exists between alveolar CO2 pressure and CO2 elimination. Existing technology may be used to calculate CO2 elimination during quiet breathing and might therefore be used to accurately calculate PBF in humans with healthy lungs.
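
    The FRC-search step can be illustrated with a toy mass balance: per-breath exhaled CO2 is corrected by an assumed FRC times the breath-to-breath change in alveolar PCO2, and the FRC that best linearizes the PCO2-versus-elimination relationship is retained. The barometric pressure, the simulated breath data, and the exact form of the correction term below are illustrative assumptions, not the authors' model.

```python
# Toy FRC search: pick the FRC that best linearizes alveolar PCO2 vs CO2 elimination.
import numpy as np

rng = np.random.default_rng(2)
PB = 713.0                                  # barometric minus water vapour pressure, mmHg (assumed)
frc_true, n = 3.0, 150                      # litres; roughly 10 min of breathing
paco2 = 40 + np.cumsum(rng.normal(0, 1.0, n))               # wandering alveolar PCO2 (mmHg)
vco2_alv = 0.023 - 0.0002 * paco2 + rng.normal(0, 1e-4, n)  # true linear relation (L/breath)
dpa = np.diff(paco2, prepend=paco2[0])
vco2_exh = vco2_alv - frc_true * dpa / PB   # CO2 stored in the FRC hides the relation

def r2_for(frc):
    """R^2 of a linear fit of FRC-corrected CO2 elimination against alveolar PCO2."""
    vco2_corr = vco2_exh + frc * dpa / PB
    slope, icept = np.polyfit(paco2, vco2_corr, 1)
    resid = vco2_corr - (slope * paco2 + icept)
    return 1.0 - resid.var() / vco2_corr.var()

frcs = np.arange(0.5, 6.0, 0.05)
best = frcs[np.argmax([r2_for(f) for f in frcs])]
print(f"best-fit FRC ≈ {best:.2f} L (true {frc_true} L)")
# The corrected per-breath elimination would then feed the Fick equation, together
# with arterial/venous CO2 content estimates, to yield pulmonary blood flow.
```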

  18. Adsorption of Poly(methyl methacrylate) on Concave Al2O3 Surfaces in Nanoporous Membranes

    PubMed Central

    Nunnery, Grady; Hershkovits, Eli; Tannenbaum, Allen; Tannenbaum, Rina

    2009-01-01

    The objective of this study was to determine the influence of polymer molecular weight and surface curvature on the adsorption of polymers onto concave surfaces. Poly(methyl methacrylate) (PMMA) of various molecular weights was adsorbed onto porous aluminum oxide membranes having various pore sizes, ranging from 32 to 220 nm. The surface coverage, expressed as repeat units per unit surface area, was observed to vary linearly with molecular weight for molecular weights below ~120 000 g/mol. The coverage was independent of molecular weight above this critical molar mass, as was previously reported for the adsorption of PMMA on convex surfaces. Furthermore, the coverage varied linearly with pore size. A theoretical model was developed to describe curvature-dependent adsorption by considering the density gradient that exists between the surface and the edge of the adsorption layer. According to this model, the density gradient of the adsorbed polymer segments scales inversely with particle size, while the total coverage scales linearly with particle size, in good agreement with experiment. These results show that the details of the adsorption of polymers onto concave surfaces with cylindrical geometries can be used to calculate molecular weight (below a critical molecular weight) if pore size is known. Conversely, pore size can also be determined with similar adsorption experiments. Most significantly, for polymers above a critical molecular weight, the precise molecular weight need not be known in order to determine pore size. Moreover, the adsorption model developed and validated in this work can also be used to predict coverage on surfaces with different geometries. PMID:19415910

  19. Verification of spectrophotometric method for nitrate analysis in water samples

    NASA Astrophysics Data System (ADS)

    Kurniawati, Puji; Gusrianti, Reny; Dwisiwi, Bledug Bernanti; Purbaningtias, Tri Esti; Wiyantoko, Bayu

    2017-12-01

    The aim of this research was to verify the spectrophotometric method for the analysis of nitrate in water samples using the APHA 2012 Section 4500 NO3-B method. The verification parameters used were: linearity, method detection limit, limit of quantitation, level of linearity, accuracy and precision. Linearity was assessed using 0 to 50 mg/L nitrate standard solutions, and the correlation coefficient of the standard calibration linear regression was 0.9981. The method detection limit (MDL) was 0.1294 mg/L and the limit of quantitation (LOQ) was 0.4117 mg/L. The level of linearity (LOL) was 50 mg/L, and nitrate concentrations from 10 to 50 mg/L were linear at a 99% level of confidence. The accuracy, determined as a recovery value, was 109.1907%. The precision, expressed as the relative standard deviation (%RSD) of repeatability, was 1.0886%. The tested performance criteria showed that the method was verified under the laboratory conditions.
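
    The verification quantities reported above can be reproduced in outline with standard formulas (calibration correlation coefficient, MDL from replicate low-level spikes, LOQ as ten standard deviations, recovery, and repeatability %RSD). The data below are made up, and the MDL/LOQ definitions follow common practice rather than the specific APHA worksheet.

```python
# Hedged sketch of routine method-verification statistics on invented nitrate data.
import numpy as np
from scipy import stats

conc = np.array([0, 5, 10, 20, 30, 40, 50])            # mg/L standards
absb = np.array([0.002, 0.110, 0.221, 0.438, 0.655, 0.870, 1.092])
slope, icept, r, *_ = stats.linregress(conc, absb)
print(f"calibration r = {r:.4f}")

low_reps = np.array([0.48, 0.52, 0.55, 0.47, 0.51, 0.50, 0.53])  # mg/L, 7 low-level spikes
s = low_reps.std(ddof=1)
mdl = stats.t.ppf(0.99, df=len(low_reps) - 1) * s       # 99% one-sided Student t times s
loq = 10 * s                                            # common 10*s convention
print(f"MDL = {mdl:.4f} mg/L, LOQ = {loq:.4f} mg/L")

spiked, unspiked, added = 20.9, 0.0, 20.0               # mg/L
print(f"recovery = {100 * (spiked - unspiked) / added:.1f}%")

repeat = np.array([19.8, 20.1, 19.9, 20.3, 20.0, 20.2])
print(f"repeatability %RSD = {100 * repeat.std(ddof=1) / repeat.mean():.2f}%")
```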

  20. Probabilistic risk assessment for CO2 storage in geological formations: robust design and support for decision making under uncertainty

    NASA Astrophysics Data System (ADS)

    Oladyshkin, Sergey; Class, Holger; Helmig, Rainer; Nowak, Wolfgang

    2010-05-01

    CO2 storage in geological formations is currently being discussed intensively as a technology for mitigating CO2 emissions. However, any large-scale application requires a thorough analysis of the potential risks. Current numerical simulation models are too expensive for probabilistic risk analysis and for stochastic approaches based on brute-force repeated simulation. Even single deterministic simulations may require parallel high-performance computing. The multiphase flow processes involved are too non-linear for quasi-linear error propagation and other simplified stochastic tools. As an alternative approach, we propose a massive stochastic model reduction based on the probabilistic collocation method. The model response is projected onto an orthogonal basis of higher-order polynomials to approximate dependence on uncertain parameters (porosity, permeability etc.) and design parameters (injection rate, depth etc.). This allows for a non-linear propagation of model uncertainty affecting the predicted risk, ensures fast computation and provides a powerful tool for combining design variables and uncertain variables into one approach based on an integrative response surface. Thus, the design task of finding optimal injection regimes explicitly includes uncertainty, which leads to robust designs of the non-linear system that minimize failure probability and provide valuable support for risk-informed management decisions. We validate our proposed stochastic approach by Monte Carlo simulation using a common 3D benchmark problem (Class et al. Computational Geosciences 13, 2009). A reasonable compromise between computational efforts and precision was already reached with second-order polynomials. In our case study, the proposed approach yields a significant computational speedup by a factor of 100 compared to Monte Carlo simulation. We demonstrate that, due to the non-linearity of the flow and transport processes during CO2 injection, including uncertainty in the analysis leads to a systematic and significant shift of predicted leakage rates towards higher values compared with deterministic simulations, affecting both risk estimates and the design of injection scenarios. This implies that neglecting uncertainty can be a strong simplification for modeling CO2 injection, and the consequences can be stronger than when neglecting several physical phenomena (e.g. phase transition, convective mixing, capillary forces etc.). The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart. Keywords: polynomial chaos; CO2 storage; multiphase flow; porous media; risk assessment; uncertainty; integrative response surfaces
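
    The projection onto an orthogonal polynomial basis can be sketched with a regression-based polynomial chaos expansion for a toy response of two standard-normal inputs; the "model" function, the number of collocation points and the second-order basis below are illustrative choices, not the authors' CO2 benchmark or collocation rule.

```python
# Second-order polynomial chaos surrogate for a toy two-parameter response.
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def model(x1, x2):
    """Toy nonlinear response standing in for an expensive multiphase simulation."""
    return np.exp(0.8 * x1) / (1.0 + x2 ** 2)

def pce_basis(x1, x2, order=2):
    """Probabilists' Hermite tensor basis up to total degree `order` for N(0,1) inputs."""
    cols = []
    for i in range(order + 1):
        for j in range(order + 1 - i):
            ci = np.zeros(i + 1); ci[i] = 1.0
            cj = np.zeros(j + 1); cj[j] = 1.0
            cols.append(hermeval(x1, ci) * hermeval(x2, cj))
    return np.column_stack(cols)

rng = np.random.default_rng(3)
x_train = rng.standard_normal((30, 2))                  # collocation points
y_train = model(x_train[:, 0], x_train[:, 1])           # "expensive" model runs
coef, *_ = np.linalg.lstsq(pce_basis(x_train[:, 0], x_train[:, 1]),
                           y_train, rcond=None)

x_test = rng.standard_normal((100_000, 2))              # cheap surrogate evaluations
y_pce = pce_basis(x_test[:, 0], x_test[:, 1]) @ coef
y_mc = model(x_test[:, 0], x_test[:, 1])                # brute-force Monte Carlo reference
print("mean:", y_pce.mean(), "vs MC", y_mc.mean())
print("std: ", y_pce.std(), "vs MC", y_mc.std())
```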

  1. Hollow-Structured Graphene-Silicone-Composite-Based Piezoresistive Sensors: Decoupled Property Tuning and Bending Reliability.

    PubMed

    Luo, Ningqi; Huang, Yan; Liu, Jing; Chen, Shih-Chi; Wong, Ching Ping; Zhao, Ni

    2017-10-01

    A versatile flexible piezoresistive sensor should maintain high sensitivity in a wide linear range, and provide a stable and repeatable pressure reading under bending. These properties are often difficult to achieve simultaneously with conventional filler-matrix composite active materials, as tuning of one material component often results in change of multiple sensor properties. Here, a material strategy is developed to realize a 3D graphene-poly(dimethylsiloxane) hollow structure, where the electrical conductivity and mechanical elasticity of the composite can be tuned separately by varying the graphene layer number and the poly(dimethylsiloxane) composition ratio, respectively. As a result, the sensor sensitivity and linear range can be easily improved through a decoupled tuning process, reaching a sensitivity of 15.9 kPa⁻¹ in a 60 kPa linear region, and the sensor also exhibits fast response (1.2 ms rising time) and high stability. Furthermore, by optimizing the density of the graphene percolation network and thickness of the composite, the stability and repeatability of the sensor output under bending are improved, achieving a measurement error below 6% under bending radius variations from -25 to +25 mm. Finally, the potential applications of these sensors in wearable medical devices and robotic vision are explored. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Analysis and discussion on the experimental data of electrolyte analyzer

    NASA Astrophysics Data System (ADS)

    Dong, XinYu; Jiang, JunJie; Liu, MengJun; Li, Weiwei

    2018-06-01

    In the subsequent verification of electrolyte analyzers, we found that the instruments can achieve good repeatability and stability over repeated measurements within a short period of time, in line with the requirements of the verification regulation for linear error and cross-contamination rate. However, large indication errors are very common, and the measurement results from different manufacturers differ considerably. In order to identify and solve this problem, to help enterprises improve product quality, and to obtain accurate and reliable measurement data, we conducted an experimental evaluation of electrolyte analyzers and analyzed the data statistically.

  3. Highly sensitive MicroRNA 146a detection using a gold nanoparticle-based CTG repeat probing system and isothermal amplification.

    PubMed

    Le, Binh Huy; Seo, Young Jun

    2018-01-25

    We have developed a gold nanoparticle (AuNP)-based CTG repeat probing system displaying high quenching capability and combined it with isothermal amplification for the detection of miRNA 146a. This method of using an AuNP-based CTG repeat probing system with isothermal amplification allowed the highly sensitive (14 aM) and selective detection of miRNA 146a. An AuNP-based CTG repeat probing system having a hairpin structure and a dT F fluorophore exhibited highly efficient quenching because the CTG repeat-based stable hairpin structure imposed a close distance between the AuNP and the dT F residue. A small amount of miRNA 146a induced multiple copies of the CAG repeat sequence during rolling circle amplification; the AuNP-based CTG repeat probing system then bound to the complementary multiple-copy CAG repeat sequence, thereby inducing a structural change from a hairpin to a linear structure with amplified fluorescence. This AuNP-based CTG probing system combined with isothermal amplification could also discriminate target miRNA 146a from one- and two-base-mismatched miRNAs (ORN 1 and ORN 2, respectively). This simple AuNP-based CTG probing system, combined with isothermal amplification to induce a highly sensitive change in fluorescence, allows the detection of miRNA 146a with high sensitivity (14 aM) and selectivity. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Perspectives for laboratory implementation of the Duan-Lukin-Cirac-Zoller protocol for quantum repeaters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendes, Milrian S.; Felinto, Daniel

    2011-12-15

    We analyze the efficiency and scalability of the Duan-Lukin-Cirac-Zoller (DLCZ) protocol for quantum repeaters focusing on the behavior of the experimentally accessible measures of entanglement for the system, taking into account crucial imperfections of the stored entangled states. We then calculate the degradation of the final state of the quantum-repeater linear chain for increasing sizes of the chain, and characterize it by a lower bound on its concurrence and the ability to violate the Clauser-Horne-Shimony-Holt inequality. The states are calculated up to an arbitrary number of stored excitations, as this number is not fundamentally bound for experiments involving large atomic ensembles. The measurement by avalanche photodetectors is modeled by "ON/OFF" positive operator-valued measure operators. As a result, we are able to consistently test the approximation of the real fields by fields with a finite number of excitations, determining the minimum number of excitations required to achieve a desired precision in the prediction of the various measured quantities. This analysis finally determines the minimum purity of the initial state that is required to succeed in the protocol as the size of the chain increases. We also provide a more accurate estimate for the average time required to succeed in each step of the protocol. The minimum purity analysis and the new time estimates are then combined to trace the perspectives for implementation of the DLCZ protocol in present-day laboratory setups.

  5. Perspectives for laboratory implementation of the Duan-Lukin-Cirac-Zoller protocol for quantum repeaters

    NASA Astrophysics Data System (ADS)

    Mendes, Milrian S.; Felinto, Daniel

    2011-12-01

    We analyze the efficiency and scalability of the Duan-Lukin-Cirac-Zoller (DLCZ) protocol for quantum repeaters focusing on the behavior of the experimentally accessible measures of entanglement for the system, taking into account crucial imperfections of the stored entangled states. We then calculate the degradation of the final state of the quantum-repeater linear chain for increasing sizes of the chain, and characterize it by a lower bound on its concurrence and the ability to violate the Clauser-Horne-Shimony-Holt inequality. The states are calculated up to an arbitrary number of stored excitations, as this number is not fundamentally bound for experiments involving large atomic ensembles. The measurement by avalanche photodetectors is modeled by “ON/OFF” positive operator-valued measure operators. As a result, we are able to consistently test the approximation of the real fields by fields with a finite number of excitations, determining the minimum number of excitations required to achieve a desired precision in the prediction of the various measured quantities. This analysis finally determines the minimum purity of the initial state that is required to succeed in the protocol as the size of the chain increases. We also provide a more accurate estimate for the average time required to succeed in each step of the protocol. The minimum purity analysis and the new time estimates are then combined to trace the perspectives for implementation of the DLCZ protocol in present-day laboratory setups.

  6. Seismological mechanism analysis of 2015 Luanxian swarm, Hebei province,China

    NASA Astrophysics Data System (ADS)

    Tan, Yipei; Liao, Xu; Ma, Hongsheng; Zhou, Longquan; Wang, Xingzhou

    2017-04-01

    The seismological mechanism of an earthquake swarm, a kind of seismic burst activity, refers to the physical and dynamic process that triggers the earthquakes in the swarm. Here we focus on the seismological mechanism of the 2015 Luanxian swarm in Hebei province, China. The processing of the digital seismic waveform data is divided into four steps. (1) Choose the three-component waveforms of earthquakes in the catalog as templates, and detect missing earthquakes by scanning the continuous waveforms with the matched filter technique. (2) Recalibrate P- and S-wave phase arrival times using the waveform cross-correlation phase detection technique to eliminate the artificial errors in phase picking in the observation report made by the Hebei seismic network, thereby obtaining a more complete catalog and a more precise seismic phase report. (3) Relocate the earthquakes in the swarm using hypoDD based on the recalibrated phase arrival times, and analyze the characteristics of swarm epicenter migration based on the relocation result. (4) Detect repeating earthquake activity using both a waveform cross-correlation criterion and the overlap of rupture areas. We finally detected 106 missing earthquakes in the swarm, 66 of them with magnitudes greater than ML 0.0, including 2 greater than ML 1.0. The relocation result shows that the epicenters of earthquakes in the swarm have a strip-like distribution in the NE-SW direction, which indicates that the seismogenic structure may be a NE-SW trending fault. The spatial-temporal distribution of epicenters in the swarm shows a two-stage linear migration pattern, in which the first stage has a higher migration velocity of 1.2 km per day and the second stage a velocity of 0.0024 km per day. Among the three basic models used to explain the seismological mechanism of earthquake swarms (the cascade model, the slow-slip model and the fluid-diffusion model), repeating earthquake activity is difficult to explain by stress triggering from previous earthquakes; however, it can be explained by continuing stress loading on the same asperity from fault slow slip. The linear migration is also more consistent with the slow-slip model than with the diffusion-equation-governed migration expected for fluid diffusion. Comparing the observed phenomena with these seismological mechanism models, we find that the Luanxian earthquake swarm may be associated with fault slow slip, which may play a role in triggering and sustaining the swarm activity.
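
    Step (1) of the workflow can be illustrated with a single-channel matched-filter scan: a template is slid along continuous data and detections are declared where the normalized cross-correlation exceeds a MAD-based threshold. The synthetic data, threshold factor and single-component setup below are simplifications of the multi-station, multi-component processing actually used.

```python
# Toy single-channel matched-filter detection on synthetic data.
import numpy as np

rng = np.random.default_rng(4)
fs = 100.0
template = np.sin(2 * np.pi * 5 * np.arange(0, 1, 1 / fs)) * np.hanning(100)
cont = rng.normal(0, 0.3, 60000)              # 10 minutes of noise at 100 Hz
for onset in (12000, 41000):                  # two hidden repeating events
    cont[onset:onset + 100] += 0.8 * template

def normalized_cc(data, tmpl):
    """Normalized cross-correlation of tmpl at every valid lag of data."""
    n = len(tmpl)
    t = (tmpl - tmpl.mean()) / (tmpl.std() * n)
    cc = np.correlate(data, t, mode="valid")
    csum = np.cumsum(np.insert(data, 0, 0.0))
    csum2 = np.cumsum(np.insert(data ** 2, 0, 0.0))
    win_mean = (csum[n:] - csum[:-n]) / n
    win_std = np.sqrt(np.maximum((csum2[n:] - csum2[:-n]) / n - win_mean ** 2, 1e-12))
    return cc / win_std

cc = normalized_cc(cont, template)
thresh = 8 * np.median(np.abs(cc - np.median(cc)))    # 8 * MAD threshold
detections = np.where(cc > thresh)[0]
print("detection samples:", detections[:10], "... threshold:", round(thresh, 3))
```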

  7. Early post-stroke cognition in stroke rehabilitation patients predicts functional outcome at 13 months.

    PubMed

    Wagle, Jørgen; Farner, Lasse; Flekkøy, Kjell; Bruun Wyller, Torgeir; Sandvik, Leiv; Fure, Brynjar; Stensrød, Brynhild; Engedal, Knut

    2011-01-01

    To identify prognostic factors associated with functional outcome at 13 months in a sample of stroke rehabilitation patients. Specifically, we hypothesized that cognitive functioning early after stroke would predict long-term functional outcome independently of other factors. 163 stroke rehabilitation patients underwent a structured neuropsychological examination 2-3 weeks after hospital admittance, and their functional status was subsequently evaluated 13 months later with the modified Rankin Scale (mRS) as outcome measure. Three predictive models were built using linear regression analyses: a biological model (sociodemographics, apolipoprotein E genotype, prestroke vascular factors, lesion characteristics and neurological stroke-related impairment); a functional model (pre- and early post-stroke cognitive functioning, personal and instrumental activities of daily living, ADL, and depressive symptoms), and a combined model (including significant variables, with p value <0.05, from the biological and functional models). A combined model of 4 variables best predicted long-term functional outcome with explained variance of 49%: neurological impairment (National Institute of Health Stroke Scale; β = 0.402, p < 0.001), age (β = 0.233, p = 0.001), post-stroke cognitive functioning (Repeatable Battery of Neuropsychological Status, RBANS; β = -0.248, p = 0.001) and prestroke personal ADL (Barthel Index; β = -0.217, p = 0.002). Further linear regression analyses of which RBANS indexes and subtests best predicted long-term functional outcome showed that Coding (β = -0.484, p < 0.001) and Figure Copy (β = -0.233, p = 0.002) raw scores at baseline explained 42% of the variance in mRS scores at follow-up. Early post-stroke cognitive functioning as measured by the RBANS is a significant and independent predictor of long-term functional post-stroke outcome. Copyright © 2011 S. Karger AG, Basel.

  8. The trend of changes in the evaluation scores of faculty members from administrators' and students' perspectives at the medical school over 10 years.

    PubMed

    Yamani, Nikoo; Changiz, Tahereh; Feizi, Awat; Kamali, Farahnaz

    2018-01-01

    To assess the trend of changes in the evaluation scores of faculty members and the discrepancy between administrators' and students' perspectives in a medical school from 2006 to 2015. This repeated cross-sectional study was conducted on the 10-year evaluation scores of all faculty members of a medical school (n=579) in an urban area of Iran. Data on evaluation scores given by students and administrators and the total of these scores were evaluated. Data were analyzed using descriptive and inferential statistics, including linear mixed effect models for repeated measures, via the SPSS software. There were statistically significant differences between the students' and administrators' perspectives over time (p < 0.001). The mean of the total evaluation scores also showed a statistically significant change over time (p < 0.001). Furthermore, the mean of changes over time in the total evaluation score between different departments was statistically significant (p < 0.001). The trend of changes in the students' evaluations was clear and positive, but the trend of the administrators' evaluations was unclear. Since the evaluation of faculty members is affected by many other factors, there is a need for more future studies.

  9. Nanostructured copper-coated solid-phase microextraction fiber for gas chromatographic analysis of dibutyl phthalate and diethylhexyl phthalate environmental estrogens.

    PubMed

    Feng, Juanjuan; Sun, Min; Bu, Yanan; Luo, Chuannan

    2015-01-01

    A novel nanostructured copper-based solid-phase microextraction fiber was developed and applied for determining the two most common types of phthalate environmental estrogens (dibutyl phthalate and diethylhexyl phthalate) in aqueous samples, coupled to gas chromatography with flame ionization detection. The copper film was coated onto a stainless-steel wire via an electroless plating process, which involved a surface activation process to improve the surface properties of the fiber. Several parameters affecting extraction efficiency such as extraction time, extraction temperature, ionic strength, desorption temperature, and desorption time were optimized by a factor-by-factor procedure to obtain the highest extraction efficiency. The as-established method showed wide linear ranges (0.05-250 μg/L). Precision of single fiber repeatability was <7.0%, and fiber-to-fiber repeatability was <10%. Limits of detection were 0.01 μg/L. The proposed method exhibited better or comparable extraction performance compared with commercial and other lab-made fibers, and excellent thermal stability and durability. The proposed method was applied successfully for the determination of model analytes in plastic soaking water. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Capillary gel electrophoresis for the quantification and purity determination of recombinant proteins in inclusion bodies.

    PubMed

    Espinosa-de la Garza, Carlos E; Perdomo-Abúndez, Francisco C; Campos-García, Víctor R; Pérez, Néstor O; Flores-Ortiz, Luis F; Medina-Rivero, Emilio

    2013-09-01

    In this work, a high-resolution CGE method for quantification and purity determination of recombinant proteins was developed, involving a single-component inclusion bodies (IBs) solubilization solution. Different recombinant proteins expressed as IBs were used to show method capabilities, using recombinant interferon-β 1b as the model protein for method validation. Method linearity was verified in the range from 0.05 to 0.40 mg/mL and a determination coefficient (r²) of 0.99 was obtained. The LOQs and LODs were 0.018 and 0.006 mg/mL, respectively. The RSD for the protein content repeatability test was 2.29%. In addition, the RSD for the protein purity repeatability test was 4.24%. Method accuracy was higher than 90%. Specificity was confirmed, as the method was able to separate recombinant interferon-β 1b monomer from other aggregates and impurities. Sample content and purity were demonstrated to be stable for up to 48 h. Overall, this method is suitable for the analysis of recombinant proteins in IBs according to the attributes established in the International Conference for Harmonization guidelines. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Repeatability Modeling for Wind-Tunnel Measurements: Results for Three Langley Facilities

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Houlden, Heather P.

    2014-01-01

    Data from extensive check standard tests of seven measurement processes in three NASA Langley Research Center wind tunnels are statistically analyzed to test a simple model previously presented in 2000 for characterizing short-term, within-test and across-test repeatability. The analysis is intended to support process improvement and development of uncertainty models for the measurements. The analysis suggests that the repeatability can be estimated adequately as a function of only the test section dynamic pressure over a two-orders-of-magnitude dynamic pressure range. As expected for low instrument loading, short-term coefficient repeatability is determined by the resolution of the instrument alone (air off). However, as previously pointed out, for the highest dynamic pressure range the coefficient repeatability appears to be independent of dynamic pressure, thus presenting a lower floor for the standard deviation for all three time frames. The simple repeatability model is shown to be adequate for all of the cases presented and for all three time frames.

  12. A mixed-effects model approach for the statistical analysis of vocal fold viscoelastic shear properties.

    PubMed

    Xu, Chet C; Chan, Roger W; Sun, Han; Zhan, Xiaowei

    2017-11-01

    A mixed-effects model approach was introduced in this study for the statistical analysis of rheological data of vocal fold tissues, in order to account for the data correlation caused by multiple measurements of each tissue sample across the test frequency range. Such data correlation had often been overlooked in previous studies in the past decades. The viscoelastic shear properties of the vocal fold lamina propria of two commonly used laryngeal research animal species (i.e. rabbit, porcine) were measured by a linear, controlled-strain simple-shear rheometer. Along with published canine and human rheological data, the vocal fold viscoelastic shear moduli of these animal species were compared to those of human over a frequency range of 1-250 Hz using the mixed-effects models. Our results indicated that tissues of the rabbit, canine and porcine vocal fold lamina propria were significantly stiffer and more viscous than those of human. Mixed-effects models were shown to be able to more accurately analyze rheological data generated from repeated measurements. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Model creation of moving redox reaction boundary in agarose gel electrophoresis by traditional potassium permanganate method.

    PubMed

    Xie, Hai-Yang; Liu, Qian; Li, Jia-Hao; Fan, Liu-Yin; Cao, Cheng-Xi

    2013-02-21

    A novel moving redox reaction boundary (MRRB) model was developed for studying the electrophoretic behaviors of analytes involving redox reactions, based on the principle of the moving reaction boundary (MRB). The traditional potassium permanganate method was used to create the boundary model in agarose gel electrophoresis because of the rapid reaction rate associated with MnO(4)(-) ions and Fe(2+) ions. An MRB velocity equation was proposed to describe the general functional relationship between the velocity of the moving redox reaction boundary (V(MRRB)) and the concentration of reactant, and it can be extrapolated to similar MRB techniques. Parameters affecting the redox reaction boundary were investigated in detail. Under the selected conditions, a good linear relationship between boundary movement distance and time was obtained. The potential application of the MRRB in electromigration redox reaction titration was demonstrated at two different concentration levels. The precision of the V(MRRB) was studied and the relative standard deviations were below 8.1%, illustrating the good repeatability achieved in this experiment. The proposed MRRB model enriches the MRB theory and also provides a feasible realization of manual control of the redox reaction process in electrophoretic analysis.

  14. Trajectories of Childhood Blood Pressure and Adult Left Ventricular Hypertrophy: The Bogalusa Heart Study.

    PubMed

    Zhang, Tao; Li, Shengxu; Bazzano, Lydia; He, Jiang; Whelton, Paul; Chen, Wei

    2018-07-01

    This longitudinal study aims to characterize longitudinal blood pressure (BP) trajectories from childhood and examine the impact of level-independent childhood BP trajectories on adult left ventricular hypertrophy (LVH) and remodeling patterns. The longitudinal cohort consisted of 1154 adults (787 whites and 367 blacks) who had repeated measurements of BP 4 to 15 times from childhood (4-19 years) to adulthood (20-51 years) and assessment of echocardiographic LV dimensions in adulthood. Model-estimated levels and linear slopes of BP at childhood age points were calculated in 1-year intervals using the growth curve parameters and their first derivatives, respectively. Linear and nonlinear curve parameters of BP showed significant race and sex differences from age 15 years onwards. Adults with LVH had higher long-term BP levels than adults with normal LVM in race-sex groups. Linear and nonlinear slope parameters of BP differed consistently and significantly between LVH and normal groups. Associations of level-independent linear slopes of systolic BP with adult LVH were significantly inverse (odds ratio=0.75-0.82; P = 0.001-0.015) in preadolescent children of 4 to 9 years but significantly positive (odds ratio=1.29-1.46; P = 0.001-0.008) in adolescents of 13 to 19 years, adjusting for covariates. These associations were consistent across race-sex groups. Of note, the association of childhood BP linear slopes with concentric LVH was significantly stronger than that with eccentric LVH during the adolescence period of 12 to 19 years. These observations indicate that the impact of BP trajectories on adult LVH and geometric patterns originates in childhood. Adolescence is a crucial period for the development of LVH in later life, which has implications for early prevention. © 2018 American Heart Association, Inc.
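
    The derivation of slopes from growth curves can be sketched by fitting an individual curve to each subject's serial blood pressure values and differentiating it at each age; the simulated data and per-subject cubic fits below are a simplification of the study's formal growth-curve modeling.

```python
# Toy growth-curve slopes: fit each child's serial SBP, differentiate by age.
import numpy as np

rng = np.random.default_rng(5)
ages = np.arange(4, 20)                              # ages 4-19 years

def child_sbp(a, level, drift):
    """Simulated serial systolic BP for one child (mmHg)."""
    return level + drift * (a - 4) + 0.02 * (a - 12) ** 2 + rng.normal(0, 3, a.size)

subjects = [child_sbp(ages, rng.normal(100, 8), rng.normal(1.0, 0.4))
            for _ in range(20)]

slopes = []
for sbp in subjects:
    coefs = np.polyfit(ages, sbp, deg=3)                 # individual growth curve
    slopes.append(np.polyval(np.polyder(coefs), ages))   # first derivative = BP slope by age

slopes = np.array(slopes)
print("mean SBP slope (mmHg/year) at each age 4-19:")
print(np.round(slopes.mean(axis=0), 2))
```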

  15. Modifications to toxic CUG RNAs induce structural stability, rescue mis-splicing in a myotonic dystrophy cell model and reduce toxicity in a myotonic dystrophy zebrafish model

    DOE PAGES

    deLorimier, Elaine; Coonrod, Leslie A.; Copperman, Jeremy; ...

    2014-10-10

    In this study, CUG repeat expansions in the 3' UTR of dystrophia myotonica protein kinase (DMPK) cause myotonic dystrophy type 1 (DM1). As RNA, these repeats elicit toxicity by sequestering splicing proteins, such as MBNL1, into protein–RNA aggregates. Structural studies demonstrate that CUG repeats can form A-form helices, suggesting that repeat secondary structure could be important in pathogenicity. To evaluate this hypothesis, we utilized the structure-stabilizing RNA modifications pseudouridine (Ψ) and 2'-O-methylation to determine if stabilization of CUG helical conformations affected toxicity. CUG repeats modified with Ψ or 2'-O-methyl groups exhibited enhanced structural stability and reduced affinity for MBNL1. Molecular dynamics and X-ray crystallography suggest a potential water-bridging mechanism for Ψ-mediated CUG repeat stabilization. Ψ modification of CUG repeats rescued mis-splicing in a DM1 cell model and prevented CUG repeat toxicity in zebrafish embryos. This study indicates that the structure of toxic RNAs has a significant role in controlling the onset of neuromuscular diseases.

  16. Cross-validation pitfalls when selecting and assessing regression and classification models.

    PubMed

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
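
    The two procedures can be written down directly with scikit-learn: repeated grid-search cross-validation for model selection, and repeated nested cross-validation (the grid search is refit inside every outer fold) for model assessment. The synthetic regression data and ridge model below are placeholders for the paper's QSAR datasets.

```python
# Repeated grid-search CV for selection and repeated nested CV for assessment.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, RepeatedKFold, cross_val_score

# synthetic stand-in for a QSAR dataset
X, y = make_regression(n_samples=200, n_features=30, noise=10.0, random_state=0)
param_grid = {"alpha": np.logspace(-3, 3, 13)}

# model selection: repeated grid-search V-fold cross-validation
inner_cv = RepeatedKFold(n_splits=5, n_repeats=5, random_state=1)
search = GridSearchCV(Ridge(), param_grid, cv=inner_cv,
                      scoring="neg_mean_squared_error")
search.fit(X, y)
print("selected alpha:", search.best_params_["alpha"])

# model assessment: repeated nested cross-validation
# (the grid search is re-run inside every outer training fold)
outer_cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=2)
scores = cross_val_score(search, X, y, cv=outer_cv,
                         scoring="neg_mean_squared_error")
rmse = np.sqrt(-scores)
print(f"nested-CV RMSE: {rmse.mean():.2f} +/- {rmse.std():.2f}")
```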

  17. Using an external surrogate for predictor model training in real-time motion management of lung tumors.

    PubMed

    Rottmann, Joerg; Berbeco, Ross

    2014-12-01

    Precise prediction of respiratory motion is a prerequisite for real-time motion compensation techniques such as beam, dynamic couch, or dynamic multileaf collimator tracking. Collection of tumor motion data to train the prediction model is required for most algorithms. To avoid exposure of patients to additional dose from imaging during this procedure, the feasibility of training a linear respiratory motion prediction model with an external surrogate signal is investigated and its performance benchmarked against training the model with tumor positions directly. The authors implement a lung tumor motion prediction algorithm based on linear ridge regression that is suitable to overcome system latencies up to about 300 ms. Its performance is investigated on a data set of 91 patient breathing trajectories recorded from fiducial marker tracking during radiotherapy delivery to the lung of ten patients. The expected 3D geometric error is quantified as a function of predictor lookahead time, signal sampling frequency and history vector length. Additionally, adaptive model retraining is evaluated, i.e., repeatedly updating the prediction model after initial training. Training length for this is gradually increased with incoming (internal) data availability. To assess practical feasibility model calculation times as well as various minimum data lengths for retraining are evaluated. Relative performance of model training with external surrogate motion data versus tumor motion data is evaluated. However, an internal-external motion correlation model is not utilized, i.e., prediction is solely driven by internal motion in both cases. Similar prediction performance was achieved for training the model with external surrogate data versus internal (tumor motion) data. Adaptive model retraining can substantially boost performance in the case of external surrogate training while it has little impact for training with internal motion data. A minimum adaptive retraining data length of 8 s and history vector length of 3 s achieve maximal performance. Sampling frequency appears to have little impact on performance confirming previously published work. By using the linear predictor, a relative geometric 3D error reduction of about 50% was achieved (using adaptive retraining, a history vector length of 3 s and with results averaged over all investigated lookahead times and signal sampling frequencies). The absolute mean error could be reduced from (2.0 ± 1.6) mm when using no prediction at all to (0.9 ± 0.8) mm and (1.0 ± 0.9) mm when using the predictor trained with internal tumor motion training data and external surrogate motion training data, respectively (for a typical lookahead time of 250 ms and sampling frequency of 15 Hz). A linear prediction model can reduce latency induced tracking errors by an average of about 50% in real-time image guided radiotherapy systems with system latencies of up to 300 ms. Training a linear model for lung tumor motion prediction with an external surrogate signal alone is feasible and results in similar performance as training with (internal) tumor motion. Particularly for scenarios where motion data are extracted from fluoroscopic imaging with ionizing radiation, this may alleviate the need for additional imaging dose during the collection of model training data.
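
    A minimal version of the linear ridge predictor is sketched below: a history vector of past samples is regressed onto the sample one lookahead interval ahead, the model is trained on a surrogate trace, and prediction is then driven by the (simulated) internal trace. The breathing waveforms, sampling rate, history length and latency are invented for illustration.

```python
# Toy linear ridge predictor for respiratory motion with a history vector.
import numpy as np
from sklearn.linear_model import Ridge

fs, lookahead_s, hist_s = 15.0, 0.25, 3.0            # sampling rate (Hz), latency, history (s)
lag, hist = round(lookahead_s * fs), round(hist_s * fs)

def breathing(t, amp, rng, noise=0.2):
    """Quasi-periodic zero-mean breathing trace (mm) with additive noise."""
    return amp * (np.sin(2 * np.pi * t / 4.0)
                  + 0.3 * np.sin(4 * np.pi * t / 4.0 + 0.7)) + rng.normal(0, noise, t.size)

rng = np.random.default_rng(6)
t = np.arange(0, 120, 1 / fs)
surrogate = breathing(t, 10.0, rng)                  # external marker trace
tumor = breathing(t, 6.0, rng)                       # internal (tumor) trace

def make_xy(sig):
    """History vectors of `hist` past samples and the target `lag` samples ahead."""
    X = np.array([sig[i - hist:i] for i in range(hist, len(sig) - lag)])
    return X, sig[hist + lag:]

Xs, ys = make_xy(surrogate)
model = Ridge(alpha=1.0).fit(Xs, ys)                 # trained on the surrogate only

Xt, yt = make_xy(tumor)                              # prediction driven by internal motion
err_pred = np.abs(model.predict(Xt) - yt)
err_none = np.abs(Xt[:, -1] - yt)                    # latency left uncorrected
print(f"mean error: {err_pred.mean():.2f} mm with prediction "
      f"vs {err_none.mean():.2f} mm without")
```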

  18. Tandem repeats of the 5' non-transcribed spacer of Tetrahymena rDNA function as high copy number autonomous replicons in the macronucleus but do not prevent rRNA gene dosage regulation.

    PubMed Central

    Pan, W J; Blackburn, E H

    1995-01-01

    The rRNA genes in the somatic macronucleus of Tetrahymena thermophila are normally on 21 kb linear palindromic molecules (rDNA). We examined the effect on rRNA gene dosage of transforming T. thermophila macronuclei with plasmid constructs containing a pair of tandemly repeated rDNA replication origin regions unlinked to the rRNA gene. A significant proportion of the plasmid sequences were maintained as high copy circular molecules, eventually consisting solely of tandem arrays of origin regions. As reported previously for cells transformed by a construct in which the same tandem rDNA origins were linked to the rRNA gene [Yu, G.-L. and Blackburn, E. H. (1990) Mol. Cell. Biol., 10, 2070-2080], origin sequences recombined to form linear molecules bearing several tandem repeats of the origin region, as well as rRNA genes. The total number of rDNA origin sequences eventually exceeded rRNA gene copies by approximately 20- to 40-fold and the number of circular replicons carrying only rDNA origin sequences exceeded rRNA gene copies by 2- to 3-fold. However, the rRNA gene dosage was unchanged. Hence, simply monitoring the total number of rDNA origin regions is not sufficient to regulate rRNA gene copy number. PMID:7784211

  19. Effects of repeated snowboard exercise in virtual reality with time lags of visual scene behind body rotation on head stability and subjective slalom run performance in healthy young subjects.

    PubMed

    Wada, Yoshiro; Nishiike, Suetaka; Kitahara, Tadashi; Yamanaka, Toshiaki; Imai, Takao; Ito, Taeko; Sato, Go; Matsuda, Kazunori; Kitamura, Yoshiaki; Takeda, Noriaki

    2016-11-01

    After repeated snowboard exercise in the virtual reality (VR) world with increasing time lags in trials 3-8, the results suggest that adaptation to repeated visual-vestibulosomatosensory conflict in the VR world improved dynamic posture control and motor performance in the real world without the development of motion sickness. VR technology was used to examine the effects of repeated snowboard exercise in the VR world, with time lags between the visual scene and body rotation, on head stability and slalom run performance in healthy subjects. Forty-two healthy young subjects participated in the study. After trials 1 and 2 of snowboard exercise in the VR world without a time lag, trials 3-8 were conducted with the computer-generated visual scene lagging board rotation by 0.1, 0.2, 0.3, 0.4, 0.5, and 0.6 s, respectively. Finally, trial 9 was conducted without a time lag. Head linear accelerations and subjective slalom run performance were evaluated. The standard deviations of head linear accelerations in the inter-aural direction were significantly increased in trial 8, with a time lag of 0.6 s, but significantly decreased in trial 9 without a time lag, compared with those in trial 2 without a time lag. The subjective scores of slalom run performance were significantly decreased in trial 8, with a time lag of 0.6 s, but significantly increased in trial 9 without a time lag, compared with those in trial 2 without a time lag. Motion sickness was not induced in any subject.

  20. Reliability of tonosafe disposable tonometer prisms: clinical implications from the Veterans Affairs Boston Healthcare System Quality Assurance Study.

    PubMed

    Thomas, V; Daly, M K; Cakiner-Egilmez, T; Baker, E

    2011-05-01

    Given the Veterans Affairs Boston Healthcare System's recent introduction of single-use Tonosafe disposable tonometer prisms as an alternative to Goldmann applanation tonometers (GATs), this study had two aims: to conduct a large-scale quality assurance trial to assess the reliability of intraocular pressure (IOP) measurements of the Tonosafe disposable tonometer compared with GAT, particularly at extremes of pressure; to evaluate the suitability of Tonosafe disposable tonometer prisms as an acceptable substitute for GATs and for clinic-wide implementation in an academic tertiary referral setting. Ophthalmology resident physicians measured the IOPs of patients in general and specialty eye clinics with the Tonosafe disposable tonometer and GAT. Tonosafe test-retest reliability data were also collected. A retrospective review of patient charts and data analysis were performed to determine the reliability of measurements. The IOPs of 652 eyes (326 patients) were measured with both GAT and Tonosafe, with a range of 3-34 mm Hg. Linear regression analysis showed R=0.93, slope=0.91, both of which supported the proposed hypothesis, and the y-intercept=-1.05 was significantly different from the hypothesized value. The Tonosafe test-retest repeatability (40 eyes of 40 patients), r=0.977, was very high, which was further supported by linear regression slope=0.993, y-intercept=0.118, and a Tonosafe repeatability coefficient of 2.06, similar to GAT repeatability. The IOP measurements by Tonosafe disposable prisms correlated closely with Goldmann measurements, with similar repeated measurement variability to GAT. This suggests that the Tonosafe is an acceptable substitute for GAT to measure IOP in ophthalmology clinic settings.
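
    The two quantities reported above, the regression of one tonometer against the other and a test-retest repeatability coefficient, can be computed as in the sketch below on simulated IOP readings; the repeatability coefficient is taken here as 1.96 times the SD of paired differences, one common definition, since the abstract does not state the exact formula used.

```python
# Agreement and repeatability statistics on simulated paired IOP readings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
gat = rng.uniform(8, 30, 300)                          # mmHg, simulated GAT readings
tonosafe = 0.95 * gat - 0.5 + rng.normal(0, 1.2, gat.size)

slope, intercept, r, p, se = stats.linregress(gat, tonosafe)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, R = {r:.2f}")

retest1 = tonosafe[:40]                                # first Tonosafe reading
retest2 = retest1 + rng.normal(0, 0.75, 40)            # second Tonosafe reading
diff = retest2 - retest1
print(f"repeatability coefficient = {1.96 * diff.std(ddof=1):.2f} mmHg")
```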

  1. Reliability of tonosafe disposable tonometer prisms: clinical implications from the Veterans Affairs Boston Healthcare System Quality Assurance Study

    PubMed Central

    Thomas, V; Daly, M K; Cakiner-Egilmez, T; Baker, E

    2011-01-01

    Purpose Given the Veterans Affairs Boston Healthcare System's recent introduction of single-use Tonosafe disposable tonometer prisms as an alternative to Goldmann applanation tonometers (GATs), this study had two aims: to conduct a large-scale quality assurance trial to assess the reliability of intraocular pressure (IOP) measurements of the Tonosafe disposable tonometer compared with GAT, particularly at extremes of pressure; to evaluate the suitability of Tonosafe disposable tonometer prisms as an acceptable substitute for GATs and for clinic-wide implementation in an academic tertiary referral setting. Methods Ophthalmology resident physicians measured the IOPs of patients in general and specialty eye clinics with the Tonosafe disposable tonometer and GAT. Tonosafe test–retest reliability data were also collected. A retrospective review of patient charts and data analysis were performed to determine the reliability of measurements. Results The IOPs of 652 eyes (326 patients) were measured with both GAT and Tonosafe, with a range of 3–34 mm Hg. Linear regression analysis showed R=0.93, slope=0.91, both of which supported the proposed hypothesis, and the y-intercept=−1.05 was significantly different from the hypothesized value. The Tonosafe test–retest repeatability (40 eyes of 40 patients), r=0.977, was very high, which was further supported by linear regression slope=0.993, y-intercept=0.118, and a Tonosafe repeatability coefficient of 2.06, similar to GAT repeatability. Conclusions The IOP measurements by Tonosafe disposable prisms correlated closely with Goldmann measurements, with similar repeated measurement variability to GAT. This suggests that the Tonosafe is an acceptable substitute for GAT to measure IOP in ophthalmology clinic settings. PMID:21455241

  2. Seasonal Effect on Ocular Sun Exposure and Conjunctival UV Autofluorescence.

    PubMed

    Haworth, Kristina M; Chandler, Heather L

    2017-02-01

    To evaluate feasibility and repeatability of measures for ocular sun exposure and conjunctival ultraviolet autofluorescence (UVAF), and to test for relationships between the outcomes. Fifty volunteers were seen for two visits 14 ± 2 days apart. Ocular sun exposure was estimated over a 2-week time period using questionnaires that quantified time outdoors and ocular protection habits. Conjunctival UVAF was imaged using a Nikon D7000 camera system equipped with appropriate flash and filter system; image analysis was done using ImageJ software. Repeatability estimates were made using Bland-Altman plots with mean differences and 95% limits of agreement calculated. Non-normally distributed data were transformed by either log10 or square root methods. Linear regression was conducted to evaluate relationships between measures. Mean (±SD) values for ocular sun exposure and conjunctival UVAF were 8.86 (±11.97) hours and 9.15 (±9.47) mm², respectively. Repeatability was found to be acceptable for both ocular sun exposure and conjunctival UVAF. Univariate linear regression showed outdoor occupation to be a predictor of higher ocular sun exposure; outdoor occupation and winter season of collection both predicted higher total UVAF. Furthermore, increased portion of day spent outdoors while working was associated with increased total conjunctival UVAF. We demonstrate feasibility and repeatability of estimating ocular sun exposure using a previously unreported method and for conjunctival UVAF in a group of subjects residing in Ohio. Seasonal temperature variation may have influenced time outdoors and ultimately calculation of ocular sun exposure. As winter season of collection and outdoor occupation both predicted higher total UVAF, our data suggests that ocular sun exposure is associated with conjunctival UVAF and, possibly, that UVAF remains for at least several months after sun exposure.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohlgemuth, J.; Bokria, J.; Gu, X.

    Polymeric encapsulation materials may change size when processed at typical module lamination temperatures. The relief of residual strain, trapped during the manufacture of encapsulation sheet, can affect module performance and reliability. For example, displaced cells and interconnects threaten cell fracture, broken interconnects (open circuits and ground faults), delamination at interfaces, and void formation. A standardized test for the characterization of change in linear dimensions of encapsulation sheet has been developed and verified. The IEC 62788-1-5 standard quantifies the maximum change in linear dimensions that may occur, to allow for process control of size change. Developments incorporated into the Committee Draft (CD) of the standard, as well as the assessment of the repeatability and reproducibility of the test method, are described here. No pass/fail criteria are given in the standard; rather, a repeatable protocol to quantify the change in dimension is provided to aid those working with encapsulation. The round-robin experiment described here identified that the repeatability and reproducibility of the measurements is on the order of 1%. Recent refinements to the test procedure to improve repeatability and reproducibility include: the use of a convection oven to improve the thermal equilibration time constant and its uniformity; well-defined measurement locations to reduce the effects of sampling size and location relative to the specimen edges; a standardized sand substrate that may be readily obtained to reduce friction that would otherwise complicate the results; defined specimen sampling, so that material is examined at known sites across the width and length of rolls; and examination of the encapsulation at the manufacturer's recommended processing temperature, except when a cross-linking reaction may limit the size change. EVA, for example, should be examined at 100 °C, between its melt transition (occurring up to 80 °C) and the onset of cross-linking (often at 100 °C).

  4. Seasonal Effect on Ocular Sun Exposure and Conjunctival UV Autofluorescence

    PubMed Central

    Haworth, Kristina M.; Chandler, Heather L.

    2016-01-01

    Purpose To evaluate feasibility and repeatability of measures for ocular sun exposure and conjunctival ultraviolet autofluorescence (UVAF), and to test for relationships between the outcomes. Methods Fifty volunteers were seen for 2 visits 14±2 days apart. Ocular sun exposure was estimated over a two-week time period using questionnaires that quantified time outdoors and ocular protection habits. Conjunctival UVAF was imaged using a Nikon D7000 camera system equipped with appropriate flash and filter system; image analysis was done using ImageJ software. Repeatability estimates were made using Bland-Altman plots with mean differences and 95% limits of agreement calculated. Non-normally distributed data was transformed by either log10 or square root methods. Linear regression was conducted to evaluate relationships between measures. Results Mean (±SD) values for ocular sun exposure and conjunctival UVAF were 8.86 (±11.97) hours and 9.15 (±9.47) mm2, respectively. Repeatability was found to be acceptable for both ocular sun exposure and conjunctival UVAF. Univariate linear regression showed outdoor occupation to be a predictor of higher ocular sun exposure; outdoor occupation and winter season of collection both predicted higher total UVAF. Furthermore, increased portion of day spent outdoors while working was associated with increased total conjunctival UVAF. Conclusions We demonstrate feasibility and repeatability of estimating ocular sun exposure using a previously unreported method and for conjunctival UVAF in a group of subjects residing in Ohio. Seasonal temperature variation may have influenced time outdoors and ultimately calculation of ocular sun exposure. As winter season of collection and outdoor occupation both predicted higher total UVAF, our data suggests that ocular sun exposure is associated with conjunctival UVAF and possibly, that UVAF remains for at least several months following sun exposure. PMID:27820717

  5. General radiographic attributes of optically stimulated luminescence dosimeters: A basic insight

    NASA Astrophysics Data System (ADS)

    Musa, Y.; Hashim, S.; Ghoshal, S. K.; Bradley, D. A.; Ahmad, N. E.; Karim, M. K. A.; Hashim, A.; Kadir, A. B. A.

    2018-06-01

    We report the ubiquitous radiographic characteristics of optically stimulated luminescence dosimeters (OSLD), the so-called nanoDot OSLDs (Landauer Inc., Glenwood, IL). The X-ray irradiations were performed in free air to inspect the repeatability, the reproducibility, the signal depletion, the element correction factors (ECFs), the dose response and the energy dependence. Repeatability of multiple readouts after a single irradiation to 10 mGy revealed a coefficient of variation below 3%, while the reproducibility in repeated irradiation-readout-annealing cycles was above 2%. The OSL signal depletion for three nanoDots irradiated simultaneously to 20 mGy and read out sequentially 25 times displayed a consistent signal reduction of ≈0.5% per readout with R² values over 0.98. ECFs for individual OSLDs varied from 0.97 to 1.03. In the entire dose range under 80 kV, good linearity with an R² exceeding 0.99 was achieved. Besides, the percentage difference between OSLD and ion-chamber dose was less than 5%, which was superior to TLD. The energy response factors for X-ray photon irradiation (between 0.76 and 1.12) in the range of 40-150 kV (26.1-61.2 keV) exhibited significant energy dependence. Indeed, the nanoDot OSLDs showed good repeatability, reproducibility and linearity. The OSLD-measured doses were closer to ion-chamber doses than those of TLD; the agreement can be further improved to ≈3% by applying the individual dosimeter ECFs. In addition, the energy-dependent uncertainties can be minimized using the energy correction factors. It is established that the studied nanoDot OSLDs are promising for measuring entrance dose in general radiographic practices.
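    A brief sketch of two of the figures of merit reported above, the coefficient of variation of repeated readouts and the R² of a dose-linearity fit; the readings and doses are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

readouts = np.array([10.2, 10.0, 9.9, 10.1, 10.3])       # repeated readouts of one dose (mGy)
cv = 100.0 * readouts.std(ddof=1) / readouts.mean()      # coefficient of variation (%)

doses = np.array([1, 2, 5, 10, 20, 40], dtype=float)     # delivered doses (mGy)
signals = np.array([1.1, 2.0, 5.2, 9.8, 20.5, 39.6])     # OSL response in dose units
fit = stats.linregress(doses, signals)
print(cv, fit.slope, fit.intercept, fit.rvalue ** 2)     # R^2 for the linearity check
```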

  6. Evaluation of goal kicking performance in international rugby union matches.

    PubMed

    Quarrie, Kenneth L; Hopkins, Will G

    2015-03-01

    Goal kicking is an important element in rugby but has been the subject of minimal research. To develop and apply a method to describe the on-field pattern of goal-kicking and rank the goal kicking performance of players in international rugby union matches. Longitudinal observational study. A generalized linear mixed model was used to analyze goal-kicking performance in a sample of 582 international rugby matches played from 2002 to 2011. The model adjusted for kick distance, kick angle, a rating of the importance of each kick, and venue-related conditions. Overall, 72% of the 6769 kick attempts were successful. Forty-five percent of points scored during the matches resulted from goal kicks, and in 5.7% of the matches the result of the match hinged on the outcome of a kick attempt. There was an extremely large decrease in success with increasing distance (odds ratio for two SD distance 0.06, 90% confidence interval 0.05-0.07) and a small decrease with increasingly acute angle away from the mid-line of the goal posts (odds ratio for 2 SD angle, 0.44, 0.39-0.49). Differences between players were typically small (odds ratio for 2 between-player SD 0.53, 0.45-0.65). The generalized linear mixed model with its random-effect solutions provides a tool for ranking the performance of goal kickers in rugby. This modelling approach could be applied to other performance indicators in rugby and in other sports in which discrete outcomes are measured repeatedly on players or teams. Copyright © 2015. Published by Elsevier Ltd.
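    The study's analysis used a generalized linear mixed model with random player effects; the sketch below fits only the fixed-effects logistic part on simulated kick data, standardizing the predictors so the coefficients correspond to a 2-SD change, as in the reported odds ratios. All values and coefficients here are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"distance": rng.uniform(20, 55, n),   # kick distance (m), simulated
                   "angle": rng.uniform(0, 45, n)})      # angle from the posts' mid-line (deg)
logit_p = 4.0 - 0.12 * df["distance"] - 0.02 * df["angle"]
df["success"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Standardize so each coefficient reflects a 2-SD change in the predictor.
for col in ("distance", "angle"):
    df[col + "_2sd"] = (df[col] - df[col].mean()) / (2 * df[col].std())

fit = smf.logit("success ~ distance_2sd + angle_2sd", data=df).fit(disp=0)
print(np.exp(fit.params))                                # odds ratios per 2-SD increase
```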

  7. Developing a computer-controlled simulated digestion system to predict the concentration of metabolizable energy of feedstuffs for rooster.

    PubMed

    Zhao, F; Ren, L Q; Mi, B M; Tan, H Z; Zhao, J T; Li, H; Zhang, H F; Zhang, Z Y

    2014-04-01

    Four experiments were conducted to evaluate the effectiveness of a computer-controlled simulated digestion system (CCSDS) for predicting apparent metabolizable energy (AME) and true metabolizable energy (TME) using in vitro digestible energy (IVDE) content of feeds for roosters. In Exp. 1, the repeatability of the IVDE assay was tested in corn, wheat, rapeseed meal, and cottonseed meal with 3 assays of each sample and each with 5 replicates of the same sample. In Exp. 2, the additivity of IVDE concentration in corn, soybean meal, and cottonseed meal was tested by comparing determined IVDE values of the complete diet with values predicted from measurements on individual ingredients. In Exp. 3, linear models to predict AME and TME based on IVDE were developed with 16 calibration samples. In Exp. 4, the accuracy of prediction models was tested by the differences between predicted and determined values for AME or TME of 6 ingredients and 4 diets. In Exp. 1, the mean CV of IVDE was 0.88% (range = 0.20 to 2.14%) for corn, wheat, rapeseed meal, and cottonseed meal. No difference in IVDE was observed between 3 assays of an ingredient, indicating that the IVDE assay is repeatable under these conditions. In Exp. 2, minimal differences (<21 kcal/kg) were observed between determined and calculated IVDE of 3 complete diets formulated with corn, soybean meal, and cottonseed meal, demonstrating that the IVDE values are additive in a complete diet. In Exp. 3, linear relationships between AME and IVDE and between TME and IVDE were observed in 16 calibration samples: AME = 1.062 × IVDE - 530 (R² = 0.97, residual standard deviation [RSD] = 146 kcal/kg, P < 0.001) and TME = 1.050 × IVDE - 16 (R² = 0.97, RSD = 148 kcal/kg, P < 0.001). Differences of less than 100 kcal/kg were observed between determined and predicted values in 10 and 9 of the 16 calibration samples for AME and TME, respectively. In Exp. 4, differences of less than 100 kcal/kg between determined and predicted values were observed in 3 and 4 of the 6 ingredient samples for AME and TME, respectively, and all 4 diets showed differences of less than 25 kcal/kg between determined and predicted AME or TME. Our results indicate that the CCSDS is repeatable and additive. This system accurately predicted AME or TME on 17 of the 26 samples and may be a promising method to predict the energetic values of feed for poultry.
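    Given the calibration equations reported for Exp. 3, predicted AME and TME follow directly from a measured IVDE value; a minimal sketch (the example IVDE value is hypothetical):

```python
def predict_ame(ivde):
    """AME (kcal/kg) from in vitro digestible energy, using the Exp. 3 calibration."""
    return 1.062 * ivde - 530

def predict_tme(ivde):
    """TME (kcal/kg) from in vitro digestible energy, using the Exp. 3 calibration."""
    return 1.050 * ivde - 16

ivde = 3900  # hypothetical IVDE of a feed sample, kcal/kg
print(predict_ame(ivde), predict_tme(ivde))
```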

  8. Greater neurobehavioral deficits occur in adult mice after repeated, as compared to single, mild traumatic brain injury (mTBI).

    PubMed

    Nichols, Jessica N; Deshane, Alok S; Niedzielko, Tracy L; Smith, Cory D; Floyd, Candace L

    2016-02-01

    Mild traumatic brain injury (mTBI) accounts for the majority of all brain injuries and affected individuals typically experience some extent of cognitive and/or neuropsychiatric deficits. Given that repeated mTBIs often result in worsened prognosis, the cumulative effect of repeated mTBIs is an area of clinical concern and on-going pre-clinical research. Animal models are critical in elucidating the underlying mechanisms of single and repeated mTBI-associated deficits, but the neurobehavioral sequelae produced by these models have not been well characterized. Thus, we sought to evaluate the behavioral changes incurred after single and repeated mTBIs in mice utilizing a modified impact-acceleration model. Mice in the mTBI group received 1 impact while the repeated mTBI group received 3 impacts with an inter-injury interval of 24h. Classic behavior evaluations included the Morris water maze (MWM) to assess learning and memory, elevated plus maze (EPM) for anxiety, and forced swim test (FST) for depression/helplessness. Additionally, species-typical behaviors were evaluated with the marble-burying and nestlet shredding tests to determine motivation and apathy. Non-invasive vibration platforms were used to examine sleep patterns post-mTBI. We found that the repeated mTBI mice demonstrated deficits in MWM testing and poorer performance on species-typical behaviors. While neither single nor repeated mTBI affected behavior in the EPM or FST, sleep disturbances were observed after both single and repeated mTBI. Here, we conclude that behavioral alterations shown after repeated mTBI resemble several of the deficits or disturbances reported by patients, thus demonstrating the relevance of this murine model to study repeated mTBIs. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. GWM-a ground-water management process for the U.S. Geological Survey modular ground-water model (MODFLOW-2000)

    USGS Publications Warehouse

    Ahlfeld, David P.; Barlow, Paul M.; Mulligan, Anne E.

    2005-01-01

    GWM is a Ground-Water Management Process for the U.S. Geological Survey modular three-dimensional ground-water model, MODFLOW-2000. GWM uses a response-matrix approach to solve several types of linear, nonlinear, and mixed-binary linear ground-water management formulations. Each management formulation consists of a set of decision variables, an objective function, and a set of constraints. Three types of decision variables are supported by GWM: flow-rate decision variables, which are withdrawal or injection rates at well sites; external decision variables, which are sources or sinks of water that are external to the flow model and do not directly affect the state variables of the simulated ground-water system (heads, streamflows, and so forth); and binary variables, which have values of 0 or 1 and are used to define the status of flow-rate or external decision variables. Flow-rate decision variables can represent wells that extend over one or more model cells and be active during one or more model stress periods; external variables also can be active during one or more stress periods. A single objective function is supported by GWM, which can be specified to either minimize or maximize the weighted sum of the three types of decision variables. Four types of constraints can be specified in a GWM formulation: upper and lower bounds on the flow-rate and external decision variables; linear summations of the three types of decision variables; hydraulic-head based constraints, including drawdowns, head differences, and head gradients; and streamflow and streamflow-depletion constraints. The Response Matrix Solution (RMS) Package of GWM uses the Ground-Water Flow Process of MODFLOW to calculate the change in head at each constraint location that results from a perturbation of a flow-rate variable; these changes are used to calculate the response coefficients. For linear management formulations, the resulting matrix of response coefficients is then combined with other components of the linear management formulation to form a complete linear formulation; the formulation is then solved by use of the simplex algorithm, which is incorporated into the RMS Package. Nonlinear formulations arise for simulated conditions that include water-table (unconfined) aquifers or head-dependent boundary conditions (such as streams, drains, or evapotranspiration from the water table). Nonlinear formulations are solved by sequential linear programming; that is, repeated linearization of the nonlinear features of the management problem. In this approach, response coefficients are recalculated for each iteration of the solution process. Mixed-binary linear (or mildly nonlinear) formulations are solved by use of the branch and bound algorithm, which is also incorporated into the RMS Package. Three sample problems are provided to demonstrate the use of GWM for typical ground-water flow management problems. These sample problems provide examples of how GWM input files are constructed to specify the decision variables, objective function, constraints, and solution process for a GWM run. The GWM Process runs with the MODFLOW-2000 Global and Ground-Water Flow Processes, but in its current form GWM cannot be used with the Observation, Sensitivity, Parameter-Estimation, or Ground-Water Transport Processes. The GWM Process is written with a modular structure so that new objective functions, constraint types, and solution algorithms can be added.
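    As an illustration of the response-matrix idea, the sketch below solves a toy linear management formulation with scipy: maximize total withdrawal subject to drawdown constraints expressed through a matrix of response coefficients. The coefficients, bounds, and limits are hypothetical, and GWM itself solves such formulations internally with its own simplex and branch-and-bound routines rather than with scipy.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical response coefficients: drawdown (m) at two head-constraint
# locations per unit withdrawal rate (m^3/d) at three candidate wells.
R = np.array([[0.004, 0.002, 0.001],
              [0.001, 0.003, 0.005]])
max_drawdown = np.array([2.0, 2.5])       # allowable drawdown at each location (m)
bounds = [(0, 1500)] * 3                  # per-well withdrawal limits (m^3/d)

# Maximize total withdrawal = minimize its negative, subject to R @ q <= max_drawdown.
res = linprog(c=-np.ones(3), A_ub=R, b_ub=max_drawdown, bounds=bounds, method="highs")
print(res.x, -res.fun)                    # optimal rates and total withdrawal
```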

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rottmann, Joerg; Berbeco, Ross

    Purpose: Precise prediction of respiratory motion is a prerequisite for real-time motion compensation techniques such as beam, dynamic couch, or dynamic multileaf collimator tracking. Collection of tumor motion data to train the prediction model is required for most algorithms. To avoid exposure of patients to additional dose from imaging during this procedure, the feasibility of training a linear respiratory motion prediction model with an external surrogate signal is investigated and its performance benchmarked against training the model with tumor positions directly. Methods: The authors implement a lung tumor motion prediction algorithm based on linear ridge regression that is suitable to overcome system latencies up to about 300 ms. Its performance is investigated on a data set of 91 patient breathing trajectories recorded from fiducial marker tracking during radiotherapy delivery to the lung of ten patients. The expected 3D geometric error is quantified as a function of predictor lookahead time, signal sampling frequency and history vector length. Additionally, adaptive model retraining is evaluated, i.e., repeatedly updating the prediction model after initial training. Training length for this is gradually increased with incoming (internal) data availability. To assess practical feasibility, model calculation times as well as various minimum data lengths for retraining are evaluated. Relative performance of model training with external surrogate motion data versus tumor motion data is evaluated. However, an internal–external motion correlation model is not utilized, i.e., prediction is solely driven by internal motion in both cases. Results: Similar prediction performance was achieved for training the model with external surrogate data versus internal (tumor motion) data. Adaptive model retraining can substantially boost performance in the case of external surrogate training while it has little impact for training with internal motion data. A minimum adaptive retraining data length of 8 s and history vector length of 3 s achieve maximal performance. Sampling frequency appears to have little impact on performance, confirming previously published work. By using the linear predictor, a relative geometric 3D error reduction of about 50% was achieved (using adaptive retraining, a history vector length of 3 s and with results averaged over all investigated lookahead times and signal sampling frequencies). The absolute mean error could be reduced from (2.0 ± 1.6) mm when using no prediction at all to (0.9 ± 0.8) mm and (1.0 ± 0.9) mm when using the predictor trained with internal tumor motion training data and external surrogate motion training data, respectively (for a typical lookahead time of 250 ms and sampling frequency of 15 Hz). Conclusions: A linear prediction model can reduce latency induced tracking errors by an average of about 50% in real-time image guided radiotherapy systems with system latencies of up to 300 ms. Training a linear model for lung tumor motion prediction with an external surrogate signal alone is feasible and results in similar performance as training with (internal) tumor motion. Particularly for scenarios where motion data are extracted from fluoroscopic imaging with ionizing radiation, this may alleviate the need for additional imaging dose during the collection of model training data.
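    A minimal sketch of the core idea, training a linear ridge regression to map a recent history vector of positions to the position one lookahead interval ahead; the breathing trace below is synthetic, while the sampling frequency, lookahead time, and history length are the example values quoted in the abstract.

```python
import numpy as np
from sklearn.linear_model import Ridge

fs, lookahead_s, history_s = 15, 0.25, 3.0            # Hz, s, s
h, la = int(history_s * fs), int(round(lookahead_s * fs))

rng = np.random.default_rng(1)
t = np.arange(0, 120, 1 / fs)
pos = 10 * np.sin(2 * np.pi * t / 4) + 0.3 * rng.standard_normal(t.size)  # motion trace (mm)

# Build (history vector, future position) pairs and train on the first part of the trace.
X = np.array([pos[i - h:i] for i in range(h, pos.size - la)])
y = pos[h + la:]
model = Ridge(alpha=1.0).fit(X[:1000], y[:1000])

rmse = np.sqrt(np.mean((model.predict(X[1000:]) - y[1000:]) ** 2))
print(rmse)                                           # latency-compensated prediction error
```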

  11. The mismatch repair system protects against intergenerational GAA repeat instability in a Friedreich ataxia mouse model.

    PubMed

    Ezzatizadeh, Vahid; Pinto, Ricardo Mouro; Sandi, Chiranjeevi; Sandi, Madhavi; Al-Mahdawi, Sahar; Te Riele, Hein; Pook, Mark A

    2012-04-01

    Friedreich ataxia (FRDA) is an autosomal recessive neurodegenerative disorder caused by a dynamic GAA repeat expansion mutation within intron 1 of the FXN gene. Studies of mouse models for other trinucleotide repeat (TNR) disorders have revealed an important role of mismatch repair (MMR) proteins in TNR instability. To explore the potential role of MMR proteins on intergenerational GAA repeat instability in FRDA, we have analyzed the transmission of unstable GAA repeat expansions from FXN transgenic mice which have been crossed with mice that are deficient for Msh2, Msh3, Msh6 or Pms2. We find in all cases that absence of parental MMR protein not only maintains transmission of GAA expansions and contractions, but also increases GAA repeat mutability (expansions and/or contractions) in the offspring. This indicates that Msh2, Msh3, Msh6 and Pms2 proteins are not the cause of intergenerational GAA expansions or contractions, but act in their canonical MMR capacity to protect against GAA repeat instability. We further identified differential modes of action for the four MMR proteins. Thus, Msh2 and Msh3 protect against GAA repeat contractions, while Msh6 protects against both GAA repeat expansions and contractions, and Pms2 protects against GAA repeat expansions and also promotes contractions. Furthermore, we detected enhanced occupancy of Msh2 and Msh3 proteins downstream of the FXN expanded GAA repeat, suggesting a model in which Msh2/3 dimers are recruited to this region to repair mismatches that would otherwise produce intergenerational GAA contractions. These findings reveal substantial differences in the intergenerational dynamics of expanded GAA repeat sequences compared with expanded CAG/CTG repeats, where Msh2 and Msh3 are thought to actively promote repeat expansions. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. The mismatch repair system protects against intergenerational GAA repeat instability in a Friedreich ataxia mouse model

    PubMed Central

    Ezzatizadeh, Vahid; Pinto, Ricardo Mouro; Sandi, Chiranjeevi; Sandi, Madhavi; Al-Mahdawi, Sahar; te Riele, Hein; Pook, Mark A.

    2013-01-01

    Friedreich ataxia (FRDA) is an autosomal recessive neurodegenerative disorder caused by a dynamic GAA repeat expansion mutation within intron 1 of the FXN gene. Studies of mouse models for other trinucleotide repeat (TNR) disorders have revealed an important role of mismatch repair (MMR) proteins in TNR instability. To explore the potential role of MMR proteins on intergenerational GAA repeat instability in FRDA, we have analyzed the transmission of unstable GAA repeat expansions from FXN transgenic mice which have been crossed with mice that are deficient for Msh2, Msh3, Msh6 or Pms2. We find in all cases that absence of parental MMR protein not only maintains transmission of GAA expansions and contractions, but also increases GAA repeat mutability (expansions and/or contractions) in the offspring. This indicates that Msh2, Msh3, Msh6 and Pms2 proteins are not the cause of intergenerational GAA expansions or contractions, but act in their canonical MMR capacity to protect against GAA repeat instability. We further identified differential modes of action for the four MMR proteins. Thus, Msh2 and Msh3 protect against GAA repeat contractions, while Msh6 protects against both GAA repeat expansions and contractions, and Pms2 protects against GAA repeat expansions and also promotes contractions. Furthermore, we detected enhanced occupancy of Msh2 and Msh3 proteins downstream of the FXN expanded GAA repeat, suggesting a model in which Msh2/3 dimers are recruited to this region to repair mismatches that would otherwise produce intergenerational GAA contractions. These findings reveal substantial differences in the intergenerational dynamics of expanded GAA repeat sequences compared with expanded CAG/CTG repeats, where Msh2 and Msh3 are thought to actively promote repeat expansions. PMID:22289650

  13. Spatial enhancement of ECG using diagnostic similarity score based lead selective multi-scale linear model.

    PubMed

    Nallikuzhy, Jiss J; Dandapat, S

    2017-06-01

    In this work, a new patient-specific approach to enhance the spatial resolution of ECG is proposed and evaluated. The proposed model transforms a three-lead ECG into a standard twelve-lead ECG, thereby enhancing its spatial resolution. The three leads used for prediction are obtained from the standard twelve-lead ECG. The proposed model takes advantage of the improved inter-lead correlation in the wavelet domain. Since the model is patient-specific, it also selects the optimal predictor leads for a given patient using a lead selection algorithm. The lead selection algorithm is based on a new diagnostic similarity score which computes the diagnostic closeness between the original and the spatially enhanced leads. Standard closeness measures are used to assess the performance of the model. The similarity in diagnostic information between the original and the spatially enhanced leads is evaluated using various diagnostic measures. Repeatability and diagnosability analyses are performed to quantify the applicability of the model. A comparison of the proposed model is performed with existing models that transform a subset of the standard twelve-lead ECG into the standard twelve-lead ECG. From the analysis of the results, it is evident that the proposed model preserves diagnostic information better compared to other models. Copyright © 2017 Elsevier Ltd. All rights reserved.
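    The model described above is patient-specific, works in the wavelet domain, and includes a lead-selection step; the sketch below illustrates only the underlying linear step, a least-squares transform from three predictor leads to twelve leads, on synthetic signals.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples = 2000
true_T = rng.standard_normal((3, 12))                   # synthetic stand-in lead transform

leads3 = rng.standard_normal((n_samples, 3))            # three predictor leads
leads12 = leads3 @ true_T + 0.05 * rng.standard_normal((n_samples, 12))

# Patient-specific least-squares mapping from the 3 predictor leads to 12 leads.
T, *_ = np.linalg.lstsq(leads3, leads12, rcond=None)
reconstructed = leads3 @ T
print(np.abs(reconstructed - leads12).mean())           # mean reconstruction error
```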

  14. Elastic collisions of classical point particles on a finite frictionless linear track with perfectly reflecting endpoints

    NASA Astrophysics Data System (ADS)

    DeLuca, R.

    2006-03-01

    Repeated elastic collisions of point particles on a finite frictionless linear track with perfectly reflecting endpoints are considered. The problem is analysed by means of an elementary linear algebra approach. It is found that, starting from a state consisting of a projectile particle moving at constant velocity and a target particle at rest at a fixed known position, the points at which collisions occur on the track, when plotted against the progressive collision number, show periodic patterns for a rather large range of values of the initial position x(0) and of the mass ratio r. For certain values of these parameters, however, only regular behaviour over a large number of collisions is detected.
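    A small event-driven sketch of the system described above, using the standard elastic-collision and perfect-reflection updates; the track length, masses, and initial state are illustrative choices, not values from the paper.

```python
def collision_positions(m1, m2, x1, x2, v1, v2, L=1.0, n_events=10):
    """Positions of successive particle-particle collisions on [0, L] with
    perfectly reflecting endpoints; requires x1 < x2 initially."""
    points = []
    while len(points) < n_events:
        # Time to the next particle-particle collision (if approaching) and to each wall hit.
        tc = (x2 - x1) / (v1 - v2) if v1 > v2 else float("inf")
        tw1 = (L - x1) / v1 if v1 > 0 else (x1 / -v1 if v1 < 0 else float("inf"))
        tw2 = (L - x2) / v2 if v2 > 0 else (x2 / -v2 if v2 < 0 else float("inf"))
        t = min(tc, tw1, tw2)
        x1, x2 = x1 + v1 * t, x2 + v2 * t
        if t == tc:                        # elastic collision between the particles
            v1, v2 = (((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2),
                      ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2))
            points.append(x1)
        elif t == tw1:                     # particle 1 reflects off an endpoint
            v1 = -v1
        else:                              # particle 2 reflects off an endpoint
            v2 = -v2
    return points

# Projectile launched from x=0 at unit speed toward a resting target at x(0)=0.3, r=m2/m1=2.
print(collision_positions(m1=1.0, m2=2.0, x1=0.0, x2=0.3, v1=1.0, v2=0.0))
```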

  15. Dose-Response for Multiple Biomarkers of Exposure and Genotoxic Effect Following Repeated Treatment of Rats with the Alkylating Agents, MMS and MNU.

    PubMed

    Ji, Zhiying; LeBaron, Matthew J; Schisler, Melissa R; Zhang, Fagen; Bartels, Michael J; Gollapudi, B Bhaskar; Pottenger, Lynn H

    2016-05-01

    The nature of the dose-response relationship for various in vivo endpoints of exposure and effect was investigated using the alkylating agents, methyl methanesulfonate (MMS) and methylnitrosourea (MNU). Six male F344 rats/group were dosed orally with 0, 0.5, 1, 5, 25 or 50 mg/kg bw/day (mkd) of MMS, or 0, 0.01, 0.1, 1, 5, 10, 25 or 50 mkd of MNU, for 4 consecutive days and sacrificed 24 h after the last dose. The dose-responses for multiple biomarkers of exposure and genotoxic effect were investigated. In MMS-treated rats, the hemoglobin adduct level, a systemic exposure biomarker, increased linearly with dose (r² = 0.9990, P < 0.05), indicating the systemic availability of MMS; however, the N7MeG DNA adduct, a target exposure biomarker, exhibited a non-linear dose-response in blood and liver tissues. Blood reticulocyte micronuclei (MN), a genotoxic effect biomarker, exhibited a clear no-observed-genotoxic-effect-level (NOGEL) of 5 mkd as a point of departure (PoD) for MMS. Two separate dose-response models, the Lutz and Lutz model and the stepwise approach using PROC REG, both supported a bilinear/threshold dose-response for MN induction. Liver gene expression, a mechanistic endpoint, also exhibited a bilinear dose-response. Similarly, in MNU-treated rats, hepatic DNA adducts, gene expression changes and MN all exhibited clear PoDs, with a NOGEL of 1 mkd for MN induction, although dose-response modeling of the MNU-induced MN data showed a better statistical fit for a linear dose-response. In summary, these results provide in vivo data that support the existence of clear non-linear dose-responses for a number of biologically significant events along the pathway for genotoxicity induced by DNA-reactive agents. © The Author 2015. Published by Oxford University Press on behalf of the UK Environmental Mutagen Society. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
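    As an illustration of the bilinear/threshold versus linear comparison, the sketch below fits both forms to a hypothetical micronucleus dose-response with scipy and compares residual sums of squares; the data and the hockey-stick parameterization are assumptions, not the Lutz and Lutz or PROC REG implementations used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0, 0.5, 1, 5, 25, 50], dtype=float)      # mg/kg bw/day
mn = np.array([2.1, 2.0, 2.2, 2.3, 6.8, 12.5])            # hypothetical MN frequency

def linear(d, b0, slope):
    return b0 + slope * d

def bilinear(d, b0, slope, breakpoint):                    # flat up to a breakpoint, then linear
    return b0 + slope * np.clip(d - breakpoint, 0, None)

for model, p0 in ((linear, (2.0, 0.2)), (bilinear, (2.0, 0.2, 3.0))):
    popt, _ = curve_fit(model, dose, mn, p0=p0, maxfev=10000)
    rss = np.sum((mn - model(dose, *popt)) ** 2)
    print(model.__name__, np.round(popt, 3), round(rss, 3))
```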

  16. Biological X-ray irradiator characterization for use with small animals and cells.

    PubMed

    Bruno, A Colello; Mazaro, S J; Amaral, L L; Rego, E M; Oliveira, H F; Pavoni, J F

    2017-03-02

    This study presents the characterization of an X-ray irradiator through dosimetric tests, which confirms the actual dose rate that small animals and cells will be exposed to during radiobiological experiments. We evaluated the linearity, consistency, repeatability, and dose distribution in the positions in which the animals or cells are placed during irradiation. In addition, we evaluated the performance of the X-ray tube (voltage and tube operating current), the radiometric survey (leakage radiation) and safety devices. The irradiator default setting was established as 160 kV and 25 mA. Tests showed that the dose rate was linear over time (R² = 1) and remained stable for long (consistency) and short (repeatability) intervals between readings. The mean dose rate inside the animal cages was 1.27±0.06 Gy/min with a beam uniformity of 95.40% (above the minimum threshold guaranteed by the manufacturer). The mean dose rate inside the cell plates was 0.92±0.19 Gy/min. The dependence of dose rate on tube voltage and on tube current was quadratic and linear, respectively. No mechanical failure was observed during evaluation of the irradiator safety devices, and the radiometric survey obtained a maximum ambient equivalent dose rate of 0.26 mSv/h, which exempts it from the radiological protection requirements of the International Atomic Energy Agency. The irradiator characterization enables us to perform radiobiological experiments, and assists or even replaces traditional therapy equipment (e.g., linear accelerators) for cell and small animal irradiation, especially in early research stages.

  17. Circular RNAs are abundant, conserved, and associated with ALU repeats

    PubMed Central

    Jeck, William R.; Sorrentino, Jessica A.; Wang, Kai; Slevin, Michael K.; Burd, Christin E.; Liu, Jinze; Marzluff, William F.; Sharpless, Norman E.

    2013-01-01

    Circular RNAs composed of exonic sequence have been described in a small number of genes. Thought to result from splicing errors, circular RNA species possess no known function. To delineate the universe of endogenous circular RNAs, we performed high-throughput sequencing (RNA-seq) of libraries prepared from ribosome-depleted RNA with or without digestion with the RNA exonuclease, RNase R. We identified >25,000 distinct RNA species in human fibroblasts that contained non-colinear exons (a “backsplice”) and were reproducibly enriched by exonuclease degradation of linear RNA. These RNAs were validated as circular RNA (ecircRNA), rather than linear RNA, and were more stable than associated linear mRNAs in vivo. In some cases, the abundance of circular molecules exceeded that of associated linear mRNA by >10-fold. By conservative estimate, we identified ecircRNAs from 14.4% of actively transcribed genes in human fibroblasts. Application of this method to murine testis RNA identified 69 ecircRNAs in precisely orthologous locations to human circular RNAs. Of note, paralogous kinases HIPK2 and HIPK3 produce abundant ecircRNA from their second exon in both humans and mice. Though HIPK3 circular RNAs contain an AUG translation start, it and other ecircRNAs were not bound to ribosomes. Circular RNAs could be degraded by siRNAs and, therefore, may act as competing endogenous RNAs. Bioinformatic analysis revealed shared features of circularized exons, including long bordering introns that contained complementary ALU repeats. These data show that ecircRNAs are abundant, stable, conserved and nonrandom products of RNA splicing that could be involved in control of gene expression. PMID:23249747

  18. Synchronous-digitization for Video Rate Polarization Modulated Beam Scanning Second Harmonic Generation Microscopy.

    PubMed

    Sullivan, Shane Z; DeWalt, Emma L; Schmitt, Paul D; Muir, Ryan M; Simpson, Garth J

    2015-03-09

    Fast beam-scanning non-linear optical microscopy, coupled with fast (8 MHz) polarization modulation and analytical modeling, have enabled simultaneous nonlinear optical Stokes ellipsometry (NOSE) and linear Stokes ellipsometry imaging at video rate (15 Hz). NOSE enables recovery of the complex-valued Jones tensor that describes the polarization-dependent observables, in contrast to polarimetry, in which the polarization state of the exciting beam is recorded. Each data acquisition consists of 30 images (10 for each detector, with three detectors operating in parallel), each of which corresponds to polarization-dependent results. Processing of this image set by linear fitting contracts down each set of 10 images to a set of 5 parameters for each detector in second harmonic generation (SHG) and three parameters for the transmittance of the fundamental laser beam. Using these parameters, it is possible to recover the Jones tensor elements of the sample at video rate. Video rate imaging is enabled by performing synchronous digitization (SD), in which a PCIe digital oscilloscope card is synchronized to the laser (the laser is the master clock). Fast polarization modulation was achieved by modulating an electro-optic modulator synchronously with the laser and digitizer, with a simple sine-wave at 1/10th the period of the laser, producing a repeating pattern of 10 polarization states. This approach was validated using Z-cut quartz, and NOSE microscopy was performed for micro-crystals of naproxen.
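    A rough sketch of the per-pixel linear fit that contracts ten polarization states to five parameters, assuming a DC-plus-two-harmonics description of the modulation response (the paper's exact basis may differ); the signal values are synthetic.

```python
import numpy as np

phi = 2 * np.pi * np.arange(10) / 10          # ten polarization states per modulation cycle

# Assumed 5-parameter basis: DC plus first and second harmonics of the modulation.
D = np.column_stack([np.ones_like(phi),
                     np.cos(phi), np.sin(phi),
                     np.cos(2 * phi), np.sin(2 * phi)])

rng = np.random.default_rng(3)
true_params = np.array([1.0, 0.3, -0.1, 0.2, 0.05])
signal = D @ true_params + 0.01 * rng.standard_normal(10)  # one pixel across the 10 images

params, *_ = np.linalg.lstsq(D, signal, rcond=None)        # contract 10 values to 5 parameters
print(params)
```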

  19. Synchronous-digitization for video rate polarization modulated beam scanning second harmonic generation microscopy

    NASA Astrophysics Data System (ADS)

    Sullivan, Shane Z.; DeWalt, Emma L.; Schmitt, Paul D.; Muir, Ryan D.; Simpson, Garth J.

    2015-03-01

    Fast beam-scanning non-linear optical microscopy, coupled with fast (8 MHz) polarization modulation and analytical modeling, have enabled simultaneous nonlinear optical Stokes ellipsometry (NOSE) and linear Stokes ellipsometry imaging at video rate (15 Hz). NOSE enables recovery of the complex-valued Jones tensor that describes the polarization-dependent observables, in contrast to polarimetry, in which the polarization state of the exciting beam is recorded. Each data acquisition consists of 30 images (10 for each detector, with three detectors operating in parallel), each of which corresponds to polarization-dependent results. Processing of this image set by linear fitting contracts down each set of 10 images to a set of 5 parameters for each detector in second harmonic generation (SHG) and three parameters for the transmittance of the fundamental laser beam. Using these parameters, it is possible to recover the Jones tensor elements of the sample at video rate. Video rate imaging is enabled by performing synchronous digitization (SD), in which a PCIe digital oscilloscope card is synchronized to the laser (the laser is the master clock). Fast polarization modulation was achieved by modulating an electro-optic modulator synchronously with the laser and digitizer, with a simple sine-wave at 1/10th the period of the laser, producing a repeating pattern of 10 polarization states. This approach was validated using Z-cut quartz, and NOSE microscopy was performed for micro-crystals of naproxen.

  20. Population Structure, Diversity and Trait Association Analysis in Rice (Oryza sativa L.) Germplasm for Early Seedling Vigor (ESV) Using Trait Linked SSR Markers

    PubMed Central

    Anandan, Annamalai; Anumalla, Mahender; Pradhan, Sharat Kumar; Ali, Jauhar

    2016-01-01

    Early seedling vigor (ESV) is an essential trait for direct seeded rice to dominate and smother weed growth. In this regard, 629 rice genotypes were studied for their morphological and physiological responses in the field under direct seeded aerobic conditions on the 14th, 28th and 56th days after sowing (DAS). It was determined that the early observations taken on the 14th and 28th DAS were reliable estimators for studying ESV as compared to the 56th DAS. Further, 96 genotypes were selected from the 629 by principal component (PCA) and discriminant function analyses. The selected genotypes were then used to decipher the pattern of genetic diversity, both phenotypic and genotypic, using ESV QTL-linked simple sequence repeat (SSR) markers. To assess the genetic structure, model- and distance-based approaches were used. Genotyping of the 96 rice lines using 39 polymorphic SSRs produced a total of 128 alleles with a polymorphism information content (PIC) value of 0.24. The model-based population structure approach grouped the accessions into two distinct populations, whereas the unrooted tree grouped the genotypes into three clusters. Both approaches clearly distinguished the early vigor genotypes from the non-early vigor genotypes. Association analysis revealed that 16 and 10 SSRs showed significant association with ESV traits by the general linear model (GLM) and mixed linear model (MLM) approaches, respectively. Marker alleles on chromosome 2 were associated with shoot dry weight on 28 DAS and with vigor index on 14 and 28 DAS. Improvement in the rate of seedling growth will be useful for identifying rice genotypes amenable to direct seeded conditions through marker-assisted selection. PMID:27031620

  1. Mouth opening in patients irradiated for head and neck cancer: a prospective repeated measures study.

    PubMed

    Kamstra, J I; Dijkstra, P U; van Leeuwen, M; Roodenburg, J L N; Langendijk, J A

    2015-05-01

    Aims of this prospective cohort study were (1) to analyze the course of mouth opening up to 48 months post-radiotherapy (RT), (2) to assess risk factors predicting decrease in mouth opening, and (3) to develop a multivariable prediction model for change in mouth opening in a large sample of patients irradiated for head and neck cancer. Mouth opening was measured prior to RT (baseline) and at 6, 12, 18, 24, 36, and 48 months post-RT. The primary outcome variable was mouth opening. Potential risk factors were entered into a linear mixed model analysis (manual backward-stepwise elimination) to create a multivariable prediction model. The interaction terms between time and risk factors that were significantly related to mouth opening were explored. The study population consisted of 641 patients: 70.4% male, mean age at baseline 62.3 years (sd 12.5). Primary tumors were predominantly located in the oro- and nasopharynx (25.3%) and oral cavity (20.6%). Mean mouth opening at baseline was 38.7 mm (sd 10.8). Six months post-RT, mean mouth opening was smallest, 36.7 mm (sd 10.0). In the linear mixed model analysis, mouth opening was statistically predicted by the location of the tumor, natural logarithm of time post-RT in months (Ln (months)), gender, baseline mouth opening, and baseline age. All main effects interacted with Ln (months). The mean mouth opening decreased slightly over time. Mouth opening was predicted by tumor location, time, gender, baseline mouth opening, and age. The model can be used to predict mouth opening. Copyright © 2015 Elsevier Ltd. All rights reserved.
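    A minimal sketch of a random-intercept linear mixed model with Ln(months) and its interaction with baseline mouth opening, in the spirit of the analysis described above; the simulated cohort, effect sizes, and reduced model formula are assumptions rather than the study's full specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_patients, visits = 60, [6, 12, 18, 24, 36, 48]
months = np.tile(visits, n_patients).astype(float)
patient = np.repeat(np.arange(n_patients), len(visits))
baseline = np.repeat(rng.normal(38.7, 10.8, n_patients), len(visits))  # baseline opening (mm)
subject_effect = np.repeat(rng.normal(0, 3, n_patients), len(visits))  # random intercepts

df = pd.DataFrame({"patient": patient, "months": months, "baseline": baseline})
df["mouth"] = 0.9 * baseline - 1.5 * np.log(months) + subject_effect + rng.normal(0, 2, len(df))

# Random-intercept mixed model: Ln(months), baseline, and their interaction.
fit = smf.mixedlm("mouth ~ np.log(months) * baseline", df, groups=df["patient"]).fit()
print(fit.params)
```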

  2. A bifunctional amorphous polymer exhibiting equal linear and circular photoinduced birefringences.

    PubMed

    Royes, Jorge; Provenzano, Clementina; Pagliusi, Pasquale; Tejedor, Rosa M; Piñol, Milagros; Oriol, Luis

    2014-11-01

    The large and reversible photoinduced linear and circular birefringences in azo-compounds are at the basis of the interest in these materials, which are potentially useful for several applications. Since the onset of the linear and circular anisotropies relies on orientational processes, which typically occur on the molecular and supramolecular length scale, respectively, a circular birefringence at least one order of magnitude lower than the linear one is usually observed. Here, the synthesis and characterization of an amorphous polymer with a dimeric repeating unit containing a cyanoazobenzene and a cyanobiphenyl moiety are reported, in which identical optical linear and circular birefringences are induced for proper light dose and ellipticity. A pump-probe technique and an analytical method based on the Stokes-Mueller formalism are used to investigate the photoinduced effects and to evaluate the anisotropies. The peculiar photoresponse of the polymer makes it a good candidate for applications in smart functional devices. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Chemical synthesis of the tetrasaccharide repeating unit of the O-polysaccharide isolated from Azospirillum brasilense SR80.

    PubMed

    Sarkar, Vikramjit; Mukhopadhyay, Balaram

    2015-04-10

    A linear strategy has been developed for the synthesis of the tetrasaccharide repeating unit of the O-polysaccharide from Azospirillum brasilense SR80. Stepwise glycosylation of the rationally protected thioglycoside donors activated by NIS in the presence of La(OTf)3 furnished the target tetrasaccharide. The glycosylation reactions resulted in the formation of the desired linkage with absolute stereoselectivity and afforded the required derivatives in good to excellent yields. The phthalimido group has been used as the precursor of the desired acetamido group to meet the requirement of 1,2-trans glycosidic linkage. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Surpassing the no-cloning limit with a heralded hybrid linear amplifier for coherent states

    PubMed Central

    Haw, Jing Yan; Zhao, Jie; Dias, Josephine; Assad, Syed M.; Bradshaw, Mark; Blandino, Rémi; Symul, Thomas; Ralph, Timothy C.; Lam, Ping Koy

    2016-01-01

    The no-cloning theorem states that an unknown quantum state cannot be cloned exactly and deterministically due to the linearity of quantum mechanics. Associated with this theorem is the quantitative no-cloning limit that sets an upper bound to the quality of the generated clones. However, this limit can be circumvented by abandoning determinism and using probabilistic methods. Here, we report an experimental demonstration of probabilistic cloning of arbitrary coherent states that clearly surpasses the no-cloning limit. Our scheme is based on a hybrid linear amplifier that combines an ideal deterministic linear amplifier with a heralded measurement-based noiseless amplifier. We demonstrate the production of up to five clones with the fidelity of each clone clearly exceeding the corresponding no-cloning limit. Moreover, since successful cloning events are heralded, our scheme has the potential to be adopted in quantum repeater, teleportation and computing applications. PMID:27782135

  5. Multilevel linear modelling of the response-contingent learning of young children with significant developmental delays.

    PubMed

    Raab, Melinda; Dunst, Carl J; Hamby, Deborah W

    2018-02-27

    The purpose of the study was to isolate the sources of variations in the rates of response-contingent learning among young children with multiple disabilities and significant developmental delays randomly assigned to contrasting types of early childhood intervention. Multilevel, hierarchical linear growth curve modelling was used to analyze four different measures of child response-contingent learning where repeated child learning measures were nested within individual children (Level-1), children were nested within practitioners (Level-2), and practitioners were nested within the contrasting types of intervention (Level-3). Findings showed that sources of variations in rates of child response-contingent learning were associated almost entirely with type of intervention after the variance associated with differences in practitioners nested within groups was accounted for. Rates of child learning were greater among children whose existing behaviours were used as the building blocks for promoting child competence (asset-based practices) compared to children for whom the focus of intervention was promoting child acquisition of missing skills (needs-based practices). The methods of analysis illustrate a practical approach to clustered data analysis and the presentation of results in ways that highlight sources of variations in the rates of response-contingent learning among young children with multiple developmental disabilities and significant developmental delays. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.

  6. Splash dispersal of Phyllosticta citricarpa conidia from infected citrus fruit

    PubMed Central

    Perryman, S. A. M.; Clark, S. J.; West, J. S.

    2014-01-01

    Rain-splash dispersal of Phyllosticta citricarpa (syn. Guignardia citricarpa) conidia (pycnidiospores) from infected oranges was studied in still air and combined with wind. High power microscopy demonstrated the presence of conidia in splash droplets from diseased oranges, which exuded conidia for over one hour during repeated wetting. The largest (5 mm) incident drops produced the highest splashes (up to 41.0 cm). A linear-by-quadratic surface model predicted highest splashes to be 41.91 cm at a horizontal distance of 25.97 cm from the target orange. Large splash droplets contained most conidia (4–5.5 mm splashes averaged 308 conidia), but were splashed <30 cm horizontal distance. Most (80–90%) splashes were <1 mm diameter but carried only 0–4 conidia per droplet. In multiple splash experiments, splashes combined to reach higher maxima (up to 61.7 cm; linear-by-quadratic surface model prediction, 62.1 cm) than in the single splash experiments. In combination with wind, higher wind speeds carried an increasing proportion of splashes downwind travelling horizontally at least 8 m at the highest wind speed tested (7 m/s), due to a small proportion of droplets (<1 mm) being aerosolised. These experiments suggest that P. citricarpa conidia can be dispersed from infected oranges by splashes of water in rainfall events. PMID:25298272

  7. Splash dispersal of Phyllosticta citricarpa conidia from infected citrus fruit.

    PubMed

    Perryman, S A M; Clark, S J; West, J S

    2014-10-09

    Rain-splash dispersal of Phyllosticta citricarpa (syn. Guignardia citricarpa) conidia (pycnidiospores) from infected oranges was studied in still air and combined with wind. High power microscopy demonstrated the presence of conidia in splash droplets from diseased oranges, which exuded conidia for over one hour during repeated wetting. The largest (5 mm) incident drops produced the highest splashes (up to 41.0 cm). A linear-by-quadratic surface model predicted highest splashes to be 41.91 cm at a horizontal distance of 25.97 cm from the target orange. Large splash droplets contained most conidia (4-5.5 mm splashes averaged 308 conidia), but were splashed <30 cm horizontal distance. Most (80-90%) splashes were <1 mm diameter but carried only 0-4 conidia per droplet. In multiple splash experiments, splashes combined to reach higher maxima (up to 61.7 cm; linear-by-quadratic surface model prediction, 62.1 cm) than in the single splash experiments. In combination with wind, higher wind speeds carried an increasing proportion of splashes downwind travelling horizontally at least 8 m at the highest wind speed tested (7 m/s), due to a small proportion of droplets (<1 mm) being aerosolised. These experiments suggest that P. citricarpa conidia can be dispersed from infected oranges by splashes of water in rainfall events.

  8. An equivalent unbalance identification method for the balancing of nonlinear squeeze-film damped rotordynamic systems

    NASA Astrophysics Data System (ADS)

    Torres Cedillo, Sergio G.; Bonello, Philip

    2016-01-01

    The high pressure (HP) rotor in an aero-engine assembly cannot be accessed under operational conditions because of the restricted space for instrumentation and high temperatures. This motivates the development of a non-invasive inverse problem approach for unbalance identification and balancing, requiring prior knowledge of the structure. Most such methods in the literature necessitate linear bearing models, making them unsuitable for aero-engine applications which use nonlinear squeeze-film damper (SFD) bearings. A previously proposed inverse method for nonlinear rotating systems was highly limited in its application (e.g. assumed circular centered SFD orbits). The methodology proposed in this paper overcomes such limitations. It uses the Receptance Harmonic Balance Method (RHBM) to generate the backward operator using measurements of the vibration at the engine casing, provided there is at least one linear connection between rotor and casing, apart from the nonlinear connections. A least-squares solution yields the equivalent unbalance distribution in prescribed planes of the rotor, which is consequently used to balance it. The method is validated on distinct rotordynamic systems using simulated casing vibration readings. The method is shown to provide effective balancing under hitherto unconsidered practical conditions. The repeatability of the method, as well as its robustness to noise, model uncertainty and balancing errors, are satisfactorily demonstrated and the limitations of the process discussed.

  9. Application of the balanced scorecard to an academic medical center in Taiwan: the effect of warning systems on improvement of hospital performance.

    PubMed

    Chen, Hsueh-Fen; Hou, Ying-Hui; Chang, Ray-E

    2012-10-01

    The balanced scorecard (BSC) is considered to be a useful tool for management in a variety of business environments. The purpose of this article is to utilize the experimental data produced by the incorporation and implementation of the BSC in hospitals and to investigate the effects of the BSC red light tracking warning system on performance improvement. This research was designed to be a retrospective follow-up study. The linear mixed model was applied for correcting the correlated errors. The data used in this study were secondary data collected by repeated measurements taken between 2004 and 2010 by 67 first-line medical departments of a public academic medical center in Taipei, Taiwan. The linear mixed model of analysis was applied for multilevel analysis. Improvements were observed with various time lags, from the subsequent month to three months after red light warning. During follow-up, the red light warning system more effectively improved controllable costs, infection rates, and the medical records completion rate. This further suggests that follow-up management promotes an enhancing and supportive effect to the red light warning. The red light follow-up management of BSC is an effective and efficient tool where improvement depends on ongoing and consistent attention in a continuing effort to better administer medical care and control costs. Copyright © 2012. Published by Elsevier B.V.

  10. A New Method for Non-destructive Measurement of Biomass, Growth Rates, Vertical Biomass Distribution and Dry Matter Content Based on Digital Image Analysis

    PubMed Central

    Tackenberg, Oliver

    2007-01-01

    Background and Aims Biomass is an important trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive. Thus, they do not allow the development of individual plants to be followed and they require many individuals to be cultivated for repeated measurements. Non-destructive methods do not have these limitations. Here, a non-destructive method based on digital image analysis is presented, addressing not only above-ground fresh biomass (FBM) and oven-dried biomass (DBM), but also vertical biomass distribution as well as dry matter content (DMC) and growth rates. Methods Scaled digital images of the plants' silhouettes were taken for 582 individuals of 27 grass species (Poaceae). Above-ground biomass and DMC were measured using destructive methods. Using the image analysis software Zeiss KS 300, the projected area and the proportion of greenish pixels were calculated, and generalized linear models (GLMs) were developed with destructively measured parameters as dependent variables and parameters derived from image analysis as independent variables. A bootstrap analysis was performed to assess the number of individuals required for re-calibration of the models. Key Results The results of the developed models showed no systematic errors compared with traditionally measured values and explained most of their variance (R2 ≥ 0·85 for all models). The presented models can be directly applied to herbaceous grasses without further calibration. Applying the models to other growth forms might require a re-calibration, which can be based on only 10–20 individuals for FBM or DMC and on 40–50 individuals for DBM. Conclusions The methods presented are time and cost effective compared with traditional methods, especially if development or growth rates are to be measured repeatedly. Hence, they offer an alternative way of determining biomass, especially as they are non-destructive and address not only FBM and DBM, but also vertical biomass distribution and DMC. PMID:17353204
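    A small sketch of the image-analysis step, extracting projected area and the proportion of greenish pixels from a scaled RGB image using a crude green-dominance mask as a stand-in for the KS 300 routine; these two predictors would then enter a calibration model against destructively measured biomass. The image, scale, and threshold below are hypothetical.

```python
import numpy as np

def projected_area_and_greenness(rgb, px_per_cm):
    """Projected plant area (cm^2) and fraction of greenish pixels in a scaled image."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    plant = (g > r) & (g > b) & (g > 40)          # crude plant-silhouette mask
    area_cm2 = plant.sum() / px_per_cm ** 2
    return area_cm2, plant.mean()

# Hypothetical 200x200 image: a green block on a pale background.
img = np.full((200, 200, 3), 200, dtype=np.uint8)
img[50:150, 80:120] = (30, 160, 40)
print(projected_area_and_greenness(img, px_per_cm=20))
```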

  11. Effect of water-based recovery on blood lactate removal after high-intensity exercise.

    PubMed

    Lucertini, Francesco; Gervasi, Marco; D'Amen, Giancarlo; Sisti, Davide; Rocchi, Marco Bruno Luigi; Stocchi, Vilberto; Benelli, Piero

    2017-01-01

    This study assessed the effectiveness of water immersion to the shoulders in enhancing blood lactate removal during active and passive recovery after short-duration high-intensity exercise. Seventeen cyclists underwent active water- and land-based recoveries and passive water- and land-based recoveries. The recovery conditions lasted 31 minutes each and started after the identification of each cyclist's blood lactate accumulation peak, induced by a 30-second all-out sprint on a cycle ergometer. Active recoveries were performed on a cycle ergometer at 70% of the oxygen consumption corresponding to the lactate threshold (the control for the intensity was oxygen consumption), while passive recoveries were performed with subjects at rest and seated on the cycle ergometer. Blood lactate concentration was measured 8 times during each recovery condition and lactate clearance was modeled as a negative exponential function using non-linear regression. Actual active recovery intensity was compared to the target intensity (one sample t-test) and passive recovery intensities were compared between environments (paired sample t-tests). Non-linear regression parameters (coefficients of the exponential decay of lactate; predicted resting lactates; predicted delta decreases in lactate) were compared between environments (linear mixed model analyses for repeated measures) separately for the active and passive recovery modes. Active recovery intensities did not differ significantly from the target oxygen consumption, whereas passive recovery resulted in a slightly lower oxygen consumption when performed while immersed in water rather than on land. The exponential decay of blood lactate was not significantly different in water- or land-based recoveries in either active or passive recovery conditions. In conclusion, water immersion at 29°C would not appear to be an effective practice for improving post-exercise lactate removal in either the active or passive recovery modes.
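    A brief sketch of the non-linear regression step, fitting a negative exponential to hypothetical blood lactate samples to recover the decay coefficient, the predicted resting lactate, and the predicted delta decrease described above; the values and time points are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 3, 6, 10, 15, 20, 25, 31], dtype=float)        # minutes into recovery
lactate = np.array([12.1, 10.8, 9.2, 7.1, 5.3, 4.0, 3.1, 2.4])  # mmol/L, hypothetical

def decay(t, resting, delta, k):
    """Negative exponential: resting lactate plus a decaying excess of size delta."""
    return resting + delta * np.exp(-k * t)

(resting, delta, k), _ = curve_fit(decay, t, lactate, p0=(1.5, 10.0, 0.08))
print(resting, delta, k)
```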

  12. Computed tomography of x-ray images using neural networks

    NASA Astrophysics Data System (ADS)

    Allred, Lloyd G.; Jones, Martin H.; Sheats, Matthew J.; Davis, Anthony W.

    2000-03-01

    Traditional CT reconstruction is done using the technique of Filtered Backprojection (FB). While this technique is widely employed in industrial and medical applications, it is not generally understood that FB has a fundamental flaw. The Gibbs phenomenon states that any Fourier reconstruction will produce errors in the vicinity of all discontinuities, and that the error will equal 28 percent of the discontinuity. A number of years back, one of the authors proposed a biological perception model whereby biological neural networks perceive 3D images from stereo vision. The perception model purports an internal hard-wired neural network which emulates the external physical process. A process is repeated whereby erroneous unknown internal values are used to generate an emulated signal, which is compared to externally sensed data, generating an error signal. Feedback from the error signal is then used to update the erroneous internal values. The process is repeated until the error signal no longer decreases. It was soon realized that the same method could be used to obtain CT from x-rays without having to do Fourier transforms. Neural networks have the additional potential for handling non-linearities and missing data. The technique has been applied to some coral images, collected at the Los Alamos high-energy x-ray facility. The initial images show considerable promise, in some instances showing more detail than the FB images obtained from the same data. Although routine production using this new method would require a massively parallel computer, the method shows promise, especially where refined detail is required.
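    The authors' method is a neural-network feedback loop; the sketch below illustrates only the generic emulate-compare-update cycle on a stand-in linear forward model (not a real CT geometry or the authors' network), stopping once the error signal no longer decreases.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pixels, n_measurements = 64, 120
A = rng.standard_normal((n_measurements, n_pixels))   # stand-in linear forward model
x_true = rng.random(n_pixels)                         # unknown internal values (the image)
b = A @ x_true                                        # externally sensed data

x = np.zeros(n_pixels)                                # start from erroneous internal values
step = 1.0 / np.linalg.norm(A, 2) ** 2
prev_err = np.inf
for _ in range(3000):                                 # cap on feedback iterations
    residual = A @ x - b                              # emulated signal minus sensed data
    err = np.linalg.norm(residual)
    if err >= prev_err:                               # stop once the error no longer decreases
        break
    prev_err = err
    x -= step * (A.T @ residual)                      # feed the error back into the internal values
    x = np.clip(x, 0, None)                           # keep the reconstruction non-negative

print(prev_err, np.abs(x - x_true).mean())
```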

  13. [Convergent origin of repeats in genes coding for globular proteins. An analysis of the factors determining the presence of inverted and symmetrical repeats].

    PubMed

    Solov'ev, V V; Kel', A E; Kolchanov, N A

    1989-01-01

    The factors determining the presence of inverted and symmetrical repeats in genes coding for globular proteins have been analysed. An interesting property of the genetic code was revealed in the analysis of symmetrical repeats: pairs of symmetrical codons correspond to pairs of amino acids with largely similar physical-chemical parameters. This property may explain the presence of symmetrical repeats and palindromes only in genes coding for beta-structural proteins, i.e. polypeptides in which amino acids with similar physical-chemical properties occupy symmetrical positions. A stochastic model of the evolution of polynucleotide sequences was used for the analysis of inverted repeats. The modelling demonstrated that constraints on the sequences (uneven frequencies of codon usage) alone are enough for nonrandom inverted repeats to arise in genes.

  14. [Standard sample preparation method for quick determination of trace elements in plastic].

    PubMed

    Yao, Wen-Qing; Zong, Rui-Long; Zhu, Yong-Fa

    2011-08-01

    A reference sample of electronic-information-product plastic containing heavy metals at known concentrations was prepared by the masterbatch method, its repeatability and precision were determined, and reference sample preparation procedures were established. X-ray fluorescence spectroscopy (XRF) was used to determine the repeatability and uncertainty of the analysis of heavy metals and bromine in the sample. The working curve and the measurement methods for the reference sample were established. The results showed that the method exhibited a very good linear relationship in the 200-2000 mg x kg(-1) concentration range for Hg, Pb, Cr, and Br and in the 20-200 mg x kg(-1) range for Cd, and that the repeatability of the analysis method over six replicates was good. In tests of the ICB288G and ICB288 circuit boards from the Mitsubishi Heavy Industry Company, the results agreed with the recommended values.

  15. Large-scale performance evaluation of Accu-Chek inform II point-of-care glucose meters.

    PubMed

    Jeong, Tae-Dong; Cho, Eun-Jung; Ko, Dae-Hyun; Lee, Woochang; Chun, Sail; Hong, Ki-Sook; Min, Won-Ki

    2016-12-01

    The aim of this study was to report the experience of a large-scale performance evaluation of 238 Accu-Chek Inform II point-of-care (POC) glucose meters in a single medical setting. The repeatability of 238 POC devices, the within-site imprecision of 12 devices, and the linearity of 49 devices were evaluated using glucose control solutions. The glucose results of 24 POC devices and the central laboratory were compared using patient samples. Mean concentration of the control solutions was 2.39 mmol/L for Level 1 and 16.52 mmol/L for Level 2. The pooled repeatability coefficient of variation (CV) of the 238 devices was 2.0% for Level 1 and 1.6% for Level 2. The pooled within-site imprecision CV and reproducibility CV of the 12 devices were 2.7% and 2.7% for Level 1, and 1.9% and 1.9% for Level 2, respectively. The test results of all 49 devices were linear within the analytical measurement range of 1.55-31.02 mmol/L. The correlation coefficient for individual POC devices ranged from 0.9967 to 0.9985. The total correlation coefficient for the 24 devices was 0.998. The Accu-Chek Inform II POC blood glucose meters performed well in terms of precision, linearity, and correlation evaluations. Consensus guidelines for the large-scale performance evaluations of POC devices are required.
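
    For readers who want to reproduce a pooled repeatability CV of the kind reported above, the following minimal Python sketch (with hypothetical replicate data and a hypothetical helper name) pools the per-device repeatability variances and expresses the result relative to the grand mean.

      import numpy as np

      def pooled_repeatability_cv(replicates):
          # replicates: (n_devices, n_replicates) readings of one control level
          replicates = np.asarray(replicates, dtype=float)
          within_sd = replicates.std(axis=1, ddof=1)      # per-device repeatability SD
          pooled_sd = np.sqrt(np.mean(within_sd ** 2))    # pool variances across devices
          return 100.0 * pooled_sd / replicates.mean()    # CV in percent

      # hypothetical example: 5 devices, 10 replicate readings each (mmol/L)
      rng = np.random.default_rng(0)
      readings = rng.normal(2.39, 0.05, size=(5, 10))
      print(f"pooled repeatability CV = {pooled_repeatability_cv(readings):.1f}%")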

  16. The mitochondrial genome of Hydra oligactis (Cnidaria, Hydrozoa) sheds new light on animal mtDNA evolution and cnidarian phylogeny.

    PubMed

    Kayal, Ehsan; Lavrov, Dennis V

    2008-02-29

    The 16,314-nucleotide sequence of the linear mitochondrial DNA (mtDNA) molecule of Hydra oligactis (Cnidaria, Hydrozoa)--the first from the class Hydrozoa--has been determined. This sequence contains genes for 13 energy pathway proteins, small and large subunit rRNAs, and methionine and tryptophan tRNAs, as is typical for cnidarians. All genes have the same transcriptional orientation and their arrangement in the genome is similar to that of the jellyfish Aurelia aurita. In addition, a partial copy of cox1 is present at one end of the molecule in a transcriptional orientation opposite to the rest of the genes, forming part of an inverted terminal repeat characteristic of linear mtDNA and linear mitochondrial plasmids. The sequence close to at least one end of the molecule contains several homonucleotide runs as well as small inverted repeats that are able to form strong secondary structures and may be involved in mtDNA maintenance and expression. Phylogenetic analysis of mitochondrial genes of H. oligactis and other cnidarians supports the Medusozoa hypothesis but also suggests that Anthozoa may be paraphyletic, with octocorallians more closely related to the Medusozoa than to the Hexacorallia. The latter inference implies that Anthozoa is paraphyletic and that the polyp (rather than a medusa) is the ancestral body type in Cnidaria.

  17. Predicting future protection of respirator users: Statistical approaches and practical implications.

    PubMed

    Hu, Chengcheng; Harber, Philip; Su, Jing

    2016-01-01

    The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test to provide reasonable likelihood that a worker will be adequately protected in the future; and to optimizing the repeat fit factor test intervals individually for each user for cost-effective testing.
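
    A minimal Python sketch of the underlying idea, conditioning a future measurement on initial results under a joint normal model, is given below. It assumes a simple random-intercept-plus-day variance-component structure with already-estimated parameters, which is an illustrative simplification rather than the exact model of the study; the function name and numbers are hypothetical.

      import numpy as np

      def predict_future_fit(y_initial, mu, var_subject, var_day, var_error):
          # Assumed model: y = mu + subject + day + error (all on the log10 fit-factor scale).
          # y_initial: two same-day measurements from the initial session.
          S = np.array([
              [var_subject + var_day + var_error, var_subject + var_day,             var_subject],
              [var_subject + var_day,             var_subject + var_day + var_error, var_subject],
              [var_subject,                       var_subject,                       var_subject + var_day + var_error],
          ])
          S11, S12 = S[:2, :2], S[:2, 2]
          diff = np.asarray(y_initial, dtype=float) - mu
          cond_mean = mu + S12 @ np.linalg.solve(S11, diff)      # expected future value
          cond_var = S[2, 2] - S12 @ np.linalg.solve(S11, S12)   # remaining uncertainty
          return cond_mean, np.sqrt(cond_var)

      # hypothetical variance components and initial log10 fit factors
      print(predict_future_fit([2.2, 2.4], mu=2.0, var_subject=0.09, var_day=0.04, var_error=0.02))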

  18. Reliability and criterion-related validity of a new repeated agility test

    PubMed Central

    Makni, E; Jemni, M; Elloumi, M; Chamari, K; Nabli, MA; Padulo, J; Moalla, W

    2016-01-01

    The study aimed to assess the reliability and the criterion-related validity of a new repeated sprint T-test (RSTT) that includes intense multidirectional intermittent efforts. The RSTT consisted of 7 maximal repeated executions of the agility T-test with 25 s of passive recovery rest in between. Forty-five team sports players performed two RSTTs separated by 3 days to assess the reliability of best time (BT) and total time (TT) of the RSTT. The intra-class correlation coefficient analysis revealed a high relative reliability between test and retest for BT and TT (>0.90). The standard error of measurement (<0.50) showed that the RSTT has a good absolute reliability. The minimal detectable change values for BT and TT related to the RSTT were 0.09 s and 0.58 s, respectively. To check the criterion-related validity of the RSTT, players performed a repeated linear sprint (RLS) and a repeated sprint with changes of direction (RSCD). Significant correlations between the BT and TT of the RLS, RSCD and RSTT were observed (p<0.001). The RSTT is, therefore, a reliable and valid measure of the intermittent repeated sprint agility performance. As this ability is required in all team sports, it is suggested that team sports coaches, fitness coaches and sports scientists consider this test in their training follow-up. PMID:27274109
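
    The reliability statistics named above (ICC, SEM, and minimal detectable change) can be computed from test-retest data as in the following Python sketch; an ICC(3,1)-type two-way consistency model is assumed, and the function name and example times are hypothetical.

      import numpy as np

      def icc_sem_mdc(test, retest):
          # Two-way ANOVA decomposition for a two-trial test-retest design.
          y = np.column_stack([test, retest]).astype(float)      # (n_subjects, 2)
          n, k = y.shape
          grand = y.mean()
          ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()    # between subjects
          ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()    # between trials
          ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols  # residual
          ms_rows = ss_rows / (n - 1)
          ms_err = ss_err / ((n - 1) * (k - 1))
          icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)   # ICC(3,1)
          sem = np.sqrt(ms_err)                                     # standard error of measurement
          mdc95 = 1.96 * np.sqrt(2) * sem                           # minimal detectable change
          return icc, sem, mdc95

      # hypothetical total times (s) for 8 players on day 1 and day 2
      print(icc_sem_mdc([62.1, 60.4, 58.9, 63.0, 61.2, 59.8, 60.9, 62.5],
                        [61.8, 60.1, 59.2, 62.7, 61.5, 59.6, 61.1, 62.2]))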

  19. Performance characteristics of a low-cost, field-deployable miniature CCD spectrometer

    PubMed Central

    Coles, Simon; Nimmo, Malcolm; Worsfold, Paul J.

    2000-01-01

    Miniature spectrometers incorporating array detectors are becoming a viable, low-cost option for field and process deployments. The performance characteristics of one such instrument are reported and compared with those of a conventional benchtop instrument. The parameters investigated were wavelength repeatability, photometric linearity, instrumental noise (photometric precision) and instrumental drift. PMID:18924863

  20. Programmable Quantum Photonic Processor Using Silicon Photonics

    DTIC Science & Technology

    2017-04-01

    quantum information processing and quantum sensing, ranging from linear optics quantum computing and quantum simulation to quantum ... transformers have driven experimental and theoretical advances in quantum simulation, cluster-state quantum computing, all-optical quantum repeaters ... neuromorphic computing, and other applications. In addition, we developed new schemes for ballistic quantum computation, new methods for

  1. The genome and variation of Bacillus anthracis

    PubMed Central

    Keim, Paul; Gruendike, Jeffrey M.; Klevytska, Alexandra M.; Schupp, James M.; Challacombe, Jean; Okinaka, Richard

    2009-01-01

    The Bacillus anthracis genome reflects its close genetic ties to B. cereus and B. thuringiensis but has been shaped by its own unique biology and evolutionary forces. The genome comprises a chromosome and two large virulence plasmids, pXO1 and pXO2. The chromosome is mostly co-linear among B. anthracis strains and even with the closest near-neighbor strains. An exception to this pattern has been observed in a large inversion in an attenuated strain, suggesting that chromosome co-linearity is important to the natural biology of this pathogen. In general, there are few polymorphic nucleotides among B. anthracis strains, reflecting the short evolutionary time since its derivation from a B. cereus-like ancestor. The exceptions to this lack of diversity are the variable number tandem repeat (VNTR) loci that exist in genic and non-genic regions of the chromosome and both plasmids. Their variation is associated with high mutability that is driven by rapid insertion and deletion of the repeats within an array. A notable example is found in the vrrC locus, which is homologous to known DNA translocase genes from other bacteria. PMID:19729033

  2. CAG repeat lengths ≥335 attenuate the phenotype in the R6/2 Huntington’s disease transgenic mouse

    PubMed Central

    Dragatsis, I.; Goldowitz, D.; Del Mar, N.; Deng, Y.P.; Meade, C.A.; Liu, Li; Sun, Z.; Dietrich, P.; Yue, J.; Reiner, A.

    2015-01-01

    With spontaneous elongation of the CAG repeat in the R6/2 transgene to ≥335, resulting in a transgene protein too large for passive entry into nuclei via the nuclear pore, we observed an abrupt increase in lifespan to >20 weeks, compared to the 12 weeks common in R6/2 mice with 150 repeats. In the ≥335 CAG mice, large ubiquitinated aggregates of mutant protein were common in neuronal dendrites and perikaryal cytoplasm, but intranuclear aggregates were small and infrequent. Message and protein for the ≥335 CAG transgene were reduced to one-third that in 150 CAG R6/2 mice. Neurological and neurochemical abnormalities were delayed in onset and less severe than in 150 CAG R6/2 mice. These findings suggest that polyQ length and pathogenicity in Huntington’s disease may not be linearly related, and pathogenicity may be less severe with extreme repeats. Both diminished mutant protein and reduced nuclear entry may contribute to phenotype attenuation. PMID:19027857

  3. Incorporating Auditory Models in Speech/Audio Applications

    NASA Astrophysics Data System (ADS)

    Krishnamoorthi, Harish

    2011-12-01

    Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table that stores representative auditory patterns. The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, which ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.

  4. Influence of androgen receptor CAG polymorphism on sexual function recovery after testosterone therapy in late-onset hypogonadism.

    PubMed

    Tirabassi, Giacomo; Corona, Giovanni; Biagioli, Andrea; Buldreghini, Eddi; delli Muti, Nicola; Maggi, Mario; Balercia, Giancarlo

    2015-02-01

    Androgen receptor (AR) CAG polymorphism has been found to influence sexual function. However, no study has evaluated its potential to condition sexual function recovery after testosterone replacement therapy (TRT) in a large cohort of hypogonadic subjects. To evaluate the role of this polymorphism in sexual function improvement after TRT in late-onset hypogonadism (LOH). Seventy-three men affected by LOH were retrospectively considered. Evaluations were performed before TRT started (time 0) and before the sixth undecanoate testosterone injection. International Index of Erectile Function (IIEF) questionnaire (erectile function [EF], orgasmic function [OF], sexual desire [SD], intercourse satisfaction [IS], overall satisfaction [OS], and total IIEF-15 score); total and free testosterone and estradiol; AR gene CAG repeat number. TRT induced a significant increase in total and free testosterone and estradiol. All IIEF domains significantly improved after TRT. AR CAG repeats negatively and significantly correlated with all the variations (Δ-) of sexual function domains, except for Δ-OS. Conversely, Δ-total testosterone was found to be positively and significantly correlated with sexual function domain variations, except for Δ-IS and Δ-OS. Δ-estradiol did not correlate significantly with any of the variations of sexual function domains. After inclusion in generalized linear models, the number of AR gene CAG triplets was found to be independently and negatively associated with Δ-EF, Δ-SD, Δ-IS, and Δ-Total IIEF-15 score, whereas Δ-total testosterone was independently and positively associated with Δ-EF, Δ-OF, Δ-SD, and Δ-Total IIEF-15 score. However, after including time 0 total testosterone in the model, AR gene CAG triplets remained independently and negatively associated only with Δ-EF and Δ-Total IIEF-15 score, whereas Δ-total testosterone was independently and positively associated only with Δ-EF. Longer length of AR gene CAG repeat tract seems to lower TRT-induced improvement of sexual function in LOH. © 2014 International Society for Sexual Medicine.

  5. Effect of repeated simulated clinical use and sterilization on the cutting efficiency and flexibility of Hyflex CM nickel-titanium rotary files.

    PubMed

    Seago, Scott T; Bergeron, Brian E; Kirkpatrick, Timothy C; Roberts, Mark D; Roberts, Howard W; Himel, Van T; Sabey, Kent A

    2015-05-01

    Recent nickel-titanium manufacturing processes have resulted in an alloy that remains in a twinned martensitic phase at operating temperature. This alloy has been shown to have increased flexibility with added tolerance to cyclic and torsional fatigue. The aim of this study was to assess the effect of repeated simulated clinical use and sterilization on cutting efficiency and flexibility of Hyflex CM rotary files. Cutting efficiency was determined by measuring the load required to maintain a constant feed rate while instrumenting simulated canals. Flexibility was determined by using a 3-point bending test. Files were autoclaved after each use according to the manufacturer's recommendations. Files were tested through 10 simulated clinical uses. For cutting efficiency, mean data were analyzed by using multiple factor analysis of variance and the Dunnett post hoc test (P < .05). For flexibility, mean data were analyzed by using Levene's Test of Equality of Error and a general linear model (P < .05). No statistically significant decrease in cutting efficiency was noted in groups 2, 5, 6, and 7. A statistically significant decrease in cutting efficiency was noted in groups 3, 4, 8, 9, and 10. No statistically significant decrease in flexibility was noted in groups 2, 3, and 7. A statistically significant decrease in flexibility was noted in groups 4, 5, 6, 8, 9, 10, and 11. Repeated simulated clinical use and sterilization showed no effect on cutting efficiency through 1 use and no effect on flexibility through 2 uses. Published by Elsevier Inc.

  6. Extortion under uncertainty: Zero-determinant strategies in noisy games

    NASA Astrophysics Data System (ADS)

    Hao, Dong; Rong, Zhihai; Zhou, Tao

    2015-05-01

    Repeated game theory has been one of the most prevalent tools for understanding long-running relationships, which are the foundation of human society. Recent works have revealed a new set of "zero-determinant" (ZD) strategies, which is an important advance in repeated games. A ZD strategy player can exert unilateral control on two players' payoffs. In particular, he can deterministically set the opponent's payoff or enforce an unfair linear relationship between the players' payoffs, thereby always seizing an advantageous share of payoffs. One of the limitations of the original ZD strategy, however, is that it does not capture the notion of robustness when the game is subjected to stochastic errors. In this paper, we propose a general model of ZD strategies for noisy repeated games and find that ZD strategies have high robustness against errors. We further derive the pinning strategy under noise, by which the ZD strategy player coercively sets the opponent's expected payoff to his desired level, although his payoff-control ability declines as the noise strength increases. Due to the uncertainty caused by noise, the ZD strategy player cannot ensure that his payoff is permanently higher than the opponent's, which implies that dominant extortions do not exist even under low noise. We show, however, that the ZD strategy player can still establish a novel kind of extortion, named contingent extortion, in which any increase of his own payoff always exceeds that of the opponent by a fixed percentage, and the conditions under which contingent extortions can be realized become more stringent as the noise grows stronger.
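
    The payoff control described above can be illustrated numerically. The Python sketch below (an illustration, not the authors' model) computes the long-run payoffs of two memory-one strategies in a noisy iterated prisoner's dilemma from the stationary distribution of the induced Markov chain, using the standard payoffs (R, S, T, P) = (3, 0, 5, 1) and the classic Press-Dyson extortion vector with chi = 3.

      import numpy as np

      def expected_payoffs(p, q, eps=0.05, payoffs=(3, 0, 5, 1)):
          # p, q: cooperation probabilities after outcomes (CC, CD, DC, DD),
          # each from that player's own point of view; eps is the execution error rate.
          R, S, T, P = payoffs
          p = np.asarray(p, dtype=float)
          q = np.asarray(q, dtype=float)
          p = (1 - eps) * p + eps * (1 - p)          # intended move flipped with probability eps
          q = (1 - eps) * q + eps * (1 - q)
          q = q[[0, 2, 1, 3]]                        # relabel Y's states in X's ordering
          M = np.empty((4, 4))
          for s in range(4):
              pc, qc = p[s], q[s]
              M[s] = [pc * qc, pc * (1 - qc), (1 - pc) * qc, (1 - pc) * (1 - qc)]
          w, v = np.linalg.eig(M.T)                  # stationary distribution of the chain
          pi = np.real(v[:, np.argmin(np.abs(w - 1))])
          pi = pi / pi.sum()
          return pi @ np.array([R, S, T, P]), pi @ np.array([R, T, S, P])

      # Press-Dyson extortion strategy (chi = 3) against an always-cooperating opponent
      zd = (11 / 13, 1 / 2, 7 / 26, 0)
      print(expected_payoffs(zd, (1, 1, 1, 1), eps=0.0))
      print(expected_payoffs(zd, (1, 1, 1, 1), eps=0.05))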

  7. Predictors of variation in serum IGF1 and IGFBP3 levels in healthy African American and white men.

    PubMed

    Hoyo, Cathrine; Grubber, Janet; Demark-Wahnefried, Wendy; Lobaugh, Bruce; Jeffreys, Amy S; Grambow, Steven C; Marks, Jeffrey R; Keku, Temitope O; Walther, Phillip J; Schildkraut, Joellen M

    2009-07-01

    Individual variation in circulating insulinlike growth factor-1 (IGF1) and its major binding protein, insulinlike growth factor binding protein-3 (IGFBP3), has been etiologically linked to several chronic diseases, including some cancers. Factors associated with variation in circulating levels of these peptide hormones remain unclear. Multiple linear regression models were used to determine the extent to which sociodemographic characteristics, lifestyle factors, personal and family history of chronic disease, and common genetic variants, the (CA)n repeat polymorphism in the IGF1 promoter and the IGFBP3-202 A/C polymorphism (rs2854744), predict variation in IGF1 or IGFBP3 serum levels in 33 otherwise healthy African American and 37 white males recruited from Durham Veterans Administration Medical Center. Predictors of serum IGF1, IGFBP3, and the IGF1:IGFBP3 molar ratio varied by race. In African Americans, 17% and 28% of the variation in serum IGF1 and the IGF1:IGFBP3 molar ratio were explained by cigarette smoking and carrying the IGF1 (CA)19 repeat allele, respectively. Not carrying at least 1 IGF1 (CA)19 repeat allele and a high body mass index explained 8% and 14%, respectively, of the variation in IGFBP3 levels. These factors did not predict variation of these peptides in whites. If successfully replicated in larger studies, these findings would add to recent evidence suggesting that known genetic and lifestyle chronic disease risk factors influence IGF1 and IGFBP3 circulating levels differently in African Americans and whites.

  8. Predictors of variation in serum IGFI and IGFBP3 levels in healthy African-American and white men

    PubMed Central

    Grubber, Janet; Demark-Wahnefried, Wendy; Lobaugh, Bruce; Jeffreys, Amy S.; Grambow, Steven C.; Marks, Jeffrey R.; Keku, Temitope O.; Walther, Phillip J.; Schildkraut, Joellen M.

    2010-01-01

    Background Individual variation in circulating insulin-like growth factor-I (IGF1) and its major binding protein, insulin-like growth factor binding protein-3 (IGFBP3), has been etiologically linked to several chronic diseases, including some cancers. Factors associated with variation in circulating levels of these peptide hormones remain unclear. Methods Multiple linear regression models were used to determine the extent to which socio-demographic characteristics, lifestyle factors, personal and family history of chronic disease, and common genetic variants, the (CA)n repeat polymorphism in the IGF1 promoter and the IGFBP3 -202 A/C polymorphism (rs2854744), predict variation in IGF1 or IGFBP3 serum levels in 33 otherwise healthy African American and 37 white males recruited from Durham Veterans Administration Medical Center. Results Predictors of serum IGF1, IGFBP3 and the IGF1:IGFBP3 molar ratio varied by race. In African Americans, 17% and 28% of the variation in serum IGF1 and the IGF1:IGFBP3 molar ratio were explained by cigarette smoking and carrying the IGF1 (CA)19 repeat allele, respectively. Not carrying at least one IGF1 (CA)19 repeat allele and a high BMI explained 8% and 14%, respectively, of the variation in IGFBP3 levels. These factors did not predict variation of these peptides in whites. Conclusion If successfully replicated in larger studies, these findings add to recent evidence suggesting that known genetic and lifestyle chronic disease risk factors influence IGF1 and IGFBP3 circulating levels differently in African Americans and whites. PMID:19634593

  9. Identification of infusion strategy for achieving repeatable nanoparticle distribution and quantification of thermal dosage using micro-CT Hounsfield unit in magnetic nanoparticle hyperthermia.

    PubMed

    LeBrun, Alexander; Joglekar, Tejashree; Bieberich, Charles; Ma, Ronghui; Zhu, Liang

    2016-01-01

    The objective of this study was to identify an injection strategy leading to repeatable nanoparticle deposition patterns in tumours and to quantify volumetric heat generation rate distribution based on micro-CT Hounsfield unit (HU) in magnetic nanoparticle hyperthermia. In vivo animal experiments were performed on graft prostatic cancer (PC3) tumours in immunodeficient mice to investigate whether lowering ferrofluid infusion rate improves control of the distribution of magnetic nanoparticles in tumour tissue. Nanoparticle distribution volume obtained from micro-CT scan was used to evaluate spreading of the nanoparticles from the injection site in tumours. Heating experiments were performed to quantify relationships among micro-CT HU values, local nanoparticle concentrations in the tumours, and the ferrofluid-induced volumetric heat generation rate (q(MNH)) when nanoparticles were subject to an alternating magnetic field. An infusion rate of 3 µL/min was identified to result in the most repeatable nanoparticle distribution in PC3 tumours. Linear relationships have been obtained to first convert micro-CT greyscale values to HU values, then to local nanoparticle concentrations, and finally to nanoparticle-induced q(MNH) values. The total energy deposition rate in tumours was calculated and the observed similarity in total energy deposition rates in all three infusion rate groups suggests improvement in minimising nanoparticle leakage from the tumours. The results of this study demonstrate that micro-CT generated q(MNH) distribution and tumour physical models improve predicting capability of heat transfer simulation for designing reliable treatment protocols using magnetic nanoparticle hyperthermia.

  10. Investigation of scale effects in the TRF determined by VLBI

    NASA Astrophysics Data System (ADS)

    Wahl, Daniel; Heinkelmann, Robert; Schuh, Harald

    2017-04-01

    The improvement of the International Terrestrial Reference Frame (ITRF) is of great significance for Earth sciences and one of the major tasks in geodesy. The translation, rotation, and scale-factor, as well as their linear rates, are solved for in a 14-parameter transformation between the individual frames of each space geodetic technique and the combined frame. In ITRF2008, as well as in the current release ITRF2014, the scale-factor is provided by Very Long Baseline Interferometry (VLBI) and Satellite Laser Ranging (SLR) in equal shares. Since VLBI measures extremely precise group delays that are transformed to baseline lengths by the velocity of light, a natural constant, VLBI is the most suitable method for providing the scale. The aim of the current work is to identify possible shortcomings in the VLBI scale contribution to ITRF2008. To develop recommendations for an enhanced estimation, scale effects in the Terrestrial Reference Frame (TRF) determined with VLBI are considered in detail and compared to ITRF2008. In contrast to station coordinates, where the scale is defined by a geocentric position vector pointing from the origin of the reference frame to the station, baselines are not related to the origin; they describe the absolute scale independently of the datum. The more accurately a baseline length, and consequently the scale, is estimated by VLBI, the better the scale contribution to the ITRF. In time series of baseline lengths between different stations, a non-linear periodic signal caused by seasonal effects at the observation sites can clearly be recognized. Modeling these seasonal effects and subtracting them from the original data enhances the repeatability of single baselines significantly. Other effects that strongly influence the scale are jumps in the baseline-length time series, mainly caused by major earthquakes. Co- and post-seismic effects, which likewise have a non-linear character, can be identified in the data. Modeling this non-linear motion or completely excluding affected stations is another important step toward an improved scale determination. In addition to the investigation of single-baseline repeatabilities, the spatial transformation performed to determine the parameters of ITRF2008 is also considered. Since the reliability of the resulting transformation parameters increases with the number of identical points used in the transformation, an approach in which all possible stations are used as control points is understandable. Experiments examining the scale-factor and its spatial behavior between control points in ITRF2008 and coordinates determined by VLBI only showed that the network geometry also has a large influence on the outcome. When an unevenly distributed network is introduced for the datum configuration, the correlations between the translation parameters and the scale-factor can become remarkably high. Only a homogeneous spatial distribution of participating stations yields a maximally uncorrelated scale-factor that can be interpreted independently of other parameters. In the current release of the ITRF, ITRF2014, non-linear effects in the time series of station coordinates are taken into account for the first time. The present work confirms that this modification of the ITRF calculation is important and points in the right direction, and it also identifies further improvements that lead to an enhanced scale determination.
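
    As context for the scale discussion above, a 7-parameter Helmert transformation estimates translations, a scale factor, and small rotation angles between two coordinate sets by linear least squares; the 14-parameter version adds their rates. The Python sketch below is illustrative only and is not the ITRF combination software.

      import numpy as np

      def helmert7(xyz_from, xyz_to):
          # Linearised (small-angle) 7-parameter Helmert transformation between two
          # sets of geocentric coordinates of identical stations, each of shape (n, 3).
          xyz_from = np.asarray(xyz_from, dtype=float)
          d = (np.asarray(xyz_to, dtype=float) - xyz_from).ravel()   # coordinate differences
          rows = []
          for x, y, z in xyz_from:
              rows += [[1, 0, 0, x, 0.0, z, -y],     # dX = tx + D*x + ry*z - rz*y
                       [0, 1, 0, y, -z, 0.0, x],     # dY = ty + D*y - rx*z + rz*x
                       [0, 0, 1, z, y, -x, 0.0]]     # dZ = tz + D*z + rx*y - ry*x
          A = np.array(rows, dtype=float)
          params, *_ = np.linalg.lstsq(A, d, rcond=None)
          tx, ty, tz, scale, rx, ry, rz = params
          return (tx, ty, tz), scale, (rx, ry, rz)   # metres, unitless scale, radians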

  11. Analysis of a kinetic multi-segment foot model. Part I: Model repeatability and kinematic validity.

    PubMed

    Bruening, Dustin A; Cooney, Kevin M; Buczek, Frank L

    2012-04-01

    Kinematic multi-segment foot models are still evolving, but have seen increased use in clinical and research settings. The addition of kinetics may increase knowledge of foot and ankle function as well as influence multi-segment foot model evolution; however, previous kinetic models are too complex for clinical use. In this study we present a three-segment kinetic foot model and thorough evaluation of model performance during normal gait. In this first of two companion papers, model reference frames and joint centers are analyzed for repeatability, joint translations are measured, segment rigidity characterized, and sample joint angles presented. Within-tester and between-tester repeatability were first assessed using 10 healthy pediatric participants, while kinematic parameters were subsequently measured on 17 additional healthy pediatric participants. Repeatability errors were generally low for all sagittal plane measures as well as transverse plane Hindfoot and Forefoot segments (median<3°), while the least repeatable orientations were the Hindfoot coronal plane and Hallux transverse plane. Joint translations were generally less than 2mm in any one direction, while segment rigidity analysis suggested rigid body behavior for the Shank and Hindfoot, with the Forefoot violating the rigid body assumptions in terminal stance/pre-swing. Joint excursions were consistent with previously published studies. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Genetic Contributors to Intergenerational CAG Repeat Instability in Huntington’s Disease Knock-In Mice

    PubMed Central

    Neto, João Luís; Lee, Jong-Min; Afridi, Ali; Gillis, Tammy; Guide, Jolene R.; Dempsey, Stephani; Lager, Brenda; Alonso, Isabel; Wheeler, Vanessa C.; Pinto, Ricardo Mouro

    2017-01-01

    Huntington’s disease (HD) is a neurodegenerative disorder caused by the expansion of a CAG trinucleotide repeat in exon 1 of the HTT gene. Longer repeat sizes are associated with increased disease penetrance and earlier ages of onset. Intergenerationally unstable transmissions are common in HD families, partly underlying the genetic anticipation seen in this disorder. HD CAG knock-in mouse models also exhibit a propensity for intergenerational repeat size changes. In this work, we examine intergenerational instability of the CAG repeat in over 20,000 transmissions in the largest HD knock-in mouse model breeding datasets reported to date. We confirmed previous observations that parental sex drives the relative ratio of expansions and contractions. The large datasets further allowed us to distinguish effects of paternal CAG repeat length on the magnitude and frequency of expansions and contractions, as well as the identification of large repeat size jumps in the knock-in models. Distinct degrees of intergenerational instability were observed between knock-in mice of six background strains, indicating the occurrence of trans-acting genetic modifiers. We also found that lines harboring a neomycin resistance cassette upstream of Htt showed reduced expansion frequency, indicative of a contributing role for sequences in cis, with the expanded repeat as modifiers of intergenerational instability. These results provide a basis for further understanding of the mechanisms underlying intergenerational repeat instability. PMID:27913616

  13. Genetic Contributors to Intergenerational CAG Repeat Instability in Huntington's Disease Knock-In Mice.

    PubMed

    Neto, João Luís; Lee, Jong-Min; Afridi, Ali; Gillis, Tammy; Guide, Jolene R; Dempsey, Stephani; Lager, Brenda; Alonso, Isabel; Wheeler, Vanessa C; Pinto, Ricardo Mouro

    2017-02-01

    Huntington's disease (HD) is a neurodegenerative disorder caused by the expansion of a CAG trinucleotide repeat in exon 1 of the HTT gene. Longer repeat sizes are associated with increased disease penetrance and earlier ages of onset. Intergenerationally unstable transmissions are common in HD families, partly underlying the genetic anticipation seen in this disorder. HD CAG knock-in mouse models also exhibit a propensity for intergenerational repeat size changes. In this work, we examine intergenerational instability of the CAG repeat in over 20,000 transmissions in the largest HD knock-in mouse model breeding datasets reported to date. We confirmed previous observations that parental sex drives the relative ratio of expansions and contractions. The large datasets further allowed us to distinguish effects of paternal CAG repeat length on the magnitude and frequency of expansions and contractions, as well as the identification of large repeat size jumps in the knock-in models. Distinct degrees of intergenerational instability were observed between knock-in mice of six background strains, indicating the occurrence of trans-acting genetic modifiers. We also found that lines harboring a neomycin resistance cassette upstream of Htt showed reduced expansion frequency, indicative of a contributing role for sequences in cis, with the expanded repeat as modifiers of intergenerational instability. These results provide a basis for further understanding of the mechanisms underlying intergenerational repeat instability. Copyright © 2017 by the Genetics Society of America.

  14. Perception of olive oils sensory defects using a potentiometric taste device.

    PubMed

    Veloso, Ana C A; Silva, Lucas M; Rodrigues, Nuno; Rebello, Ligia P G; Dias, Luís G; Pereira, José A; Peres, António M

    2018-01-01

    The capability of perceiving olive oil sensory defects and their intensities plays a key role in olive oil quality grade classification, since olive oils can only be classified as extra-virgin if no defect is perceived by a trained human sensory panel. Otherwise, olive oils may be classified as virgin or lampante depending on the median intensity of the defect predominantly perceived and on the physicochemical levels. However, sensory analysis is time-consuming and requires an official sensory panel, which can only evaluate a low number of samples per day. In this work, the potential use of an electronic tongue as a taste sensor device to identify the defect predominantly perceived in olive oils was evaluated. The recorded potentiometric profiles showed that intra- and inter-day signal drifts could be neglected (relative standard deviations lower than 25%): the effect of analysis day on the overall E-tongue sensor fingerprints was not statistically significant (P-value = 0.5715, multivariate analysis of variance using Pillai's trace test), whereas the fingerprints differed significantly according to the olive oils' sensory defect (P-value = 0.0084, same test). Thus, a linear discriminant model based on 19 potentiometric signal sensors, selected by the simulated annealing algorithm, could be established to correctly predict an olive oil's main sensory defect (fusty, rancid, wet-wood, or winey-vinegary) with an average sensitivity of 75 ± 3% and specificity of 73 ± 4% (repeated K-fold cross-validation variant: 4 folds × 10 repeats). Similarly, a linear discriminant model, based on 24 selected sensors, correctly classified 92 ± 3% of the olive oils as virgin or lampante, with an average specificity of 93 ± 3%. The overall satisfactory predictive performances strengthen the feasibility of the developed taste sensor device as a complementary methodology for olive oil defect analysis and subsequent quality grade classification. Furthermore, the capability of identifying the type of sensory defect of an olive oil may provide helpful insights into poor practices in olive or olive oil production, harvesting, transport, and storage. Copyright © 2017 Elsevier B.V. All rights reserved.
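
    A minimal Python sketch of the validation scheme described above (linear discriminant analysis evaluated with a repeated stratified 4-fold x 10-repeat cross-validation) is shown below; the data are randomly generated placeholders and the simulated-annealing sensor selection step is not reproduced.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # placeholder data: 80 oils x 40 potentiometric sensor signals, 4 defect labels
      rng = np.random.default_rng(1)
      X = rng.normal(size=(80, 40))
      y = rng.choice(["fusty", "rancid", "wet-wood", "winey-vinegary"], size=80)

      model = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
      cv = RepeatedStratifiedKFold(n_splits=4, n_repeats=10, random_state=0)
      scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
      print(f"mean accuracy = {scores.mean():.2f} +/- {scores.std():.2f}")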

  15. Leaf-on canopy closure in broadleaf deciduous forests predicted during winter

    USGS Publications Warehouse

    Twedt, Daniel J.; Ayala, Andrea J.; Shickel, Madeline R.

    2015-01-01

    Forest canopy influences light transmittance, which in turn affects tree regeneration and survival, thereby having an impact on forest composition and habitat conditions for wildlife. Because leaf area is the primary impediment to light penetration, quantitative estimates of canopy closure are normally made during summer. Studies of forest structure and wildlife habitat that occur during winter, when deciduous trees have shed their leaves, may inaccurately estimate canopy closure. We estimated percent canopy closure during both summer (leaf-on) and winter (leaf-off) in broadleaf deciduous forests in Mississippi and Louisiana using gap light analysis of hemispherical photographs that were obtained during repeat visits to the same locations within bottomland and mesic upland hardwood forests and hardwood plantation forests. We used mixed-model linear regression to predict leaf-on canopy closure from measurements of leaf-off canopy closure, basal area, stem density, and tree height. Competing predictive models all included leaf-off canopy closure (relative importance = 0.93), whereas basal area and stem density, more traditional predictors of canopy closure, had relative model importance of ≤ 0.51.

  16. Analysis of laser energy characteristics of laser guided weapons based on the hardware-in-the-loop simulation system

    NASA Astrophysics Data System (ADS)

    Zhu, Yawen; Cui, Xiaohong; Wang, Qianqian; Tong, Qiujie; Cui, Xutai; Li, Chenyu; Zhang, Le; Peng, Zhong

    2016-11-01

    The hardware-in-the-loop simulation system, which provides precise, controllable, and repeatable test conditions, is an important part of the development of semi-active laser (SAL) guided weapons. In this paper, the laser energy chain characteristics were studied, providing a theoretical foundation for SAL guidance technology and the hardware-in-the-loop simulation system. Firstly, a simplified equation was proposed to adjust the radar equation according to the principles of the hardware-in-the-loop simulation system. Secondly, a theoretical model and a calculation method for the energy chain characteristics were developed on the basis of the hardware-in-the-loop simulation system. We then studied the reflection characteristics of the target and the influence of the missile-target distance, together with major factors such as weather. Finally, the accuracy of the model was verified experimentally, as the measured values generally followed the theoretical results. The experimental results also revealed that the attenuation ratio of the laser energy varied non-linearly with pulse number, in accordance with the actual conditions.

  17. Effectiveness Trial of Community-Based I Choose Life-Africa Human Immunodeficiency Virus Prevention Program in Kenya

    PubMed Central

    Adam, Mary B.

    2014-01-01

    We measured the effectiveness of a human immunodeficiency virus (HIV) prevention program developed in Kenya and carried out among university students. A total of 182 student volunteers were randomized into an intervention group who received a 32-hour training course as HIV prevention peer educators and a control group who received no training. Repeated measures assessed HIV-related attitudes, intentions, knowledge, and behaviors four times over six months. Data were analyzed by using linear mixed models to compare the rate of change on 13 dependent variables that examined sexual risk behavior. Based on multi-level models, the slope coefficients for four variables showed reliable change in the hoped for direction: abstinence from oral, vaginal, or anal sex in the last two months, condom attitudes, HIV testing, and refusal skill. The intervention demonstrated evidence of non-zero slope coefficients in the hoped for direction on 12 of 13 dependent variables. The intervention reduced sexual risk behavior. PMID:24957544

  18. Effectiveness trial of community-based I Choose Life-Africa human immunodeficiency virus prevention program in Kenya.

    PubMed

    Adam, Mary B

    2014-09-01

    We measured the effectiveness of a human immunodeficiency virus (HIV) prevention program developed in Kenya and carried out among university students. A total of 182 student volunteers were randomized into an intervention group who received a 32-hour training course as HIV prevention peer educators and a control group who received no training. Repeated measures assessed HIV-related attitudes, intentions, knowledge, and behaviors four times over six months. Data were analyzed by using linear mixed models to compare the rate of change on 13 dependent variables that examined sexual risk behavior. Based on multi-level models, the slope coefficients for four variables showed reliable change in the hoped for direction: abstinence from oral, vaginal, or anal sex in the last two months, condom attitudes, HIV testing, and refusal skill. The intervention demonstrated evidence of non-zero slope coefficients in the hoped for direction on 12 of 13 dependent variables. The intervention reduced sexual risk behavior. © The American Society of Tropical Medicine and Hygiene.

  19. Evaluation and prediction of solar radiation for energy management based on neural networks

    NASA Astrophysics Data System (ADS)

    Aldoshina, O. V.; Van Tai, Dinh

    2017-08-01

    Currently, renewable energy sources and distributed power generation based on intelligent networks are spreading rapidly; meteorological forecasts are therefore particularly useful for planning and managing the energy system in order to increase its overall efficiency and productivity. This article presents an application of artificial neural networks (ANNs) in the field of photovoltaic energy. Two recurrent dynamic ANNs implemented in this study, the concentration time-delay neural network (CTDNN) and the non-linear autoregressive network with exogenous inputs (NAEI), are used to develop a model for estimating and forecasting daily solar radiation. The ANNs perform well, yielding reliable and accurate models of daily solar radiation, which makes it possible to predict the photovoltaic output power of the installation. The potential of the proposed method for managing energy in the electrical network is demonstrated by applying the NAEI network to prediction of the electric load.

  20. How to Study Thermal Applications of Open-Cell Metal Foam: Experiments and Computational Fluid Dynamics

    PubMed Central

    De Schampheleire, Sven; De Jaeger, Peter; De Kerpel, Kathleen; Ameel, Bernd; Huisseune, Henk; De Paepe, Michel

    2016-01-01

    This paper reviews the available methods to study thermal applications with open-cell metal foam. Both experimental and numerical work are discussed. For experimental research, the focus of this review is on the repeatability of the results. This is a major concern, as most studies only report the dependence of thermal properties on porosity and the number of pores per linear inch (PPI value). A different approach, which is studied in this paper, is to characterize the foam using micro tomography scans with small voxel sizes. The results of these scans are compared to correlations from the open literature. Large differences are observed. For the numerical work, the focus is on studies using computational fluid dynamics. A novel way of determining the closure terms is proposed in this work. This is done through a numerical foam model based on micro tomography scan data. With this foam model, the closure terms are determined numerically. PMID:28787894

  1. Relationship Between Affect Consciousness and Personality Functioning in Patients With Personality Disorders: A Prospective Study.

    PubMed

    Johansen, Merete Selsbakk; Normann-Eide, Eivind; Normann-Eide, Tone; Klungsøyr, Ole; Kvarstein, Elfrida; Wilberg, Theresa

    2016-10-01

    Emotional dysfunction is by definition central to personality disorders (PDs). In the alternative model in DSM-5, self and relational dysfunctioning constitutes the core of PD, but little is known about the relation between emotional functioning and such core aspects of personality functioning. This study investigated concurrent and prospective associations between emotional and personality functioning as assessed by affect consciousness (AC) and the Severity Indices of Personality Problems (SIPP-118), respectively. The SIPP-118 comprises five domains of personality functioning, including Identity Integration and Relation Capacities, and was applied repeatedly during 3-year follow-up of 63 PD patients who participated in a treatment study. Statistical analyses were based on linear mixed models. Lower AC levels were significantly associated with (a) lower levels of Identity Integration and Relational Capacities at baseline, and (b) poorer long-term improvement of Identity Integration. The study supports the notion that affect consciousness is related to core aspects of personality functioning.

  2. Dextrose 10% in the treatment of out-of-hospital hypoglycemia.

    PubMed

    Kiefer, Matthew V; Gene Hern, H; Alter, Harrison J; Barger, Joseph B

    2014-04-01

    Prehospital first responders historically have treated hypoglycemia in the field with an IV bolus of 50 mL of 50% dextrose solution (D50). The California Contra Costa County Emergency Medical Services (EMS) system recently adopted a protocol of IV 10% dextrose solution (D10), due to frequent shortages and relatively high cost of D50. The feasibility, safety, and efficacy of this approach are reported using the experience of this EMS system. Over the course of 18 weeks, paramedics treated 239 hypoglycemic patients with D10 and recorded patient demographics and clinical outcomes. Of these, 203 patients were treated with 100 mL of D10 initially upon EMS arrival, and full data on response to treatment was available on 164 of the 203 patients. The 164 patients' capillary glucose response to initial infusion of 100 mL of D10 was calculated and a linear regression line fit between elapsed time and difference between initial and repeat glucose values. Feasibility, safety, and the need for repeat glucose infusions were examined. The study cohort included 102 men and 62 women with a median age of 68 years. The median initial field blood glucose was 38 mg/dL, with a subsequent blood glucose median of 98 mg/dL. The median time to second glucose testing was eight minutes after beginning the 100 mL D10 infusion. Of 164 patients, 29 (18%) required an additional dose of IV D10 solution due to persistent or recurrent hypoglycemia, and one patient required a third dose. There were no reported adverse events or deaths related to D10 administration. Linear regression analysis of elapsed time and difference between initial and repeat glucose values showed near-zero correlation. In addition to practical reasons of cost and availability, theoretical risks of using 50 mL of D50 in the out-of-hospital setting include extravasation injury, direct toxic effects of hypertonic dextrose, and potential neurotoxic effects of hyperglycemia. The results of one local EMS system over an 18-week period demonstrate the feasibility, safety, and efficacy of using 100 mL of D10 as an alternative. Additionally, the linear regression line of repeat glucose measurements suggests that there may be little or no short-term decay in blood glucose values after D10 administration.

  3. Repeated Kicking Actions in Karate: Effect on Technical Execution in Elite Practitioners.

    PubMed

    Quinzi, Federico; Camomilla, Valentina; Di Mario, Alberto; Felici, Francesco; Sbriccoli, Paola

    2016-04-01

    Training in martial arts is commonly performed by repeating a technical action continuously for a given number of times. This study aimed to investigate whether the repetition of the task alters proper technical execution, limiting the training efficacy for the technical evaluation during competition. This aim was pursued by analyzing lower-limb kinematics and muscle activation during repeated roundhouse kicks. Six junior karate practitioners performed 20 consecutive repetitions of the kick. Hip and knee kinematics and sEMG of the vastus lateralis, biceps femoris (BF), and rectus femoris were recorded. For each repetition, hip abduction-adduction and flexion-extension and knee flexion-extension peak angular displacements and velocities, as well as agonist and antagonist muscle activation, were computed. Moreover, to monitor for the presence of myoelectric fatigue, if any, the median frequency of the sEMG was computed. All variables were normalized with respect to their individual maximum observed during the sequence of kicks. Linear regressions were fitted to each normalized parameter to test its relationship with the repetition number. Linear-regression analysis showed that, during the sequence, the athletes modified their technique: knee flexion, BF median frequency, hip abduction, knee-extension angular velocity, and BF antagonist activation significantly decreased. Conversely, hip flexion increased significantly. Since karate combat competitions require proper technical execution, training protocols combining severe fatigue and technical actions should be proposed carefully because of these technique adaptations. Moreover, trainers and karate masters should consider including specific strength exercises for the BF and, more generally, for the knee flexors.

  4. Micromechanics Fatigue Damage Analysis Modeling for Fabric Reinforced Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Xue, D.; Shi, Y.

    2013-01-01

    A micromechanics analysis modeling method was developed to analyze the damage progression and fatigue failure of fabric reinforced composite structures, especially for the brittle ceramic matrix material composites. A repeating unit cell concept of fabric reinforced composites was used to represent the global composite structure. The thermal and mechanical properties of the repeating unit cell were considered as the same as those of the global composite structure. The three-phase micromechanics, the shear-lag, and the continuum fracture mechanics models were integrated with a statistical model in the repeating unit cell to predict the progressive damages and fatigue life of the composite structures. The global structure failure was defined as the loss of loading capability of the repeating unit cell, which depends on the stiffness reduction due to material slice failures and nonlinear material properties in the repeating unit cell. The present methodology is demonstrated with the analysis results evaluated through the experimental test performed with carbon fiber reinforced silicon carbide matrix plain weave composite specimens.

  5. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods

    PubMed Central

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-01-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
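
    The L1-regularised, non-negative formulation described above can be prototyped in a few lines. In the Python sketch below, a discretised exponential kernel maps a candidate T2 spectrum to the measured decay, and scikit-learn's Lasso with a positivity constraint stands in for the PDCO solver used in the article; the acquisition times, T2 grid, and synthetic signal are assumptions for illustration.

      import numpy as np
      from sklearn.linear_model import Lasso

      # Discretised kernel of the inverse Laplace problem: signal(t) = sum_j A[t, j] * f(T2_j)
      t = np.linspace(0.001, 1.0, 400)                   # acquisition times (s), assumed
      T2 = np.logspace(-3, 0, 100)                       # candidate relaxation times (s)
      A = np.exp(-t[:, None] / T2[None, :])

      # synthetic two-component relaxation decay with additive noise
      f_true = np.zeros_like(T2)
      f_true[[30, 70]] = [1.0, 0.6]
      signal = A @ f_true + 0.01 * np.random.default_rng(0).normal(size=t.size)

      # L1-regularised, non-negative least squares (sparse spectrum estimate)
      solver = Lasso(alpha=1e-4, positive=True, fit_intercept=False, max_iter=50000)
      f_est = solver.fit(A, signal).coef_
      print("recovered components near T2 =", T2[f_est > 0.05])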

  6. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods.

    PubMed

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-05-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72-88, 2013.

  7. A Unified Model for Repeating and Non-repeating Fast Radio Bursts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagchi, Manjari, E-mail: manjari@imsc.res.in

    The model that fast radio bursts (FRBs) are caused by plunges of asteroids onto neutron stars can explain both repeating and non-repeating bursts. If a neutron star passes through an asteroid belt around another star, there would be a series of bursts caused by a series of asteroid impacts. Moreover, the neutron star would cross the same belt repetitively if it were in a binary with the star hosting the asteroid belt, leading to a repeated series of bursts. I explore the properties of neutron star binaries that could lead to the only known repeating FRB so far (FRB121102). In this model, the next two epochs of bursts are expected around 2017 February 27 and 2017 December 18. On the other hand, if the asteroid belt is located around the neutron star itself, then a chance fall of an asteroid from that belt onto the neutron star would lead to a non-repeating burst. Even a neutron star grazing an asteroid belt can lead to a non-repeating burst caused by just one asteroid plunge during the grazing. This is possible even when the neutron star is in a binary with the asteroid-hosting star, if the belt and the neutron star orbit are non-coplanar.

  8. Circularized Chromosome with a Large Palindromic Structure in Streptomyces griseus Mutants

    PubMed Central

    Uchida, Tetsuya; Ishihara, Naoto; Zenitani, Hiroyuki; Hiratsu, Keiichiro; Kinashi, Haruyasu

    2004-01-01

    Streptomyces linear chromosomes display various types of rearrangements after telomere deletion, including circularization, arm replacement, and amplification. We analyzed the new chromosomal deletion mutants Streptomyces griseus 301-22-L and 301-22-M. In these mutants, chromosomal arm replacement resulted in long terminal inverted repeats (TIRs) at both ends; segments of different sizes were then deleted again and recombined inside the TIRs, resulting in a circular chromosome with an extremely large palindrome. Short palindromic sequences were found in parent strain 2247, and these sequences might have played a role in the formation of this unique structure. Dynamic structural changes of Streptomyces linear chromosomes shown by this and previous studies revealed extraordinary strategies of members of this genus to keep a functional chromosome, whether linear or circular. PMID:15150216

  9. Practical entanglement concentration of nonlocal polarization-spatial hyperentangled states with linear optics

    NASA Astrophysics Data System (ADS)

    Wang, Zi-Hang; Wu, Xiao-Yuan; Yu, Wen-Xuan; Alzahrani, Faris; Hobiny, Aatef; Deng, Fu-Guo

    2017-05-01

    We present several hyperentanglement concentration protocols (hyper-ECPs) for nonlocal N-photon systems in partially polarization-spatial hyperentangled states with known parameters, resorting to linear optical elements only, including protocols for hyperentangled Greenberger-Horne-Zeilinger-class states and for hyperentangled cluster-class states. Our hyper-ECPs have some interesting features. First, they require only one copy of the nonlocal N-photon systems and do not resort to ancillary photons. Second, they work with linear optical elements alone, requiring neither Bell-state measurements nor two-qubit entangling gates. Third, they achieve the maximal success probability with only one round of entanglement concentration, without repeating the concentration process several times. Fourth, they use polarizing beam splitters and wave plates rather than unbalanced beam splitters, which makes them more convenient to implement in experiments.

  10. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
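
    As a rough illustration of the comparison described above, the sketch below contrasts a linear ridge model with a non-linear RBF kernel model on simulated genotypes and measures prediction accuracy as the correlation between observed and predicted phenotypes. It is a minimal stand-in, not the GBLUP or kernel software used in the study; the sample sizes, marker counts, and penalty settings are assumptions.

```python
# Minimal sketch (synthetic data): linear ridge vs RBF kernel regression for
# genomic prediction, accuracy = correlation between observed and predicted.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
n_train, n_valid, n_snp = 1000, 240, 5000            # sizes loosely mirroring the study
X = rng.binomial(2, 0.3, size=(n_train + n_valid, n_snp)).astype(float)
beta = rng.normal(0, 0.05, n_snp)
y = X @ beta + rng.normal(0, 1.0, n_train + n_valid)  # phenotype = genetics + noise

Xtr, ytr = X[:n_train], y[:n_train]
Xva, yva = X[n_train:], y[n_train:]

linear = Ridge(alpha=100.0).fit(Xtr, ytr)
rbf = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0 / n_snp).fit(Xtr, ytr)

for name, model in [("linear", linear), ("RBF kernel", rbf)]:
    acc = np.corrcoef(yva, model.predict(Xva))[0, 1]
    print(f"{name} prediction accuracy (r): {acc:.2f}")
```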

  11. Causes of model dry and warm bias over central U.S. and impact on climate projections.

    PubMed

    Lin, Yanluan; Dong, Wenhao; Zhang, Minghua; Xie, Yuanyu; Xue, Wei; Huang, Jianbin; Luo, Yong

    2017-10-12

    Climate models show a conspicuous summer warm and dry bias over the central United States. Using results from 19 climate models in the Coupled Model Intercomparison Project Phase 5 (CMIP5), we report a persistent dependence of warm bias on dry bias with the precipitation deficit leading the warm bias over this region. The precipitation deficit is associated with the widespread failure of models in capturing strong rainfall events in summer over the central U.S. A robust linear relationship between the projected warming and the present-day warm bias enables us to empirically correct future temperature projections. By the end of the 21st century under the RCP8.5 scenario, the corrections substantially narrow the intermodel spread of the projections and reduce the projected temperature by 2.5 K, resulting mainly from the removal of the warm bias. After this correction, the projected precipitation is nearly neutral for all scenarios instead of showing a sharp decrease. Climate models repeatedly show a warm and dry bias over the central United States, but the origin of this bias remains unclear. Here the authors associate this bias with precipitation deficits in models, and after applying a correction, projected precipitation in this region shows no significant changes.
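
    A minimal sketch of the emergent-constraint style correction described above follows: each model's projected warming is regressed on its present-day warm bias, and the regression value at zero bias is taken as the corrected projection. The numbers are synthetic, not CMIP5 output.

```python
# Minimal sketch (synthetic numbers): regress projected warming on present-day
# warm bias across models, then evaluate the fit at zero bias as the correction.
import numpy as np

rng = np.random.default_rng(6)
n_models = 19
bias = rng.normal(2.0, 1.0, n_models)                          # present-day warm bias (K), assumed
projection = 4.0 + 1.2 * bias + rng.normal(0, 0.3, n_models)   # end-of-century warming (K), assumed

slope, intercept = np.polyfit(bias, projection, 1)
corrected = intercept                                          # predicted warming at zero bias
print(f"multi-model mean: {projection.mean():.1f} K, bias-corrected: {corrected:.1f} K")
```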

  12. Reliability of spatiotemporal and kinetic gait parameters determined by a new instrumented treadmill system.

    PubMed

    Reed, Lloyd F; Urry, Stephen R; Wearing, Scott C

    2013-08-21

    Despite the emerging use of treadmills integrated with pressure platforms as outcome tools in both clinical and research settings, published evidence regarding the measurement properties of these new systems is limited. This study evaluated the within- and between-day repeatability of spatial, temporal and vertical ground reaction force parameters measured by a treadmill system instrumented with a capacitance-based pressure platform. Thirty-three healthy adults (mean age, 21.5 ± 2.8 years; height, 168.4 ± 9.9 cm; and mass, 67.8 ± 18.6 kg) walked barefoot on a treadmill system (FDM-THM-S, Zebris Medical GmbH) on three separate occasions. For each testing session, participants set their preferred pace but were blinded to treadmill speed. Spatial (foot rotation, step width, stride and step length), temporal (stride and step times, duration of stance, swing and single and double support) and peak vertical ground reaction force variables were collected over a 30-second capture period, equating to an average of 52 ± 5 steps of steady-state walking. Testing was repeated one week following the initial trial and again, for a third time, 20 minutes later. Repeated measures ANOVAs within a generalized linear modelling framework were used to assess between-session differences in gait parameters. Agreement between gait parameters measured within the same day (session 2 and 3) and between days (session 1 and 2; 1 and 3) was evaluated using the 95% repeatability coefficient. There were statistically significant differences in the majority (14/16) of temporal, spatial and kinetic gait parameters over the three test sessions (P < .01). The minimum change that could be detected with 95% confidence ranged between 3% and 17% for temporal parameters, 14% and 33% for spatial parameters, and 4% and 20% for kinetic parameters between days. Within-day repeatability was similar to that observed between days. Temporal and kinetic gait parameters were typically more consistent than spatial parameters. The 95% repeatability coefficient for vertical force peaks ranged between ± 53 and ± 63 N. The limits of agreement in spatial parameters and ground reaction forces for the treadmill system encompass previously reported changes with neuromuscular pathology and footwear interventions. These findings provide clinicians and researchers with an indication of the repeatability and sensitivity of the Zebris treadmill system to detect changes in common spatiotemporal gait parameters and vertical ground reaction forces.
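
    The 95% repeatability coefficient reported above can be computed from paired sessions as roughly 2.77 times the within-subject standard deviation. The sketch below uses simulated step-length data; the participant count mirrors the study but all values are invented.

```python
# Minimal sketch of the 95% repeatability coefficient (Bland-Altman style)
# for a gait parameter measured in two sessions; the data are simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 33                                  # participants, mirroring the study
true = rng.normal(1.30, 0.10, n)        # e.g. step length (m), assumed values
session1 = true + rng.normal(0, 0.03, n)
session2 = true + rng.normal(0, 0.03, n)

diff = session1 - session2
sw = np.sqrt(np.mean(diff ** 2) / 2)    # within-subject SD from paired sessions
rc95 = 1.96 * np.sqrt(2) * sw           # 95% repeatability coefficient
print(f"within-subject SD = {sw:.3f} m, 95% repeatability coefficient = {rc95:.3f} m")
```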

  13. Exposure to Environmental Ozone Alters Semen Quality

    PubMed Central

    Sokol, Rebecca Z.; Kraft, Peter; Fowler, Ian M.; Mamet, Rizvan; Kim, Elizabeth; Berhane, Kiros T.

    2006-01-01

    Idiopathic male infertility may be due to exposure to environmental toxicants that alter spermatogenesis or sperm function. We studied the relationship between air pollutant levels and semen quality over a 2-year period in Los Angeles, California, by analyzing repeated semen samples collected by sperm donors. Semen analysis data derived from 5,134 semen samples from a sperm donor bank were correlated with air pollutant levels (ozone, nitrogen dioxide, carbon monoxide, and particulate matter < 10 μm in aerodynamic diameter) measured 0–9, 10–14, and 70–90 days before semen collection dates in Los Angeles between January 1996 and December 1998. A linear mixed-effects model was used to model average sperm concentration and total motile sperm count for the donation from each subject. Changes were analyzed in relationship to biologically relevant time points during spermatogenesis, 0–9, 10–14, and 70–90 days before the day of semen collection. We estimated temperature and seasonality effects after adjusting for a base model, which included donor’s date of birth and age at donation. Forty-eight donors from Los Angeles were included as subjects. Donors were included if they collected repeated semen samples over a 12-month period between January 1996 and December 1998. There was a significant negative correlation between ozone levels at 0–9, 10–14, and 70–90 days before donation and average sperm concentration, which was maintained after correction for donor’s birth date, age at donation, temperature, and seasonality (p < 0.01). No other pollutant measures were significantly associated with sperm quality outcomes. Exposure to ambient ozone levels adversely affects semen quality. PMID:16507458
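
    A minimal sketch of the kind of linear mixed-effects model used here is shown below, with a random intercept per donor and ozone exposure as a fixed effect. The data, column names, and effect sizes are simulated assumptions, not the study's data.

```python
# Minimal sketch: repeated semen samples nested within donors, modeled with a
# random intercept per donor and ozone exposure as a fixed effect (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
donors = np.repeat(np.arange(48), 20)                       # 48 donors, repeated samples
ozone = rng.normal(30, 10, donors.size)                     # ppb in an exposure window, assumed
donor_effect = rng.normal(0, 15, 48)[donors]
sperm_conc = 80 - 0.5 * ozone + donor_effect + rng.normal(0, 10, donors.size)

df = pd.DataFrame({"donor": donors, "ozone": ozone, "conc": sperm_conc})
fit = smf.mixedlm("conc ~ ozone", df, groups=df["donor"]).fit()
print(fit.summary())                                        # a negative ozone coefficient is expected
```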

  14. Progression of behavioural despair in R6/2 and Hdh knock-in mouse models recapitulates depression in Huntington's disease.

    PubMed

    Ciamei, Alessandro; Detloff, Peter J; Morton, A Jennifer

    2015-09-15

    In Huntington's disease (HD) depression is observed before the disease is diagnosed, and is likely to be a component of the disease, rather than a consequence. Depression in HD patients does not progress in parallel with other symptoms; rather it peaks at early- to mid-stages of the disease and declines thereafter. In mice, depressive-like behaviours can be measured as an increase in behavioural despair (floating) observed in the forced swim test (FST). Floating in the FST is modulated differently by antidepressants with different mechanisms of action. Drugs that increase levels of serotonin inhibit floating by promoting horizontal swimming, whereas drugs that increase levels of noradrenaline inhibit floating by enhancing vertical swimming (climbing). We compared the FST behavioural profiles of two different allelic series of HD mice, a fragment model (R6/2 mice carrying 120, 250, or 350 CAG repeats), and a knock-in model (Hdh mice carrying 50, 150, or 250 CAG repeats). The FST behavioural profile was similar in both lines. It was characterized by an early-stage increase in floating, and then, as the mice aged, floating decreased, whereas active behaviours of swimming and climbing increased. Our results show that, as with depression in HD patients, floating in HD mice does not progress linearly, suggesting that, at the late stages of the disease, an increase in serotonergic and noradrenergic activity might contribute to lower floating levels in HD mice. If similar compensatory changes occur in humans, this should be taken into account when considering the treatment of depression in HD patients. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Primate Phencyclidine Model of Schizophrenia: Sex-Specific Effects on Cognition, Brain Derived Neurotrophic Factor, Spine Synapses, and Dopamine Turnover in Prefrontal Cortex

    PubMed Central

    Groman, Stephanie M.; Jentsch, James D.; Leranth, Csaba; Redmond, D. Eugene; Kim, Jung D.; Diano, Sabrina; Roth, Robert H.

    2015-01-01

    Background: Cognitive deficits are a core symptom of schizophrenia, yet they remain particularly resistant to treatment. The model provided by repeatedly exposing adult nonhuman primates to phencyclidine has generated important insights into the neurobiology of these deficits, but it remains possible that administration of this psychotomimetic agent during the pre-adult period, when the dorsolateral prefrontal cortex in human and nonhuman primates is still undergoing significant maturation, may provide a greater understanding of schizophrenia-related cognitive deficits. Methods: The effects of repeated phencyclidine treatment on spine synapse number, dopamine turnover and BDNF expression in dorsolateral prefrontal cortex, and working memory accuracy were examined in pre-adult monkeys. Results: One week following phencyclidine treatment, juvenile and adolescent male monkeys demonstrated a greater loss of spine synapses in dorsolateral prefrontal cortex than adult male monkeys. Further studies indicated that in juvenile males, a cognitive deficit existed at 4 weeks following phencyclidine treatment, and this impairment was associated with decreased dopamine turnover, decreased brain derived neurotrophic factor messenger RNA, and a loss of dendritic spine synapses in dorsolateral prefrontal cortex. In contrast, female juvenile monkeys displayed no cognitive deficit at 4 weeks after phencyclidine treatment and no alteration in dopamine turnover or brain derived neurotrophic factor messenger RNA or spine synapse number in dorsolateral prefrontal cortex. In the combined group of male and female juvenile monkeys, significant linear correlations were detected between dopamine turnover, spine synapse number, and cognitive performance. Conclusions: As the incidence of schizophrenia is greater in males than females, these findings support the validity of the juvenile primate phencyclidine model and highlight its potential usefulness in understanding the deficits in dorsolateral prefrontal cortex in schizophrenia and developing novel treatments for the cognitive deficits associated with schizophrenia. PMID:25522392

  16. Genetic Variations in the Androgen Receptor Are Associated with Steroid Concentrations and Anthropometrics but Not with Muscle Mass in Healthy Young Men

    PubMed Central

    De Naeyer, Hélène; Bogaert, Veerle; De Spaey, Annelies; Roef, Greet; Vandewalle, Sara; Derave, Wim; Taes, Youri; Kaufman, Jean-Marc

    2014-01-01

    Objective: The relationship between serum testosterone (T) levels, muscle mass and muscle force in eugonadal men is incompletely understood. As polymorphisms in the androgen receptor (AR) gene cause differences in androgen sensitivity, no straightforward correlation can be observed between the interindividual variation in T levels and different phenotypes. Therefore, we aim to investigate the relationship between genetic variations in the AR, circulating androgens and muscle mass and function in young healthy male siblings. Design: 677 men (25–45 years) were recruited in a cross-sectional, population-based sibling pair study. Methods: Relations between genetic variation in the AR gene (CAGn, GGNn, SNPs), sex steroid levels (by LC-MS/MS), body composition (by DXA), muscle cross-sectional area (CSA) (by pQCT), muscle force (isokinetic peak torque, grip strength) and anthropometrics were studied using linear mixed-effect modelling. Results: Muscle mass and force were highly heritable and related to age, physical activity, body composition and anthropometrics. Total T (TT) and free T (FT) levels were positively related to muscle CSA, whereas estradiol (E2) and free E2 (FE2) concentrations were negatively associated with muscle force. Subjects with longer CAG repeat length had higher circulating TT, FT, and higher E2 and FE2 concentrations. Weak associations with TT and FT were found for the rs5965433 and rs5919392 SNPs in the AR, whereas no association between the GGN repeat polymorphism and T concentrations was found. Arm span and 2D:4D finger length ratio were inversely associated with the number of CAG repeats, whereas muscle mass and force were not. Conclusions: Age, physical activity, body composition, sex steroid levels and anthropometrics are determinants of muscle mass and function in young men. Although the number of CAG repeats of the AR is related to sex steroid levels and anthropometrics, we have no evidence that these variations in the AR gene also affect muscle mass or function. PMID:24465978

  17. Genetic variations in the androgen receptor are associated with steroid concentrations and anthropometrics but not with muscle mass in healthy young men.

    PubMed

    De Naeyer, Hélène; Bogaert, Veerle; De Spaey, Annelies; Roef, Greet; Vandewalle, Sara; Derave, Wim; Taes, Youri; Kaufman, Jean-Marc

    2014-01-01

    The relationship between serum testosterone (T) levels, muscle mass and muscle force in eugonadal men is incompletely understood. As polymorphisms in the androgen receptor (AR) gene cause differences in androgen sensitivity, no straightforward correlation can be observed between the interindividual variation in T levels and different phenotypes. Therefore, we aim to investigate the relationship between genetic variations in the AR, circulating androgens and muscle mass and function in young healthy male siblings. 677 men (25-45 years) were recruited in a cross-sectional, population-based sibling pair study. Relations between genetic variation in the AR gene (CAGn, GGNn, SNPs), sex steroid levels (by LC-MS/MS), body composition (by DXA), muscle cross-sectional area (CSA) (by pQCT), muscle force (isokinetic peak torque, grip strength) and anthropometrics were studied using linear mixed-effect modelling. Muscle mass and force were highly heritable and related to age, physical activity, body composition and anthropometrics. Total T (TT) and free T (FT) levels were positively related to muscle CSA, whereas estradiol (E2) and free E2 (FE2) concentrations were negatively associated with muscle force. Subjects with longer CAG repeat length had higher circulating TT, FT, and higher E2 and FE2 concentrations. Weak associations with TT and FT were found for the rs5965433 and rs5919392 SNPs in the AR, whereas no association between the GGN repeat polymorphism and T concentrations was found. Arm span and 2D:4D finger length ratio were inversely associated with the number of CAG repeats, whereas muscle mass and force were not. Age, physical activity, body composition, sex steroid levels and anthropometrics are determinants of muscle mass and function in young men. Although the number of CAG repeats of the AR is related to sex steroid levels and anthropometrics, we have no evidence that these variations in the AR gene also affect muscle mass or function.

  18. Ozone exposure, antioxidant genes, and lung function in an elderly cohort: VA Normative Aging Study

    PubMed Central

    Alexeeff, Stacey E.; Litonjua, Augusto A.; Wright, Robert O.; Baccarelli, Andrea; Suh, Helen; Sparrow, David; Vokonas, Pantel S.; Schwartz, Joel

    2008-01-01

    Background: Ozone exposure is known to cause oxidative stress. We investigated the acute effects of ozone (O3) on lung function in the elderly, a suspected risk group. We then investigated whether genetic polymorphisms of antioxidant genes (heme oxygenase-1 [HMOX1] and glutathione S-transferase pi [GSTP1]) modified these associations. Methods: We studied 1,100 elderly men from the Normative Aging Study whose lung function (forced vital capacity [FVC] and forced expiratory volume in one second [FEV1]) was measured every 3 years from 1995–2005. We genotyped the GSTP1 Ile105Val and Ala114Val polymorphisms and the (GT)n repeat polymorphism in the HMOX1 promoter, classifying repeats as short (n < 25) or long (n ≥ 25). Ambient O3 was measured continuously at locations in the Greater Boston area. We used mixed linear models, adjusting for known confounders. Results: A 15 ppb increase in O3 during the previous 48 hours was associated with a 1.25% decrease in FEV1 (95% CI: −1.96%, −0.54%). This estimated effect was worsened with either the presence of a long (GT)n repeat in HMOX1 (−1.38%, 95% CI: −2.11%, −0.65%) or the presence of an allele coding for Val105 in GSTP1 (−1.69%, 95% CI: −2.63%, −0.75%). A stronger estimated effect of O3 on FEV1 was found in subjects carrying both the GSTP1 105Val variant and the HMOX1 long (GT)n repeat (−1.94%, 95% CI: −2.89%, −0.98%). Similar associations were also found between FVC and ozone exposure. Conclusions: Our results suggest that ozone has an acute effect on lung function in the elderly, and the effects may be modified by the presence of specific polymorphisms in antioxidant genes. PMID:18524839

  19. Inter-method Performance Study of Tumor Volumetry Assessment on Computed Tomography Test-retest Data

    PubMed Central

    Buckler, Andrew J.; Danagoulian, Jovanna; Johnson, Kjell; Peskin, Adele; Gavrielides, Marios A.; Petrick, Nicholas; Obuchowski, Nancy A.; Beaumont, Hubert; Hadjiiski, Lubomir; Jarecha, Rudresh; Kuhnigk, Jan-Martin; Mantri, Ninad; McNitt-Gray, Michael; Moltz, Jan Hendrik; Nyiri, Gergely; Peterson, Sam; Tervé, Pierre; Tietjen, Christian; von Lavante, Etienne; Ma, Xiaonan; Pierre, Samantha St.; Athelogou, Maria

    2015-01-01

    Rationale and objectives: Tumor volume change has potential as a biomarker for diagnosis, therapy planning, and treatment response. Precision was evaluated and compared among semi-automated lung tumor volume measurement algorithms from clinical thoracic CT datasets. The results inform approaches and testing requirements for establishing conformance with the Quantitative Imaging Biomarker Alliance (QIBA) CT Volumetry Profile. Materials and Methods: Industry and academic groups participated in a challenge study. Intra-algorithm repeatability and inter-algorithm reproducibility were estimated. Relative magnitudes of various sources of variability were estimated using a linear mixed effects model. Segmentation boundaries were compared to provide a basis on which to optimize algorithm performance for developers. Results: Intra-algorithm repeatability ranged from 13% (best performing) to 100% (least performing), with most algorithms demonstrating improved repeatability as the tumor size increased. Inter-algorithm reproducibility was determined in three partitions and found to be 58% for the four best performing groups, 70% for the set of groups meeting repeatability requirements, and 84% when all groups but the least performer were included. The best performing partition performed markedly better on tumors with equivalent diameters above 40 mm. Larger tumors benefitted from human editing but smaller tumors did not. One-fifth to one-half of the total variability came from sources independent of the algorithms. Segmentation boundaries differed substantially, not just in overall volume but in detail. Conclusions: Nine of the twelve participating algorithms pass precision requirements similar to what is indicated in the QIBA Profile, with the caveat that the current study was not designed to explicitly evaluate algorithm Profile conformance. Change in tumor volume can be measured with confidence to within ±14% using any of these nine algorithms on tumor sizes above 10 mm. No partition of the algorithms was able to meet the QIBA requirements for interchangeability down to 10 mm, though the partition comprising the best performing algorithms did meet this requirement above a tumor size of approximately 40 mm. PMID:26376841

  20. The Elbow-EpiTrainer: a method of delivering graded resistance to the extensor carpi radialis brevis. Effectiveness of a prototype device in a healthy population.

    PubMed

    Navsaria, Rishi; Ryder, Dionne M; Lewis, Jeremy S; Alexander, Caroline M

    2015-03-01

    Tennis elbow, or lateral epicondylopathy (LE), is experienced as pain at the lateral elbow and has a reported prevalence of 1.3%, with symptoms lasting up to 18 months. LE is most commonly attributed to tendinopathy involving the extensor carpi radialis brevis (ECRB) tendon. The aim of tendinopathy management is to alleviate symptoms and restore function; management initially involves relative rest followed by progressive therapeutic exercise. The aim of this study was to assess the effectiveness of two prototype exercises using commonly available clinical equipment to progressively increase resistance and activity of the ECRB. Eighteen healthy participants undertook two exercise progressions. Surface electromyography was used to record ECRB activity during the two progressions, involving eccentric exercises of the wrist extensors and elbow pronation exercises using a prototype device. The two progressions were assessed for their linearity of progression using repeated-measures ANOVA and linear regression analysis. Five participants repeated the study to assess reliability. The exercise progressions led to an increase in ECRB electromyographic (EMG) activity (p<0.001). A select progression of exercises combining the two protocols increased EMG activity in a linear fashion (p<0.001). The ICC values indicated good reliability (ICC>0.7) between the first and second tests for five participants. Manipulation of resistance and leverage with the prototype exercises was effective in creating significant increases of ECRB normalised EMG activity in a linear manner that may, with future research, become useful to clinicians treating LE. In addition, between-trial reliability for the device to generate a consistent load was acceptable. Published by the BMJ Publishing Group Limited.

  1. Precision-feeding dairy heifers a high rumen-undegradable protein diet with different proportions of dietary fiber and forage-to-concentrate ratios.

    PubMed

    Koch, L E; Gomez, N A; Bowyer, A; Lascano, G J

    2017-12-01

    The addition of dietary fiber can alter nutrient and N utilization in precision-fed dairy heifers and may further benefit from higher inclusion levels of RUP. The objective of this experiment was to determine the effects of feeding a high-RUP diet when dietary fiber content was manipulated within differing forage-to-concentrate ratios (F:C) on nutrient utilization of precision-fed dairy heifers. Six rumen-cannulated Holstein heifers (555.4 ± 31.4 kg BW; 17.4 ± 0.1 mo) were randomly assigned to 2 levels of forage, high forage (HF; 60% forage) or low forage (LF; 45% forage), and to a fiber proportion sequence (low fiber: 100% oat hay and silage [OA], 0% wheat straw [WS]; medium fiber: 83.4% OA, 16.6% WS; and high fiber: 66.7% OA, 33.3% WS) administered according to a split-plot 3 × 3 Latin square design (21-d periods). Similar levels of N intake (1.70 g N/kg BW) and RUP (55% of CP) were provided. Data were analyzed as a split-plot, 3 × 3 Latin square design using a mixed model with fixed effects of period and treatment. A repeated measures model was used with data that had multiple measurements over time. No differences were observed for DM, OM, NDF, or ADF apparent digestibility coefficients (dC) between HF- and LF-fed heifers. Heifers receiving LF diets had greater starch dC compared to HF heifers. Increasing the fiber level through WS addition resulted in a linear reduction of OM dC. There was a linear interaction for DM dC with a concurrent linear interaction in NDF dC. Nitrogen intake, dC, and retention did not differ; however, urine and total N excretion increased linearly with added fiber. Predicted microbial CP flow (MP) linearly decreased with WS inclusion mainly in LF heifers, as indicated by a significant interaction between F:C and WS. Rumen pH linearly increased with WS addition, although no F:C effect was detected. Ruminal ammonia concentration had an opposite linear effect with respect to MP as WS increased. Diets with the higher proportion of fiber benefited the most from a high RUP supply, complementing the substantial reduction in predicted MP caused by the incremental dietary fiber concentration. These results suggest that RUP supplementation is a practical method for reestablishing optimal ruminal N balance in the event of increased dietary fiber through forage inclusion in precision-fed dairy heifer diets.

  2. Kinematic repeatability of a multi-segment foot model for dance.

    PubMed

    Carter, Sarah L; Sato, Nahoko; Hopper, Luke S

    2018-03-01

    The purpose of this study was to determine the intra- and inter-assessor repeatability of a modified Rizzoli Foot Model for analysing the foot kinematics of ballet dancers. Six university-level ballet dancers performed the movements: parallel stance, turnout plié, turnout stance, turnout rise and flex-point-flex. The three-dimensional (3D) position of individual reflective markers and marker triads was used to model the movement of the dancers' tibia, entire foot, hindfoot, midfoot, forefoot and hallux. Intra- and inter-assessor reliability demonstrated excellent (ICC ≥ 0.75) repeatability for the first metatarsophalangeal joint in the sagittal plane. Intra-assessor reliability demonstrated excellent (ICC ≥ 0.75) repeatability during flex-point-flex across all inter-segmental angles except for the tibia-hindfoot and hindfoot-midfoot frontal planes. Inter-assessor repeatability ranged from poor to excellent (ICC < 0.5 to ICC ≥ 0.75) for the 3D segment rotations. The most repeatable measure was the tibia-foot dorsiflexion/plantar flexion articulation whereas the least repeatable measure was the hindfoot-midfoot adduction/abduction articulation. The variation found in the inter-assessor results is likely due to inconsistencies in marker placement. This 3D dance-specific multi-segment foot model provides insight into which kinematic measures can be reliably used to ascertain in vivo technical errors and/or biomechanical abnormalities in a dancer's foot motion.

  3. Analysis strategies for longitudinal attachment loss data.

    PubMed

    Beck, J D; Elter, J R

    2000-02-01

    The purpose of this invited review is to describe and discuss methods currently in use to quantify the progression of attachment loss in epidemiological studies of periodontal disease, and to make recommendations for specific analytic methods based upon the particular design of the study and structure of the data. The review concentrates on the definition of incident attachment loss (ALOSS) and its component parts; measurement issues including thresholds and regression to the mean; methods of accounting for longitudinal change, including changes in means, changes in proportions of affected sites, incidence density, the effect of tooth loss and reversals, and repeated events; statistical models of longitudinal change, including the incorporation of the time element, use of linear, logistic or Poisson regression or survival analysis, and statistical tests; site vs person level of analysis, including statistical adjustment for correlated data; the strengths and limitations of ALOSS data. Examples from the Piedmont 65+ Dental Study are used to illustrate specific concepts. We conclude that incidence density is the preferred methodology to use for periodontal studies with more than one period of follow-up and that the use of studies not employing methods for dealing with complex samples, correlated data, and repeated measures does not take advantage of our current understanding of the site- and person-level variables important in periodontal disease and may generate biased results.

  4. Pre-treatment red blood cell distribution width provides prognostic information in multiple myeloma.

    PubMed

    Zhou, Di; Xu, Peipei; Peng, Miaoxin; Shao, Xiaoyan; Wang, Miao; Ouyang, Jian; Chen, Bing

    2018-06-01

    The red blood cell distribution width (RDW), a credible marker for abnormal erythropoiesis, has recently been studied as a prognostic factor in oncology, but its role in multiple myeloma (MM) has not been thoroughly investigated. We performed a retrospective study in 162 patients with multiple myeloma. Categorical parameters were analyzed using the Pearson chi-squared test. The Mann-Whitney and Wilcoxon tests were used for group comparisons. Repeated-samples data were analyzed with the general linear model repeated-measures procedure. The Kaplan-Meier product-limit method was used to determine OS and PFS, and the differences were assessed by the log-rank test. High RDW baseline was significantly associated with indexes including haemoglobin, bone marrow plasma cell infiltration, and cytogenetics risk stratification. After chemotherapy, the overall response rate (ORR) decreased as RDW baseline increased. In 24 patients with high RDW baseline, it was revealed that the RDW value decreased when patients achieved complete remission (CR), but increased when the disease progressed. The normal-RDW baseline group showed both longer overall survival (OS) and progression-free survival (PFS) than the high-RDW baseline group. Our study suggests that the pre-treatment RDW level is a prognostic factor in MM and should be regarded as an important parameter for assessment of therapeutic efficiency. Copyright © 2018. Published by Elsevier B.V.
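
    The survival comparison described above can be sketched with the lifelines package: Kaplan-Meier curves per RDW group and a log-rank test for the difference. The survival times, censoring rates, and group sizes below are simulated assumptions, not the study's data.

```python
# Minimal sketch (simulated survival data) of a Kaplan-Meier comparison of
# overall survival between normal-RDW and high-RDW groups with a log-rank test.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(8)
t_normal = rng.exponential(60, 100)        # overall survival in months (assumed scale)
t_high = rng.exponential(35, 62)
e_normal = rng.binomial(1, 0.7, 100)       # 1 = death observed, 0 = censored
e_high = rng.binomial(1, 0.7, 62)

for label, t, e in [("normal RDW", t_normal, e_normal), ("high RDW", t_high, e_high)]:
    km = KaplanMeierFitter().fit(t, event_observed=e, label=label)
    print(label, "median OS:", round(km.median_survival_time_, 1), "months")

res = logrank_test(t_normal, t_high, event_observed_A=e_normal, event_observed_B=e_high)
print("log-rank p =", round(res.p_value, 4))
```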

  5. Interpreting Repeated Temperature-Depth Profiles for Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Bense, Victor F.; Kurylyk, Barret L.; van Daal, Jonathan; van der Ploeg, Martine J.; Carey, Sean K.

    2017-10-01

    Temperature can be used to trace groundwater flow because advection thermally disturbs the subsurface temperature field. Prior hydrogeological studies that have used temperature-depth profiles to estimate vertical groundwater fluxes have either ignored the influence of climate change by employing steady-state analytical solutions or applied transient techniques to study temperature-depth profiles recorded at only a single point in time. Transient analyses of a single profile are predicated on the accurate determination of an unknown profile at some time in the past to form the initial condition. In this study, we use both analytical solutions and a numerical model to demonstrate that boreholes with temperature-depth profiles recorded at multiple times can be analyzed to either overcome the uncertainty associated with estimating unknown initial conditions or to form an additional check for the profile fitting. We further illustrate that the common approach of assuming a linear initial temperature-depth profile can result in significant errors for groundwater flux estimates. Profiles obtained from a borehole in the Veluwe area, the Netherlands, in both 1978 and 2016 are analyzed as an illustrative example. Since many temperature-depth profiles were collected in the late 1970s and 1980s, these previously profiled boreholes represent a significant and underexploited opportunity to obtain repeat measurements that can be used for similar analyses at other sites around the world.
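
    For context, a commonly used steady-state analytical profile of the Bredehoeft-Papadopulos type relates curvature in a temperature-depth profile to the vertical Darcy flux. The sketch below evaluates that profile for assumed boundary temperatures, depth, and thermal properties; it illustrates the general approach only, not the transient analysis or the numerical model used in this study.

```python
# Minimal sketch of a steady-state temperature-depth profile with vertical
# groundwater flow (Bredehoeft-Papadopulos type); all parameter values are assumed.
import numpy as np

def steady_profile(z, L, T_top, T_bottom, qz, lam=2.0, rho_c_w=4.18e6):
    """Temperature at depth z (0 at top, L at bottom) for vertical Darcy flux qz (m/s)."""
    beta = rho_c_w * qz * L / lam                  # thermal Peclet number
    if abs(beta) < 1e-12:                          # conduction-only limit: linear profile
        return T_top + (T_bottom - T_top) * z / L
    return T_top + (T_bottom - T_top) * (np.expm1(beta * z / L) / np.expm1(beta))

z = np.linspace(0.0, 60.0, 7)                      # depths (m), assumed
print(steady_profile(z, 60.0, 10.0, 12.0, qz=1e-8).round(2))  # flow bends the profile
```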

  6. DNA residence time is a regulatory factor of transcription repression

    PubMed Central

    Clauß, Karen; Popp, Achim P.; Schulze, Lena; Hettich, Johannes; Reisser, Matthias; Escoter Torres, Laura; Uhlenhaut, N. Henriette

    2017-01-01

    Transcription comprises a highly regulated sequence of intrinsically stochastic processes, resulting in bursts of transcription intermitted by quiescence. In transcription activation or repression, a transcription factor binds dynamically to DNA, with a residence time unique to each factor. Whether the DNA residence time is important in the transcription process is unclear. Here, we designed a series of transcription repressors differing in their DNA residence time by utilizing the modular DNA binding domain of transcription activator-like effectors (TALEs) and varying the number of nucleotide-recognizing repeat domains. We characterized the DNA residence times of our repressors in living cells using single molecule tracking. The residence times depended non-linearly on the number of repeat domains and differed by more than a factor of six. The factors provoked a residence time-dependent decrease in transcript level of the glucocorticoid receptor-activated gene SGK1. Downregulation of transcription was due to a lower burst frequency in the presence of long-binding repressors and is in accordance with a model of competitive inhibition of endogenous activator binding. Our single molecule experiments reveal transcription factor DNA residence time as a regulatory factor controlling transcription repression and establish TALE-DNA binding domains as tools for the temporal dissection of transcription regulation. PMID:28977492

  7. The Occurrence of Repeated High Acceleration Ability (RHAA) in Elite Youth Football.

    PubMed

    Serpiello, Fabio R; Duthie, Grant M; Moran, Codey; Kovacevic, Damian; Selimi, Erch; Varley, Matthew C

    2018-06-05

    The aim of the present study was to investigate the occurrence of Repeated High-Acceleration Ability (RHAA) bouts in elite youth football games using 10-Hz GPS devices and two relative thresholds derived from players' actual maximal acceleration. Thirty-six outfield soccer players (age 14.9±0.6 years) participated in the study. Players wore 10-Hz GPS units during 41 official games. High accelerations were defined as efforts commencing above a threshold corresponding to 70% (T70%) or 80% (T80%) of the average 5-m acceleration obtained during a 40-m sprint test; RHAA bouts were defined as ≥3 efforts with ≤45 s recovery between efforts. Results were analysed via generalised linear mixed model and magnitude-based inferential statistics. On average, 8.0±4.6 and 5.1±3.5 bouts were detected in an entire game using T70% and T80%, respectively. When all positions were analysed together, there was a very-likely small difference in the number of RHAA bouts between first and second half for T70% and T80%, respectively. RHAA bouts occur frequently in elite youth football, with small differences between halves and between playing positions within the first or second half in most variables assessed. © Georg Thieme Verlag KG Stuttgart · New York.
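
    The bout definition used above (at least three efforts separated by no more than 45 s of recovery) is straightforward to express in code. The sketch below is a hypothetical helper, not the authors' processing pipeline; it assumes effort start times have already been extracted from the GPS acceleration trace.

```python
# Minimal sketch of the bout definition: a repeated high-acceleration bout is
# >= 3 efforts with <= 45 s recovery between consecutive efforts.
def count_rhaa_bouts(effort_times, max_recovery=45.0, min_efforts=3):
    """effort_times: sorted times (s) at which high accelerations commenced."""
    bouts, run = 0, 1
    for prev, cur in zip(effort_times, effort_times[1:]):
        if cur - prev <= max_recovery:
            run += 1                      # effort continues the current run
        else:
            bouts += run >= min_efforts   # close the run; count it if long enough
            run = 1
    return bouts + (run >= min_efforts)   # account for the final run

# Example: three efforts 30 s apart form one bout; the later pair does not.
print(count_rhaa_bouts([10, 40, 70, 300, 330]))  # -> 1
```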

  8. Message Framing and Physical Activity Promotion in Colorectal Cancer Survivors.

    PubMed

    Hirschey, Rachel; Lipkus, Isaac; Jones, Lee; Mantyh, Christopher; Sloane, Richard; Demark-Wahnefried, Wendy

    2016-11-01

    Objective: To test effects of gain-framed versus loss-framed mailed brochures on increasing physical activity (PA) among colorectal cancer (CRC) survivors. Design: Randomized trial with repeated measures at baseline, 1 month, and 12 months postintervention. Setting: Mail recruitment from tumor registries. Sample: 148 inactive CRC survivors who had completed primary therapy. Methods: PA and constructs from the Theory of Planned Behavior (TPB) were assessed at baseline, 1 month, and 12 months. Participants were randomized to receive pamphlets describing PA benefits (gain framed) or disadvantages of not being physically active (loss framed). Baseline characteristics were compared using descriptive statistics. Repeated measures linear models were used to test PA changes. Main outcome measures: Minutes of PA and TPB constructs. Findings: Significant PA increases were observed in both study arms. Results did not differ by message frame. At one month, about 25% of previously inactive participants increased activity to national recommendations. Those who increased PA compared to those who did not had higher baseline scores on subjective norms, perceived behavioral control, and PA intentions. Conclusions: Independent of message framing, mailed brochures are highly effective in producing within-subject short- and long-term increases in PA. Implications: CRC survivors may increase short- and long-term levels of PA by receiving inexpensive print brochures.

  9. Proton and metal ion binding to natural organic polyelectrolytes-I. Studies with synthetic model compounds

    USGS Publications Warehouse

    Marinsky, J.A.; Reddy, M.M.

    1984-01-01

    A unified physico-chemical model, based on a modified Henderson-Hasselbalch equation, for the analysis of ion complexation reactions involving charged polymeric systems is presented and verified. In this model pH = pKa + p(ΔKa) + log(α/(1 − α)), where Ka is the intrinsic acid dissociation constant of the ionizable functional groups on the polymer, ΔKa is the deviation of the intrinsic constant due to electrostatic interaction between the hydrogen ion and the polyanion, and alpha (α) is the polyacid degree of ionization. Using this approach pKa values for repeating acidic units of polyacrylic (PAA) and polymethacrylic (PMA) acids were found to be 4.25 ± 0.03 and 4.8 ± 0.1, respectively. The polyion electrostatic deviation term derived from the potentiometric titration data (i.e. p(ΔKa)) is used to calculate the metal ion concentration at the complexation site on the surface of the polyanion. Intrinsic cobalt-polycarboxylate binding constants (7.5 for PAA and 5.6 for PMA), obtained using this procedure, are consistent with the range of published binding constants for cobalt-monomer carboxylate complexes. In two-phase systems incorporation of a Donnan membrane potential term allows determination of the intrinsic pKa of a cross-linked PMA gel, pKa = 4.83, in excellent agreement with the value obtained for the linear polyelectrolyte and the monomer. Similarly, the intrinsic stability constant for cobalt ion binding to a PMA gel (βCoPMA+ = 11) was found to be in agreement with the linear polyelectrolyte analogue and the published data for cobalt-carboxylate monodentate complexes. © 1984.
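
    The modified Henderson-Hasselbalch relation above can be inverted for the degree of ionization when pKa and the electrostatic term p(ΔKa) are treated as known at the conditions of interest. The sketch below does exactly that; the pH and p(ΔKa) values in the example are assumptions for illustration.

```python
# Minimal sketch of the modified Henderson-Hasselbalch relation:
# pH = pKa + p(dKa) + log10(alpha / (1 - alpha)), solved for alpha.
def degree_of_ionization(pH, pKa, p_delta_Ka):
    exponent = pH - pKa - p_delta_Ka
    return 1.0 / (1.0 + 10.0 ** (-exponent))   # logistic inversion of the log term

# Example with the reported intrinsic pKa of polyacrylic acid (4.25) and an
# assumed electrostatic correction p(dKa) of 0.5 at pH 5.5:
print(round(degree_of_ionization(5.5, 4.25, 0.5), 3))   # ~0.849
```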

  10. The Effects of Differential Learning and Traditional Learning Trainings on Technical Development of Football Players

    ERIC Educational Resources Information Center

    Bozkurt, Sinan

    2018-01-01

    There are several different methods of learning motor skills, like traditional (linear) and differential (nonlinear) learning training. The traditional motor learning approach proposes that learners improve a skill just by repeating it. According to the teaching principles, exercises are selected along continua from easy to hard and from simple to…

  11. Estimation of inhalation flow profile using audio-based methods to assess inhaler medication adherence.

    PubMed

    Taylor, Terence E; Lacalle Muls, Helena; Costello, Richard W; Reilly, Richard B

    2018-01-01

    Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations, however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been used to calibrate a single model while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity in the presence of noise. Estimating inhaler inhalation flow profiles using audio-based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence.
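
    The power-law calibration described above (flow proportional to a power of the acoustic envelope) can be fitted as a straight line in log-log space. The sketch below uses simulated envelope and flow signals with invented constants; it is not the authors' algorithm or data.

```python
# Minimal sketch (simulated signals) of the power law calibration
# flow = a * envelope**b, fitted as a straight line in log-log space.
import numpy as np

rng = np.random.default_rng(4)
envelope = rng.uniform(0.05, 1.0, 200)                             # acoustic amplitude envelope
flow = 120 * envelope ** 0.6 * np.exp(rng.normal(0, 0.05, 200))    # flow (L/min), assumed model

b, log_a = np.polyfit(np.log(envelope), np.log(flow), 1)           # slope, intercept
a = np.exp(log_a)

flow_est = a * envelope ** b
accuracy = 100 * (1 - np.mean(np.abs(flow_est - flow) / flow))
print(f"a = {a:.1f}, b = {b:.2f}, mean flow estimation accuracy ~ {accuracy:.1f}%")
```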

  12. Estimation of inhalation flow profile using audio-based methods to assess inhaler medication adherence

    PubMed Central

    Lacalle Muls, Helena; Costello, Richard W.; Reilly, Richard B.

    2018-01-01

    Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations, however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been used to calibrate a single model while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity in the presence of noise. Estimating inhaler inhalation flow profiles using audio-based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence. PMID:29346430

  13. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    PubMed

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from an MGLMM provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
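
    The separate-models variant discussed above (fit one mixed model per outcome, then associate the resulting empirical Bayes predictors) can be sketched with statsmodels. The simulation below generates two longitudinal outcomes with correlated subject-level intercepts and recovers that correlation from the per-outcome random-effect predictions; the names and parameter values are assumptions, and this is not the joint MGLMM itself.

```python
# Minimal sketch: separate univariate mixed models per outcome, then a
# second-stage correlation of the empirical Bayes (BLUP) intercepts (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_subj, n_obs = 100, 6
subj = np.repeat(np.arange(n_subj), n_obs)
time = np.tile(np.arange(n_obs), n_subj)
u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n_subj)  # correlated subject effects

df = pd.DataFrame({
    "subject": subj,
    "time": time,
    "y1": 5 + 0.3 * time + u[subj, 0] + rng.normal(0, 1, subj.size),
    "y2": 2 - 0.1 * time + u[subj, 1] + rng.normal(0, 1, subj.size),
})

eb = {}
for outcome in ("y1", "y2"):
    fit = smf.mixedlm(f"{outcome} ~ time", df, groups=df["subject"]).fit()
    # one empirical Bayes intercept per subject, in group order
    eb[outcome] = np.array([float(re.iloc[0]) for re in fit.random_effects.values()])

print("correlation of empirical Bayes intercepts:",
      round(np.corrcoef(eb["y1"], eb["y2"])[0, 1], 2))
```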

  14. Prediction and repeatability of milk coagulation properties and curd-firming modeling parameters of ovine milk using Fourier-transform infrared spectroscopy and Bayesian models.

    PubMed

    Ferragina, A; Cipolat-Gotet, C; Cecchinato, A; Pazzola, M; Dettori, M L; Vacca, G M; Bittante, G

    2017-05-01

    The aim of this study was to apply Bayesian models to the Fourier-transform infrared spectroscopy spectra of individual sheep milk samples to derive calibration equations to predict traditional and modeled milk coagulation properties (MCP), and to assess the repeatability of MCP measures and their predictions. Data consisted of 1,002 individual milk samples collected from Sarda ewes reared in 22 farms in the region of Sardinia (Italy) for which MCP and modeled curd-firming parameters were available. Two milk samples were taken from 87 ewes and analyzed with the aim of estimating repeatability, whereas a single sample was taken from the other 915 ewes. Therefore, a total of 1,089 analyses were performed. For each sample, 2 spectra in the infrared region 5,011 to 925 cm -1 were available and averaged before data analysis. BayesB models were used to calibrate equations for each of the traits. Prediction accuracy was estimated for each trait and model using 20 replicates of a training-testing validation procedure. The repeatability of MCP measures and their predictions were also compared. The correlations between measured and predicted traits, in the external validation, were always higher than 0.5 (0.88 for rennet coagulation time). We confirmed that the most important element for finding the prediction accuracy is the repeatability of the gold standard analyses used for building calibration equations. Repeatability measures of the predicted traits were generally high (≥95%), even for those traits with moderate analytical repeatability. Our results show that Bayesian models applied to Fourier-transform infrared spectra are powerful tools for cheap and rapid prediction of important traits in ovine milk and, compared with other methods, could help in the interpretation of results. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
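
    As a rough stand-in for the BayesB calibrations described above, the sketch below fits a simple Bayesian ridge regression (scikit-learn's BayesianRidge, not BayesB) to simulated spectra and reports the validation correlation between measured and predicted values of a coagulation trait. The dimensions and noise levels are assumptions.

```python
# Minimal sketch (simulated spectra; BayesianRidge as a simple stand-in for BayesB):
# calibrate a milk coagulation trait from spectra and check held-out accuracy.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(7)
n, p = 1000, 300                                    # samples, spectral wavenumbers (assumed)
X = rng.normal(size=(n, p))
w = np.zeros(p)
w[rng.choice(p, 15, replace=False)] = rng.normal(0, 1, 15)
y = X @ w + rng.normal(0, 1.0, n)                   # e.g. rennet coagulation time (scaled)

train, test = slice(0, 800), slice(800, None)
model = BayesianRidge().fit(X[train], y[train])
r = np.corrcoef(y[test], model.predict(X[test]))[0, 1]
print(f"validation correlation between measured and predicted: {r:.2f}")
```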

  15. Brain Vulnerability to Repeated Blast Overpressure and Polytrauma

    DTIC Science & Technology

    2015-10-01

    characterization of the mouse model of repeated blast also found no cumulative effect of repeated blast on cortical levels of reactive oxygen species [39]. ...overpressure in rats to investigate the cumulative effects of multiple blast exposures on neurologic status, neurobehavioral function, and brain... preclinical model of blast overpressure in rats to investigate the cumulative effects of multiple blast exposures using neurological, neurochemical

  16. The Relationship Between Oxygen Reserve Index and Arterial Partial Pressure of Oxygen During Surgery

    PubMed Central

    Dorotta, Ihab L.; Wells, Briana; Juma, David; Applegate, Patricia M.

    2016-01-01

    BACKGROUND: The use of intraoperative pulse oximetry (Spo2) enhances hypoxia detection and is associated with fewer perioperative hypoxic events. However, Spo2 may be reported as 98% when arterial partial pressure of oxygen (Pao2) is as low as 70 mm Hg. Therefore, Spo2 may not provide advance warning of falling arterial oxygenation until Pao2 approaches this level. Multiwave pulse co-oximetry can provide a calculated oxygen reserve index (ORI) that may add to information from pulse oximetry when Spo2 is >98%. This study evaluates the ORI to Pao2 relationship during surgery. METHODS: We studied patients undergoing scheduled surgery in which arterial catheterization and intraoperative arterial blood gas analysis were planned. Data from multiple pulse co-oximetry sensors on each patient were continuously collected and stored on a research computer. Regression analysis was used to compare ORI with Pao2 obtained from each arterial blood gas measurement and changes in ORI with changes in Pao2 from sequential measurements. Linear mixed-effects regression models for repeated measures were then used to account for within-subject correlation across the repeatedly measured Pao2 and ORI and for the unequal time intervals of Pao2 determination over elapsed surgical time. Regression plots were inspected for ORI values corresponding to Pao2 of 100 and 150 mm Hg. ORI and Pao2 were compared using mixed-effects models with a subject-specific random intercept. RESULTS: ORI values and Pao2 measurements were obtained from intraoperative data collected from 106 patients. Regression analysis showed that the ORI to Pao2 relationship was stronger for Pao2 to 240 mm Hg (r2 = 0.536) than for Pao2 over 240 mm Hg (r2 = 0.0016). Measured Pao2 was ≥100 mm Hg for all ORI over 0.24. Measured Pao2 was ≥150 mm Hg in 96.6% of samples when ORI was over 0.55. A random intercept variance component linear mixed-effects model for repeated measures indicated that Pao2 was significantly related to ORI (β[95% confidence interval] = 0.002 [0.0019–0.0022]; P < 0.0001). A similar analysis indicated a significant relationship between change in Pao2 and change in ORI (β [95% confidence interval] = 0.0044 [0.0040–0.0048]; P < 0.0001). CONCLUSIONS: These findings suggest that ORI >0.24 can distinguish Pao2 ≥100 mm Hg when Spo2 is over 98%. Similarly, ORI > 0.55 appears to be a threshold to distinguish Pao2 ≥150 mm Hg. The usefulness of these values should be evaluated prospectively. Decreases in ORI to near 0.24 may provide advance indication of falling Pao2 approaching 100 mm Hg when Spo2 is >98%. The clinical utility of interventions based on continuous ORI monitoring should be studied prospectively. PMID:27007078

  17. The Relationship Between Oxygen Reserve Index and Arterial Partial Pressure of Oxygen During Surgery.

    PubMed

    Applegate, Richard L; Dorotta, Ihab L; Wells, Briana; Juma, David; Applegate, Patricia M

    2016-09-01

    The use of intraoperative pulse oximetry (SpO2) enhances hypoxia detection and is associated with fewer perioperative hypoxic events. However, SpO2 may be reported as 98% when arterial partial pressure of oxygen (PaO2) is as low as 70 mm Hg. Therefore, SpO2 may not provide advance warning of falling arterial oxygenation until PaO2 approaches this level. Multiwave pulse co-oximetry can provide a calculated oxygen reserve index (ORI) that may add to information from pulse oximetry when SpO2 is >98%. This study evaluates the ORI to PaO2 relationship during surgery. We studied patients undergoing scheduled surgery in which arterial catheterization and intraoperative arterial blood gas analysis were planned. Data from multiple pulse co-oximetry sensors on each patient were continuously collected and stored on a research computer. Regression analysis was used to compare ORI with PaO2 obtained from each arterial blood gas measurement and changes in ORI with changes in PaO2 from sequential measurements. Linear mixed-effects regression models for repeated measures were then used to account for within-subject correlation across the repeatedly measured PaO2 and ORI and for the unequal time intervals of PaO2 determination over elapsed surgical time. Regression plots were inspected for ORI values corresponding to PaO2 of 100 and 150 mm Hg. ORI and PaO2 were compared using mixed-effects models with a subject-specific random intercept. ORI values and PaO2 measurements were obtained from intraoperative data collected from 106 patients. Regression analysis showed that the ORI to PaO2 relationship was stronger for PaO2 to 240 mm Hg (r = 0.536) than for PaO2 over 240 mm Hg (r = 0.0016). Measured PaO2 was ≥100 mm Hg for all ORI over 0.24. Measured PaO2 was ≥150 mm Hg in 96.6% of samples when ORI was over 0.55. A random intercept variance component linear mixed-effects model for repeated measures indicated that PaO2 was significantly related to ORI (β[95% confidence interval] = 0.002 [0.0019-0.0022]; P < 0.0001). A similar analysis indicated a significant relationship between change in PaO2 and change in ORI (β [95% confidence interval] = 0.0044 [0.0040-0.0048]; P < 0.0001). These findings suggest that ORI >0.24 can distinguish PaO2 ≥100 mm Hg when SpO2 is over 98%. Similarly, ORI > 0.55 appears to be a threshold to distinguish PaO2 ≥150 mm Hg. The usefulness of these values should be evaluated prospectively. Decreases in ORI to near 0.24 may provide advance indication of falling PaO2 approaching 100 mm Hg when SpO2 is >98%. The clinical utility of interventions based on continuous ORI monitoring should be studied prospectively.

  18. Use of random regression to estimate genetic parameters of temperament across an age continuum in a crossbred cattle population.

    PubMed

    Littlejohn, B P; Riley, D G; Welsh, T H; Randel, R D; Willard, S T; Vann, R C

    2018-05-12

    The objective was to estimate genetic parameters of temperament in beef cattle across an age continuum. The population consisted predominantly of Brahman-British crossbred cattle. Temperament was quantified by: 1) pen score (PS), the reaction of a calf to a single experienced evaluator on a scale of 1 to 5 (1 = calm, 5 = excitable); 2) exit velocity (EV), the rate (m/sec) at which a calf traveled 1.83 m upon exiting a squeeze chute; and 3) temperament score (TS), the numerical average of PS and EV. Covariates included days of age and proportion of Bos indicus in the calf and dam. Random regression models included the fixed effects determined from the repeated measures models, except for calf age. Likelihood ratio tests were used to determine the most appropriate random structures. In repeated measures models, the proportion of Bos indicus in the calf was positively related with each calf temperament trait (0.41 ± 0.20, 0.85 ± 0.21, and 0.57 ± 0.18 for PS, EV, and TS, respectively; P < 0.01). There was an effect of contemporary group (combinations of season, year of birth, and management group) and dam age (P < 0.001) in all models. From repeated records analyses, estimates of heritability (h2) were 0.34 ± 0.04, 0.31 ± 0.04, and 0.39 ± 0.04, while estimates of permanent environmental variance as a proportion of the phenotypic variance (c2) were 0.30 ± 0.04, 0.31 ± 0.03, and 0.34 ± 0.04 for PS, EV, and TS, respectively. Quadratic additive genetic random regressions on Legendre polynomials of age were significant for all traits. Quadratic permanent environmental random regressions were significant for PS and TS, but linear permanent environmental random regressions were significant for EV. Random regression results suggested that these components change across the age dimension of these data. There appeared to be an increasing influence of permanent environmental effects and decreasing influence of additive genetic effects corresponding to increasing calf age for EV, and to a lesser extent for TS. Inherited temperament may be overcome by accumulating environmental stimuli with increases in age, especially after weaning.
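
    As a data-preparation illustration of the random-regression setup, the sketch below builds quadratic Legendre polynomial covariates of calf age of the kind used as random-regression terms. The ages and the standardization to [-1, 1] are hypothetical; the genetic analysis itself would be run in dedicated mixed-model software.

        import numpy as np

        def legendre_age_covariates(age_days, deg=2):
            """Map ages onto [-1, 1] and return Legendre covariates P0..P_deg
            (one row per record, one column per polynomial order)."""
            a_min, a_max = float(age_days.min()), float(age_days.max())
            x = 2.0 * (age_days - a_min) / (a_max - a_min) - 1.0
            return np.polynomial.legendre.legvander(x, deg)

        ages = np.array([150.0, 210.0, 365.0, 550.0])  # hypothetical ages in days
        Z = legendre_age_covariates(ages, deg=2)       # columns enter the quadratic random regression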

  19. Effect Sizes for Growth-Modeling Analysis for Controlled Clinical Trials in the Same Metric as for Classical Analysis

    PubMed Central

    Feingold, Alan

    2009-01-01

    The use of growth-modeling analysis (GMA)--including Hierarchical Linear Models, Latent Growth Models, and General Estimating Equations--to evaluate interventions in psychology, psychiatry, and prevention science has grown rapidly over the last decade. However, an effect size associated with the difference between the trajectories of the intervention and control groups that captures the treatment effect is rarely reported. This article first reviews two classes of formulas for effect sizes associated with classical repeated-measures designs that use the standard deviation of either change scores or raw scores for the denominator. It then broadens the scope to subsume GMA, and demonstrates that the independent groups, within-subjects, pretest-posttest control-group, and GMA designs all estimate the same effect size when the standard deviation of raw scores is uniformly used. Finally, it is shown that the correct effect size for treatment efficacy in GMA--the difference between the estimated means of the two groups at end of study (determined from the coefficient for the slope difference and length of study) divided by the baseline standard deviation--is not reported in clinical trials. PMID:19271847
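
    The end-of-study effect size described here reduces to a one-line calculation. The sketch below restates that definition with illustrative numbers; the values are not taken from any particular trial.

        def gma_effect_size(slope_difference, study_length, sd_baseline):
            """Growth-model effect size: model-implied between-group difference
            at end of study (slope difference x study length) divided by the
            baseline raw-score standard deviation."""
            return (slope_difference * study_length) / sd_baseline

        # Illustrative numbers: groups diverge by 0.25 points per month over a
        # 6-month trial, with a baseline SD of 3.0, giving d = 0.5.
        d = gma_effect_size(0.25, 6, 3.0)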

  20. Measurement uncertainty and feasibility study of a flush airdata system for a hypersonic flight experiment

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.

    1994-01-01

    Presented is a feasibility and error analysis for a hypersonic flush airdata system on a hypersonic flight experiment (HYFLITE). HYFLITE heating loads make intrusive airdata measurement impractical. Although this analysis is specifically for the HYFLITE vehicle and trajectory, the problems analyzed are generally applicable to hypersonic vehicles. A layout of the flush-port matrix is shown. Surface pressures are related to airdata parameters using a simple aerodynamic model. The model is linearized using small perturbations and inverted using nonlinear least-squares. Effects of various error sources on the overall uncertainty are evaluated using an error simulation. Error sources modeled include boundary-layer/viscous interactions, pneumatic lag, thermal transpiration in the sensor pressure tubing, misalignment in the matrix layout, thermal warping of the vehicle nose, sampling resolution, and transducer error. Using simulated pressure data as input to the estimation algorithm, effects caused by various error sources are analyzed by comparing estimator outputs with the original trajectory. To obtain ensemble averages, the simulation is run repeatedly and output statistics are compiled. Output errors resulting from the various error sources are presented as a function of Mach number. Final uncertainties with all modeled error sources included are presented as a function of Mach number.
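
    The error-budget approach described above (perturb the simulated pressures, rerun the nonlinear least-squares inversion, and compile output statistics over the ensemble) can be illustrated generically. Everything in the snippet is a placeholder: h(x) stands in for the flush-port aerodynamic model, and the state vector, port angles, and noise level are invented for the example.

        import numpy as np
        from scipy.optimize import least_squares

        def h(x):
            """Placeholder forward model mapping an airdata state to port pressures."""
            qc, alpha = x                                  # toy state: pressure scale, flow angle
            thetas = np.deg2rad([0.0, 10.0, 20.0, 30.0])   # hypothetical port incidence angles
            return qc * np.cos(thetas - alpha) ** 2

        rng = np.random.default_rng(0)
        x_true = np.array([5000.0, np.deg2rad(2.0)])
        p_true = h(x_true)

        estimates = []
        for _ in range(500):                               # Monte Carlo ensemble of perturbed runs
            p_meas = p_true + rng.normal(0.0, 5.0, p_true.shape)        # arbitrary transducer noise
            sol = least_squares(lambda x: h(x) - p_meas, x0=x_true * 1.05)
            estimates.append(sol.x)
        sigma = np.std(np.array(estimates) - x_true, axis=0)            # output uncertainty per state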

  1. Multivariate-t nonlinear mixed models with application to censored multi-outcome AIDS studies.

    PubMed

    Lin, Tsung-I; Wang, Wan-Lun

    2017-10-01

    In multivariate longitudinal HIV/AIDS studies, multi-outcome repeated measures on each patient over time may contain outliers, and the viral loads are often subject to an upper or lower limit of detection depending on the quantification assays. In this article, we consider an extension of the multivariate nonlinear mixed-effects model by adopting a joint multivariate-t distribution for random effects and within-subject errors and taking the censoring information of multiple responses into account. The proposed model is called the multivariate-t nonlinear mixed-effects model with censored responses (MtNLMMC), allowing for analyzing multi-outcome longitudinal data exhibiting nonlinear growth patterns with censorship and fat-tailed behavior. Utilizing the Taylor-series linearization method, a pseudo-data version of the expectation conditional maximization either (ECME) algorithm is developed for iteratively carrying out maximum likelihood estimation. We illustrate our techniques with two data examples from HIV/AIDS studies. Experimental results signify that the MtNLMMC performs favorably compared to its Gaussian analogue and some existing approaches. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Using GOMS models and hypertext to create representations of medical procedures for online display

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo; Halgren, Shannon; Gosbee, John; Rudisill, Marianne

    1991-01-01

    This study investigated two methods to improve organization and presentation of computer-based medical procedures. A literature review suggested that the GOMS (goals, operators, methods, and selection rules) model can assist in rigorous task analysis, which can then help generate initial design ideas for the human-computer interface. GOMS models are hierarchical in nature, so this study also investigated the effect of hierarchical, hypertext interfaces. We used a 2 x 2 between-subjects design, including the following independent variables: procedure organization - GOMS model based vs. medical-textbook based; navigation type - hierarchical vs. linear (booklike). After naive subjects studied the online procedures, measures were taken of their memory for the content and the organization of the procedures. This design was repeated for two medical procedures. For one procedure, subjects who studied GOMS-based and hierarchical procedures remembered more about the procedures than other subjects. The results for the other procedure were less clear. However, data for both procedures showed a 'GOMSification effect'. That is, when asked to do a free recall of a procedure, subjects who had studied a textbook procedure often recalled key information in a location inconsistent with the procedure they actually studied, but consistent with the GOMS-based procedure.

  3. Characterization of single particle aerosols by elastic light scattering at multiple wavelengths

    NASA Astrophysics Data System (ADS)

    Lane, P. A.; Hart, M. B.; Jain, V.; Tucker, J. E.; Eversole, J. D.

    2018-03-01

    We describe a system to characterize individual aerosol particles using stable and repeatable measurement of elastic light scattering. The method employs a linear electrodynamic quadrupole (LEQ) particle trap. Charged particles, continuously injected by electrospray into this system, are confined to move vertically along the stability line in the center of the LEQ past a point where they are optically interrogated. Light scattered in the near forward direction was measured at three different wavelengths using time-division multiplexed collinear laser beams. We validated our method by comparing measured silica microsphere data for four selected diameters (0.7, 1.0, 1.5 and 2.0 μm) to a model of collected scattered light intensities based upon Lorenz-Mie scattering theory. Scattered light measurements at the different wavelengths are correlated, allowing us to distinguish and classify inhomogeneous particles.

  4. Multiple spatially localized dynamical states in friction-excited oscillator chains

    NASA Astrophysics Data System (ADS)

    Papangelo, A.; Hoffmann, N.; Grolet, A.; Stender, M.; Ciavarella, M.

    2018-03-01

    Friction-induced vibrations are known to affect many engineering applications. Here, we study a chain of friction-excited oscillators with nearest neighbor elastic coupling. The excitation is provided by a moving belt which moves at a certain velocity vd while friction is modelled with an exponentially decaying friction law. It is shown that in a certain range of driving velocities, multiple stable spatially localized solutions exist whose dynamical behavior (i.e. regular or irregular) depends on the number of oscillators involved in the vibration. The classical non-repeatability of friction-induced vibration problems can be interpreted in light of those multiple stable dynamical states. These states are found within a "snaking-like" bifurcation pattern. Contrary to the classical Anderson localization phenomenon, here the underlying linear system is perfectly homogeneous and localization is solely triggered by the friction nonlinearity.
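
    For readers who want to reproduce this kind of behaviour qualitatively, the sketch below integrates a small chain of belt-driven oscillators with nearest-neighbour elastic coupling and an exponentially decaying kinetic friction law. All parameter values are arbitrary, the chain is made periodic for brevity, and the tanh smoothing of the friction force near zero relative velocity is a simplification rather than the authors' formulation.

        import numpy as np
        from scipy.integrate import solve_ivp

        N, m, k, kc, c = 10, 1.0, 1.0, 0.5, 0.02       # chain size and arbitrary parameters
        vd = 0.3                                       # belt (driving) velocity
        mu_d, mu_s, v0, Fn = 0.3, 0.6, 0.5, 1.0        # decaying friction law parameters

        def friction(v_rel):
            # mu(v) = mu_d + (mu_s - mu_d) * exp(-|v|/v0), smoothed through v_rel = 0
            mu = mu_d + (mu_s - mu_d) * np.exp(-np.abs(v_rel) / v0)
            return -Fn * mu * np.tanh(v_rel / 1e-3)

        def rhs(t, y):
            x, v = y[:N], y[N:]
            coupling = kc * (np.roll(x, 1) - 2.0 * x + np.roll(x, -1))
            a = (-k * x - c * v + coupling + friction(v - vd)) / m
            return np.concatenate([v, a])

        y0 = np.concatenate([0.01 * np.random.default_rng(1).standard_normal(N), np.zeros(N)])
        sol = solve_ivp(rhs, (0.0, 200.0), y0, max_step=0.01)   # inspect sol.y for localized states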

  5. Earthquake source parameters determined by the SAFOD Pilot Hole seismic array

    USGS Publications Warehouse

    Imanishi, K.; Ellsworth, W.L.; Prejean, S.G.

    2004-01-01

    We estimate the source parameters of #3 microearthquakes by jointly analyzing seismograms recorded by the 32-level, 3-component seismic array installed in the SAFOD Pilot Hole. We applied an inversion procedure to displacement amplitude spectra to estimate the spectral parameters of the omega-square model (spectral level and corner frequency) and Q. Because we expect spectral parameters and Q to vary slowly with depth in the well, we impose a smoothness constraint on those parameters as a function of depth using a linear first-difference operator. This method correctly resolves corner frequency and Q, which leads to a more accurate estimation of source parameters than can be obtained from single sensors. The stress drop of one example of the SAFOD target repeating earthquake falls in the range of typical tectonic earthquakes. Copyright 2004 by the American Geophysical Union.
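
    For reference, the sketch below writes out the spectral model usually meant by an omega-square source spectrum with whole-path attenuation: a flat long-period level, a corner frequency, and an exponential Q term. The exact parameterization and the joint depth-smoothed inversion used by the authors are not reproduced here; the numbers are illustrative.

        import numpy as np

        def displacement_spectrum(f, omega0, fc, Q, travel_time):
            """Omega-square model with attenuation:
            u(f) = Omega0 / (1 + (f/fc)^2) * exp(-pi * f * t / Q)."""
            return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * travel_time / Q)

        f = np.linspace(1.0, 200.0, 400)   # frequency in Hz
        u = displacement_spectrum(f, omega0=1e-6, fc=30.0, Q=200.0, travel_time=1.5)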

  6. C9ORF72 hexanucleotide repeat exerts toxicity in a stable, inducible motor neuronal cell model, which is rescued by partial depletion of Pten

    PubMed Central

    Stopford, Matthew J.; Higginbottom, Adrian; Hautbergue, Guillaume M.; Cooper-Knock, Johnathan; Mulcahy, Padraig J.; De Vos, Kurt J.; Renton, Alan E.; Pliner, Hannah; Calvo, Andrea; Chio, Adriano; Traynor, Bryan J.; Azzouz, Mimoun; Heath, Paul R.; Kirby, Janine

    2017-01-01

    Abstract Amyotrophic lateral sclerosis (ALS) is a devastating and incurable neurodegenerative disease, characterised by progressive failure of the neuromuscular system. A (G4C2)n repeat expansion in C9ORF72 is the most common genetic cause of ALS and frontotemporal dementia (FTD). To date, the balance of evidence indicates that the (G4C2)n repeat causes toxicity and neurodegeneration via a gain-of-toxic function mechanism; either through direct RNA toxicity or through the production of toxic aggregating dipeptide repeat proteins. Here, we have generated a stable and isogenic motor neuronal NSC34 cell model with inducible expression of a (G4C2)102 repeat, to investigate the gain-of-toxic function mechanisms. The expression of the (G4C2)102 repeat produces RNA foci and also undergoes RAN translation. In addition, the expression of the (G4C2)102 repeat shows cellular toxicity. Through comparison of transcriptomic data from the cellular model with laser-captured spinal motor neurons from C9ORF72-ALS cases, we also demonstrate that the PI3K/Akt cell survival signalling pathway is dysregulated in both systems. Furthermore, partial knockdown of Pten rescues the toxicity observed in the NSC34 (G4C2)102 cellular gain-of-toxic function model of C9ORF72-ALS. Our data indicate that PTEN may provide a potential therapeutic target to ameliorate toxic effects of the (G4C2)n repeat. PMID:28158451

  7. Use of Repeated Blood Pressure and Cholesterol Measurements to Improve Cardiovascular Disease Risk Prediction: An Individual-Participant-Data Meta-Analysis

    PubMed Central

    Barrett, Jessica; Pennells, Lisa; Sweeting, Michael; Willeit, Peter; Di Angelantonio, Emanuele; Gudnason, Vilmundur; Nordestgaard, Børge G.; Psaty, Bruce M; Goldbourt, Uri; Best, Lyle G; Assmann, Gerd; Salonen, Jukka T; Nietert, Paul J; Verschuren, W. M. Monique; Brunner, Eric J; Kronmal, Richard A; Salomaa, Veikko; Bakker, Stephan J L; Dagenais, Gilles R; Sato, Shinichi; Jansson, Jan-Håkan; Willeit, Johann; Onat, Altan; de la Cámara, Agustin Gómez; Roussel, Ronan; Völzke, Henry; Dankner, Rachel; Tipping, Robert W; Meade, Tom W; Donfrancesco, Chiara; Kuller, Lewis H; Peters, Annette; Gallacher, John; Kromhout, Daan; Iso, Hiroyasu; Knuiman, Matthew; Casiglia, Edoardo; Kavousi, Maryam; Palmieri, Luigi; Sundström, Johan; Davis, Barry R; Njølstad, Inger; Couper, David; Danesh, John; Thompson, Simon G; Wood, Angela

    2017-01-01

    Abstract The added value of incorporating information from repeated blood pressure and cholesterol measurements to predict cardiovascular disease (CVD) risk has not been rigorously assessed. We used data on 191,445 adults from the Emerging Risk Factors Collaboration (38 cohorts from 17 countries with data encompassing 1962–2014) with more than 1 million measurements of systolic blood pressure, total cholesterol, and high-density lipoprotein cholesterol. Over a median 12 years of follow-up, 21,170 CVD events occurred. Risk prediction models using cumulative mean values of repeated measurements and summary measures from longitudinal modeling of the repeated measurements were compared with models using measurements from a single time point. Risk discrimination (C-index) and net reclassification were calculated, and changes in C-indices were meta-analyzed across studies. Compared with the single-time-point model, the cumulative means and longitudinal models increased the C-index by 0.0040 (95% confidence interval (CI): 0.0023, 0.0057) and 0.0023 (95% CI: 0.0005, 0.0042), respectively. Reclassification was also improved in both models; compared with the single-time-point model, overall net reclassification improvements were 0.0369 (95% CI: 0.0303, 0.0436) for the cumulative-means model and 0.0177 (95% CI: 0.0110, 0.0243) for the longitudinal model. In conclusion, incorporating repeated measurements of blood pressure and cholesterol into CVD risk prediction models slightly improves risk prediction. PMID:28549073

  8. Determining A Purely Symbolic Transfer Function from Symbol Streams: Theory and Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, Christopher H

    Transfer function modeling is a standard technique in classical Linear Time Invariant and Statistical Process Control. The work of Box and Jenkins was seminal in developing methods for identifying parameters associated with classical (r, s, k) transfer functions. Discrete event systems are often used for modeling hybrid control structures and high-level decision problems. Examples include discrete time, discrete strategy repeated games. For these games, a discrete transfer function in the form of an accurate hidden Markov model of input-output relations could be used to derive optimal response strategies. In this paper, we develop an algorithm for creating probabilistic Mealy machines that act as transfer function models for discrete event dynamic systems (DEDS). Our models are defined by three parameters, (l1, l2, k), just as the Box-Jenkins transfer function models. Here l1 is the maximal input history length to consider, l2 is the maximal output history length to consider, and k is the response lag. Using related results, we show that our Mealy machine transfer functions are optimal in the sense that they maximize the mutual information between the current known state of the DEDS and the next observed input/output pair.
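
    A back-of-the-envelope version of the estimation step is sketched below: given paired input/output symbol streams, it tabulates the empirical distribution of the next output conditioned on the last l1 inputs (at lag k) and the last l2 outputs. It is only a counting illustration of the general idea with made-up streams, not the authors' algorithm or their optimality construction.

        from collections import Counter, defaultdict

        def estimate_io_model(inputs, outputs, l1=2, l2=1, k=1):
            """Empirical P(next output | last l1 inputs at lag k, last l2 outputs)."""
            counts = defaultdict(Counter)
            for t in range(max(l1 + k, l2), len(outputs)):
                in_hist = tuple(inputs[t - k - l1 + 1 : t - k + 1])
                out_hist = tuple(outputs[t - l2 : t])
                counts[(in_hist, out_hist)][outputs[t]] += 1
            # Normalize counts into conditional probability tables per history state
            return {state: {sym: n / sum(ctr.values()) for sym, n in ctr.items()}
                    for state, ctr in counts.items()}

        ins, outs = list("ababbabaabab"), list("xyxyyxyxxyxy")   # toy symbol streams
        model = estimate_io_model(ins, outs, l1=1, l2=1, k=1)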

  9. On estimating probability of presence from use-availability or presence-background data.

    PubMed

    Phillips, Steven J; Elith, Jane

    2013-06-01

    A fundamental ecological modeling task is to estimate the probability that a species is present in (or uses) a site, conditional on environmental variables. For many species, available data consist of "presence" data (locations where the species [or evidence of it] has been observed), together with "background" data, a random sample of available environmental conditions. Recently published papers disagree on whether probability of presence is identifiable from such presence-background data alone. This paper aims to resolve the disagreement, demonstrating that additional information is required. We defined seven simulated species representing various simple shapes of response to environmental variables (constant, linear, convex, unimodal, S-shaped) and ran five logistic model-fitting methods using 1000 presence samples and 10 000 background samples; the simulations were repeated 100 times. The experiment revealed a stark contrast between two groups of methods: those based on a strong assumption that species' true probability of presence exactly matches a given parametric form had highly variable predictions and much larger RMS error than methods that take population prevalence (the fraction of sites in which the species is present) as an additional parameter. For six species, the former group grossly under- or overestimated probability of presence. The cause was not model structure or choice of link function, because all methods were logistic with linear and, where necessary, quadratic terms. Rather, the experiment demonstrates that an estimate of prevalence is not just helpful, but is necessary (except in special cases) for identifying probability of presence. We therefore advise against use of methods that rely on the strong assumption, due to Lele and Keim (recently advocated by Royle et al.) and Lancaster and Imbens. The methods are fragile, and their strong assumption is unlikely to be true in practice. We emphasize, however, that we are not arguing against standard statistical methods such as logistic regression, generalized linear models, and so forth, none of which requires the strong assumption. If probability of presence is required for a given application, there is no panacea for lack of data. Presence-background data must be augmented with an additional datum, e.g., species' prevalence, to reliably estimate absolute (rather than relative) probability of presence.

  10. Relationship between changes in vasomotor symptoms and changes in menopause-specific quality of life and sleep parameters.

    PubMed

    Pinkerton, JoAnn V; Abraham, Lucy; Bushmakin, Andrew G; Cappelleri, Joseph C; Komm, Barry S

    2016-10-01

    This study characterizes and quantifies the relationship of vasomotor symptoms (VMS) of menopause with menopause-specific quality of life (MSQOL) and sleep parameters to help predict treatment outcomes and inform treatment decision-making. Data were derived from a 12-week randomized, double-blind, placebo-controlled phase 3 trial that evaluated effects of two doses of conjugated estrogens/bazedoxifene on VMS in nonhysterectomized postmenopausal women (N = 318, mean age = 53.39) experiencing at least seven moderate to severe hot flushes (HFs) per day or at least 50 per week. Repeated measures models were used to determine relationships between HF frequency and severity and outcomes on the Menopause-Specific Quality of Life questionnaire and the Medical Outcomes Study sleep scale. Sensitivity analyses were performed to check assumptions of linearity between VMS and outcomes. Frequency and severity of HFs showed approximately linear relationships with MSQOL and sleep parameters. Sensitivity analyses supported assumptions of linearity. The largest changes associated with a reduction of five HFs and a 0.5-point decrease in severity occurred in the Menopause-Specific Quality of Life vasomotor functioning domain (0.78 for number of HFs and 0.98 for severity) and the Medical Outcomes Study sleep disturbance (7.38 and 4.86) and sleep adequacy (-5.60 and -4.66) domains and the two overall sleep problems indices (SPI: 5.17 and 3.63; SPII: 5.82 and 3.83). Frequency and severity of HFs have an approximately linear relationship with MSQOL and sleep parameters-that is, improvements in HFs are associated with improvements in MSQOL and sleep. Such relationships may enable clinicians to predict changes in sleep and MSQOL expected from various VMS treatments.

  11. LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL

    NASA Technical Reports Server (NTRS)

    Duke, E. L.

    1994-01-01

    The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting both linearized engine effects, such as net thrust, torque, and gyroscopic effects and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case is input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of interest, or a full non-linear aerodynamic model as used in simulations. LINEAR is written in FORTRAN and has been implemented on a DEC VAX computer operating under VMS with a virtual memory requirement of approximately 296K of 8 bit bytes. Both an interactive and batch version are included. LINEAR was developed in 1988.
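
    The core operation described (extracting a linear state-space model from nonlinear dynamics about an analysis point) can be illustrated with a generic central-difference Jacobian. This is only a schematic with a placeholder dynamics function, not the LINEAR program's FORTRAN implementation or its aircraft equations of motion.

        import numpy as np

        def linearize(f, x0, u0, eps=1e-6):
            """Central-difference A = df/dx and B = df/du of xdot = f(x, u)
            evaluated at the analysis point (x0, u0)."""
            n, m = len(x0), len(u0)
            A, B = np.zeros((n, n)), np.zeros((n, m))
            for j in range(n):
                dx = np.zeros(n); dx[j] = eps
                A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2.0 * eps)
            for j in range(m):
                du = np.zeros(m); du[j] = eps
                B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2.0 * eps)
            return A, B

        # Placeholder dynamics (a toy second-order system, not an aircraft model).
        f = lambda x, u: np.array([x[1], -4.0 * x[0] - 0.7 * x[1] + 2.0 * u[0]])
        A, B = linearize(f, x0=np.zeros(2), u0=np.zeros(1))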

  12. Comparing Dynamic Treatment Regimes Using Repeated-Measures Outcomes: Modeling Considerations in SMART Studies

    PubMed Central

    Lu, Xi; Nahum-Shani, Inbal; Kasari, Connie; Lynch, Kevin G.; Oslin, David W.; Pelham, William E.; Fabiano, Gregory; Almirall, Daniel

    2016-01-01

    A dynamic treatment regime (DTR) is a sequence of decision rules, each of which recommends a treatment based on a patient’s past and current health status. Sequential, multiple assignment, randomized trials (SMARTs) are multi-stage trial designs that yield data specifically for building effective DTRs. Modeling the marginal mean trajectories of a repeated-measures outcome arising from a SMART presents challenges, because traditional longitudinal models used for randomized clinical trials do not take into account the unique design features of SMART. We discuss modeling considerations for various forms of SMART designs, emphasizing the importance of considering the timing of repeated measures in relation to the treatment stages in a SMART. For illustration, we use data from three SMART case studies with increasing level of complexity, in autism, child attention deficit hyperactivity disorder (ADHD), and adult alcoholism. In all three SMARTs we illustrate how to accommodate the design features along with the timing of the repeated measures when comparing DTRs based on mean trajectories of the repeated-measures outcome. PMID:26638988

  13. Comparing dynamic treatment regimes using repeated-measures outcomes: modeling considerations in SMART studies.

    PubMed

    Lu, Xi; Nahum-Shani, Inbal; Kasari, Connie; Lynch, Kevin G; Oslin, David W; Pelham, William E; Fabiano, Gregory; Almirall, Daniel

    2016-05-10

    A dynamic treatment regime (DTR) is a sequence of decision rules, each of which recommends a treatment based on a patient's past and current health status. Sequential, multiple assignment, randomized trials (SMARTs) are multi-stage trial designs that yield data specifically for building effective DTRs. Modeling the marginal mean trajectories of a repeated-measures outcome arising from a SMART presents challenges, because traditional longitudinal models used for randomized clinical trials do not take into account the unique design features of SMART. We discuss modeling considerations for various forms of SMART designs, emphasizing the importance of considering the timing of repeated measures in relation to the treatment stages in a SMART. For illustration, we use data from three SMART case studies with increasing level of complexity, in autism, child attention deficit hyperactivity disorder, and adult alcoholism. In all three SMARTs, we illustrate how to accommodate the design features along with the timing of the repeated measures when comparing DTRs based on mean trajectories of the repeated-measures outcome. Copyright © 2015 John Wiley & Sons, Ltd.

  14. New protocol for αAstree electronic tongue enabling full performance qualification according to ICH Q2.

    PubMed

    Pein, Miriam; Eckert, Carolin; Preis, Maren; Breitkreutz, Jörg

    2013-09-01

    Performance qualification (PQ) of taste sensing systems is mandatory for their use in pharmaceutical industry. According to ICH Q2 (R1) and a recent adaptation for taste sensing systems, non-specificity, log-linear relationships between the concentration of analytes and the sensor signal as well as a repeatability with relative standard deviation (RSD) values <4% were defined as basic requirements to pass a PQ. In the present work, the αAstree taste sensing system led to a successful PQ procedure by the use of recent sensor batches for pharmaceutical applications (sensor set #2) and a modified measurement protocol. Log-linear relationships between concentration and responses of each sensor were investigated for different bitter tasting active pharmaceutical ingredients (APIs). Using the new protocol, RSD values <2.1% were obtained in the repeatability study. Applying the visual evaluation approach, detection and quantitation limit could be determined for caffeine citrate with every sensor (LOD 0.05-0.5 mM, LOQ: 0.1-0.5 mM). In addition, the sensor set marketed for food applications (sensor set #5) was proven to show beneficial effects regarding the log-linear relationship between the concentration of quinine hydrochloride and the sensor signal. By the use of our proposed protocol, it is possible to implement the αAstree taste sensing system as a tool to assure quality control in the pharmaceutical industry. Copyright © 2013 Elsevier B.V. All rights reserved.
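
    The two acceptance criteria mentioned (repeatability expressed as relative standard deviation, and a log-linear concentration-response relationship) are simple to compute; the sketch below uses invented sensor readings purely to show the arithmetic.

        import numpy as np

        # Repeatability: RSD (%) of replicate sensor responses at one concentration
        replicates = np.array([101.2, 100.4, 102.1, 101.7, 100.9])   # hypothetical sensor readings
        rsd = 100.0 * replicates.std(ddof=1) / replicates.mean()      # compare with the RSD criterion

        # Log-linearity: regress the response on log10(concentration)
        conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0])                   # hypothetical mM
        resp = np.array([80.0, 95.0, 102.0, 118.0, 125.0])
        slope, intercept = np.polyfit(np.log10(conc), resp, 1)
        r = np.corrcoef(np.log10(conc), resp)[0, 1]                   # strength of the log-linear fit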

  15. Short communication: a repeated simian human immunodeficiency virus reverse transcriptase/herpes simplex virus type 2 cochallenge macaque model for the evaluation of microbicides.

    PubMed

    Kenney, Jessica; Derby, Nina; Aravantinou, Meropi; Kleinbeck, Kyle; Frank, Ines; Gettie, Agegnehu; Grasperge, Brooke; Blanchard, James; Piatak, Michael; Lifson, Jeffrey D; Zydowsky, Thomas M; Robbiani, Melissa

    2014-11-01

    Epidemiological studies suggest that prevalent herpes simplex virus type 2 (HSV-2) infection increases the risk of HIV acquisition, underscoring the need to develop coinfection models to evaluate promising prevention strategies. We previously established a single high-dose vaginal coinfection model of simian human immunodeficiency virus (SHIV)/HSV-2 in Depo-Provera (DP)-treated macaques. However, this model does not appropriately mimic women's exposure. Repeated limiting dose SHIV challenge models are now used routinely to test prevention strategies, yet, at present, there are no reports of a repeated limiting dose cochallenge model in which to evaluate products targeting HIV and HSV-2. Herein, we show that 20 weekly cochallenges with 2-50 TCID50 simian human immunodeficiency virus reverse transcriptase (SHIV-RT) and 10^7 pfu HSV-2 results in infection with both viruses (4/6 SHIV-RT, 6/6 HSV-2). The frequency and level of vaginal HSV-2 shedding were significantly greater in the repeated exposure model compared to the single high-dose model (p<0.0001). We used this new model to test the Council's on-demand microbicide gel, MZC, which is active against SHIV-RT in DP-treated macaques and HSV-2 and human papillomavirus (HPV) in mice. While MZC reduced SHIV and HSV-2 infections in our repeated limiting dose model when cochallenging 8 h after each gel application, a barrier effect of carrageenan (CG) that was not seen in DP-treated animals precluded evaluation of the significance of the antiviral activity of MZC. Both MZC and CG significantly (p<0.0001) reduced the frequency and level of vaginal HSV-2 shedding compared to no gel treatment. This validates the use of this repeated limiting dose cochallenge model for testing products targeting HIV and HSV-2.

  16. Extending substructure based iterative solvers to multiple load and repeated analyses

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel

    1993-01-01

    Direct solvers currently dominate commercial finite element structural software, but do not scale well in the fine granularity regime targeted by emerging parallel processors. Substructure based iterative solvers--often also called domain decomposition algorithms--lend themselves better to parallel processing, but must overcome several obstacles before earning their place in general purpose structural analysis programs. One such obstacle is the solution of systems with many or repeated right hand sides. Such systems arise, for example, in multiple load static analyses and in implicit linear dynamics computations. Direct solvers are well-suited for these problems because after the system matrix has been factored, the multiple or repeated solutions can be obtained through relatively inexpensive forward and backward substitutions. On the other hand, iterative solvers in general are ill-suited for these problems because they often must restart from scratch for every different right hand side. In this paper, we present a methodology for extending the range of applications of domain decomposition methods to problems with multiple or repeated right hand sides. Basically, we formulate the overall problem as a series of minimization problems over K-orthogonal and supplementary subspaces, and tailor the preconditioned conjugate gradient algorithm to solve them efficiently. The resulting solution method is scalable, whereas direct factorization schemes and forward and backward substitution algorithms are not. We illustrate the proposed methodology with the solution of static and dynamic structural problems, and highlight its potential to outperform forward and backward substitutions on parallel computers. As an example, we show that for a linear structural dynamics problem with 11640 degrees of freedom, every time-step beyond time-step 15 is solved in a single iteration and consumes 1.0 second on a 32 processor iPSC-860 system; for the same problem and the same parallel processor, a pair of forward/backward substitutions at each step consumes 15.0 seconds.
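
    The reuse idea can be pictured with a simple "seed projection": before iterating on a new right-hand side, project it onto the span of previously computed solutions so that only the remainder has to be solved iteratively. The sketch below is a generic dense-matrix illustration of that principle (in practice a K-orthogonal basis and a domain-decomposition preconditioner would be used), not the paper's solver.

        import numpy as np
        from scipy.sparse.linalg import cg

        rng = np.random.default_rng(0)
        n = 200
        M = rng.standard_normal((n, n))
        K = M @ M.T + n * np.eye(n)                  # synthetic symmetric positive definite matrix

        def solve_with_reuse(K, b, prior_solutions):
            """Start CG from the K-energy-optimal combination of earlier solutions."""
            if prior_solutions:
                W = np.column_stack(prior_solutions)
                alpha = np.linalg.solve(W.T @ K @ W, W.T @ b)   # Galerkin projection onto span(W)
                x0 = W @ alpha
            else:
                x0 = np.zeros_like(b)
            x, info = cg(K, b, x0=x0)
            return x

        solutions = []
        for step in range(5):                        # repeated, slowly varying right-hand sides
            b = np.sin(np.linspace(0.0, 3.0 + 0.1 * step, n))
            solutions.append(solve_with_reuse(K, b, solutions))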

  17. Effect of repeated compaction of tablets on tablet properties and work of compaction using an instrumented laboratory tablet press.

    PubMed

    Gamlen, Michael John Desmond; Martini, Luigi G; Al Obaidy, Kais G

    2015-01-01

    The repeated compaction of Avicel PH101, dicalcium phosphate dihydrate (DCP) powder, 50:50 DCP/Avicel PH101 and Starch 1500 was studied using an instrumented laboratory tablet press which measures upper punch force, punch displacement and ejection force and operates using a V-shaped compression profile. The measurement of work of compaction was demonstrated, and the test materials were ranked in order of compaction behaviour: Avicel PH101 > DCP/Avicel PH101 > Starch > DCP. The behaviour of the DCP/Avicel PH101 mixture was distinctly non-linear compared with the pure components. Repeated compaction and precompression had no effect on the tensile fracture strength of Avicel PH101 tablets, although small effects on friability and disintegration time were seen. Repeated compaction and precompression reduced the tensile strength and increased the disintegration time of the DCP tablets, but improved the strength and friability of Starch 1500 tablets. Based on the data reported, routine laboratory measurement of tablet work of compaction may have potential as a critical quality attribute of a powder blend for compression. The instrumented press was suitable for student use with minimal supervisor input.
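
    Work of compaction is the area under the upper-punch force-displacement curve, so its routine calculation is a single numerical integration; the sketch below uses made-up force and displacement arrays purely to show the arithmetic.

        import numpy as np
        from scipy.integrate import trapezoid

        # Hypothetical upper-punch data for one compression event
        displacement_mm = np.linspace(0.0, 3.0, 50)              # punch travel during loading
        force_kN = 0.02 * (np.exp(1.8 * displacement_mm) - 1.0)  # toy loading curve

        # Work of compaction: integral of force over displacement (1 kN*mm = 1 J)
        work_J = trapezoid(force_kN, displacement_mm)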

  18. Unified continuum damage model for matrix cracking in composite rotor blades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pollayi, Hemaraju; Harursampath, Dineshkumar

    This paper deals with modeling of the first damage mode, matrix micro-cracking, in helicopter rotor/wind turbine blades and how this affects the overall cross-sectional stiffness. The helicopter/wind turbine rotor system operates in a highly dynamic and unsteady environment leading to severe vibratory loads present in the system. Repeated exposure to this loading condition can induce damage in the composite rotor blades. These rotor/turbine blades are generally made of fiber-reinforced laminated composites and exhibit various competing modes of damage such as matrix micro-cracking, delamination, and fiber breakage. There is a need to study the behavior of the composite rotor system under various key damage modes in composite materials for developing a Structural Health Monitoring (SHM) system. Each blade is modeled as a beam based on geometrically non-linear 3-D elasticity theory. Each blade thus splits into 2-D analyses of cross-sections and non-linear 1-D analyses along the beam reference curves. Two different tools are used here for complete 3-D analysis: VABS for 2-D cross-sectional analysis and GEBT for 1-D beam analysis. The physically-based failure models for matrix in compression and tension loading are used in the present work. Matrix cracking is detected using two failure criteria: Matrix Failure in Compression and Matrix Failure in Tension, which are based on the recovered field. A strain variable is set which drives the damage variable for matrix cracking, and this damage variable is used to estimate the reduced cross-sectional stiffness. The matrix micro-cracking analysis is performed using two different approaches: (i) element-wise, and (ii) node-wise. The procedure presented in this paper is implemented in VABS as a matrix micro-cracking modeling module. Three examples are presented to investigate the matrix failure model, which illustrate the effect of matrix cracking on cross-sectional stiffness by varying the applied cyclic load.

  19. Photoacoustic thermal flowmetry with a single light source

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Lan, Bangxin; Hu, Leo; Chen, Ruimin; Zhou, Qifa; Yao, Junjie

    2017-09-01

    We report a photoacoustic thermal flowmetry based on optical-resolution photoacoustic microscopy (OR-PAM) using a single laser source for both thermal tagging and photoacoustic excitation. When an optically absorbing medium is flowing across the optical focal zone of OR-PAM, a small volume of the medium within the optical focus is repeatedly illuminated and heated by a train of laser pulses with a high repetition rate. The average temperature of the heated volume at each laser pulse is indicated by the photoacoustic signal excited by the same laser pulse due to the well-established linear relationship between the Grueneisen coefficient and the local temperature. The thermal dynamics of the heated medium volume, which are closely related to the flow speed, can therefore be measured from the time course of the detected photoacoustic signals. Here, we have developed a lumped mathematical model to describe the time course of the photoacoustic signals as a function of the medium's flow speed. We conclude that the rising time constant of the photoacoustic signals is linearly dependent on the flow speed. Thus, the flow speed can be quantified by fitting the measured photoacoustic signals using the derived mathematical model. We first performed proof-of-concept experiments using defibrinated bovine blood flowing in a plastic tube. The experimental results have demonstrated that the proposed method has high accuracy (~±6%) and a wide range of measurable flow speeds. We further validated the method by measuring the blood flow speeds of the microvasculature in a mouse ear in vivo.
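
    The fitting step described amounts to estimating a rising time constant from the pulse-to-pulse photoacoustic amplitudes. Below is a generic exponential-rise fit with synthetic data standing in for measured signals; the authors' lumped thermal model may have a different functional form, and the numbers are illustrative only.

        import numpy as np
        from scipy.optimize import curve_fit

        def heating_curve(t, s0, s_inf, tau):
            """Exponential approach of the PA amplitude to its steady-state value."""
            return s_inf - (s_inf - s0) * np.exp(-t / tau)

        # Synthetic pulse-train amplitudes sampled every 1 ms; tau tracks flow speed
        t = np.arange(0.0, 0.05, 1e-3)
        clean = heating_curve(t, s0=1.0, s_inf=1.6, tau=8e-3)
        meas = clean + np.random.default_rng(2).normal(0.0, 0.02, t.size)

        popt, _ = curve_fit(heating_curve, t, meas, p0=(1.0, 1.5, 5e-3))
        tau_fit = popt[2]   # rising time constant, reported to scale linearly with flow speed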

  20. Exploring the clinical course of neck pain in physical therapy: a longitudinal study.

    PubMed

    Walton, David M; Eilon-Avigdor, Yaara; Wonderham, Michael; Wilk, Piotr

    2014-02-01

    To investigate the short-term trajectory of recovery from mechanical neck pain, and predictors of trajectory. Prospective, longitudinal cohort study with 5 repeated measurements over 4 weeks. Community-based physical therapy clinics. Convenience sample of community-dwelling adults (N=50) with uncomplicated mechanical neck disorders of any duration. Usual physical therapy care. Neck Disability Index (NDI), numeric rating scale (NRS) of pain intensity. A total of 50 consecutive subjects provided 5 data points over 4 weeks. Exploratory modeling using latent class growth analysis revealed a linear trend in improvement, at a mean of 1.5 NDI points and 0.5 NRS points per week. Within the NDI trajectory, 3 latent classes were identified, each with a unique trend: worsening (14.5%), rapid improvement (19.6%), and slow improvement (65.8%). Within the NRS trajectory, 2 unique trends were identified: stable (48.0%) and improving (52.0%). Predictors of trajectory class suggest that it may be possible to predict the trajectory. Results are described in view of the sample size. The mean trajectory of improvement in neck pain adequately fits a linear model and suggests slow but stable improvement over the short term. However, up to 3 different trajectories have been identified that suggest neck pain, and recovery thereof, is not homogenous. This may hold value for the design of clinical trials. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
