Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A
2017-02-01
This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers and may be viable alternative modeling techniques for EE prediction for hip- or thigh-worn accelerometers.
Non-Gaussian lineshapes and dynamics of time-resolved linear and nonlinear (correlation) spectra.
Dinpajooh, Mohammadhasan; Matyushov, Dmitry V
2014-07-17
Signatures of nonlinear and non-Gaussian dynamics in time-resolved linear and nonlinear (correlation) 2D spectra are analyzed in a model considering a linear plus quadratic dependence of the spectroscopic transition frequency on a Gaussian nuclear coordinate of the thermal bath (quadratic coupling). This new model is contrasted to the commonly assumed linear dependence of the transition frequency on the medium nuclear coordinates (linear coupling). The linear coupling model predicts equality between the Stokes shift and equilibrium correlation functions of the transition frequency and time-independent spectral width. Both predictions are often violated, and we are asking here the question of whether a nonlinear solvent response and/or non-Gaussian dynamics are required to explain these observations. We find that correlation functions of spectroscopic observables calculated in the quadratic coupling model depend on the chromophore's electronic state and the spectral width gains time dependence, all in violation of the predictions of the linear coupling models. Lineshape functions of 2D spectra are derived assuming Ornstein-Uhlenbeck dynamics of the bath nuclear modes. The model predicts asymmetry of 2D correlation plots and bending of the center line. The latter is often used to extract two-point correlation functions from 2D spectra. The dynamics of the transition frequency are non-Gaussian. However, the effect of non-Gaussian dynamics is limited to the third-order (skewness) time correlation function, without affecting the time correlation functions of higher order. The theory is tested against molecular dynamics simulations of a model polar-polarizable chromophore dissolved in a force field water.
Genomic prediction based on data from three layer lines using non-linear regression models.
Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L
2014-11-06
Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
NASA Astrophysics Data System (ADS)
Fisher, Karl B.
1995-08-01
The relation between the galaxy correlation functions in real-space and redshift-space is derived in the linear regime by an appropriate averaging of the joint probability distribution of density and velocity. The derivation recovers the familiar linear theory result on large scales but has the advantage of clearly revealing the dependence of the redshift distortions on the underlying peculiar velocity field; streaming motions give rise to distortions of θ(Ω0.6/b) while variations in the anisotropic velocity dispersion yield terms of order θ(Ω1.2/b2). This probabilistic derivation of the redshift-space correlation function is similar in spirit to the derivation of the commonly used "streaming" model, in which the distortions are given by a convolution of the real-space correlation function with a velocity distribution function. The streaming model is often used to model the redshift-space correlation function on small, highly nonlinear, scales. There have been claims in the literature, however, that the streaming model is not valid in the linear regime. Our analysis confirms this claim, but we show that the streaming model can be made consistent with linear theory provided that the model for the streaming has the functional form predicted by linear theory and that the velocity distribution is chosen to be a Gaussian with the correct linear theory dispersion.
An Expert System for the Evaluation of Cost Models
1990-09-01
contrast to the condition of equal error variance, called homoscedasticity. (Reference: Applied Linear Regression Models by John Neter - page 423...normal. (Reference: Applied Linear Regression Models by John Neter - page 125) Click Here to continue -> Autocorrelation Click Here for the index - Index...over time. Error terms correlated over time are said to be autocorrelated or serially correlated. (REFERENCE: Applied Linear Regression Models by John
Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel
2015-09-10
Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suhara, Tadahiro; Kanada-En'yo, Yoshiko
We investigate the linear-chain structures in highly excited states of {sup 14}C using a generalized molecular-orbital model, by which we incorporate an asymmetric configuration of three {alpha} clusters in the linear-chain states. By applying this model to the {sup 14}C system, we study the {sup 10}Be+{alpha} correlation in the linear-chain state of {sup 14}C. To clarify the origin of the {sup 10}Be+{alpha} correlation in the {sup 14}C linear-chain state, we analyze linear 3 {alpha} and 3{alpha} + n systems in a similar way. We find that a linear 3{alpha} system prefers the asymmetric 2{alpha} + {alpha} configuration, whose origin ismore » the many-body correlation incorporated by the parity projection. This configuration causes an asymmetric mean field for two valence neutrons, which induces the concentration of valence neutron wave functions around the correlating 2{alpha}. A linear-chain structure of {sup 16}C is also discussed.« less
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
Abdelnour, Farras; Voss, Henning U.; Raj, Ashish
2014-01-01
The relationship between anatomic connectivity of large-scale brain networks and their functional connectivity is of immense importance and an area of active research. Previous attempts have required complex simulations which model the dynamics of each cortical region, and explore the coupling between regions as derived by anatomic connections. While much insight is gained from these non-linear simulations, they can be computationally taxing tools for predicting functional from anatomic connectivities. Little attention has been paid to linear models. Here we show that a properly designed linear model appears to be superior to previous non-linear approaches in capturing the brain’s long-range second order correlation structure that governs the relationship between anatomic and functional connectivities. We derive a linear network of brain dynamics based on graph diffusion, whereby the diffusing quantity undergoes a random walk on a graph. We test our model using subjects who underwent diffusion MRI and resting state fMRI. The network diffusion model applied to the structural networks largely predicts the correlation structures derived from their fMRI data, to a greater extent than other approaches. The utility of the proposed approach is that it can routinely be used to infer functional correlation from anatomic connectivity. And since it is linear, anatomic connectivity can also be inferred from functional data. The success of our model confirms the linearity of ensemble average signals in the brain, and implies that their long-range correlation structure may percolate within the brain via purely mechanistic processes enacted on its structural connectivity pathways. PMID:24384152
Genetic parameters for racing records in trotters using linear and generalized linear models.
Suontama, M; van der Werf, J H J; Juga, J; Ojala, M
2012-09-01
Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.
Vanderick, S; Troch, T; Gillon, A; Glorieux, G; Gengler, N
2014-12-01
Calving ease scores from Holstein dairy cattle in the Walloon Region of Belgium were analysed using univariate linear and threshold animal models. Variance components and derived genetic parameters were estimated from a data set including 33,155 calving records. Included in the models were season, herd and sex of calf × age of dam classes × group of calvings interaction as fixed effects, herd × year of calving, maternal permanent environment and animal direct and maternal additive genetic as random effects. Models were fitted with the genetic correlation between direct and maternal additive genetic effects either estimated or constrained to zero. Direct heritability for calving ease was approximately 8% with linear models and approximately 12% with threshold models. Maternal heritabilities were approximately 2 and 4%, respectively. Genetic correlation between direct and maternal additive effects was found to be not significantly different from zero. Models were compared in terms of goodness of fit and predictive ability. Criteria of comparison such as mean squared error, correlation between observed and predicted calving ease scores as well as between estimated breeding values were estimated from 85,118 calving records. The results provided few differences between linear and threshold models even though correlations between estimated breeding values from subsets of data for sires with progeny from linear model were 17 and 23% greater for direct and maternal genetic effects, respectively, than from threshold model. For the purpose of genetic evaluation for calving ease in Walloon Holstein dairy cattle, the linear animal model without covariance between direct and maternal additive effects was found to be the best choice. © 2014 Blackwell Verlag GmbH.
Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu
2015-01-01
A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. An Ordinary Linear Least Squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random effects combinations for the LME models were determined by the Akaike's information criterion, the Bayesian information criterion and the -2logarithm likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). Then, the LME model was compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R2). For individual tree crown width models, the one level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
NASA Technical Reports Server (NTRS)
Johnson, R. A.; Wehrly, T.
1976-01-01
Population models for dependence between two angular measurements and for dependence between an angular and a linear observation are proposed. The method of canonical correlations first leads to new population and sample measures of dependence in this latter situation. An example relating wind direction to the level of a pollutant is given. Next, applied to pairs of angular measurements, the method yields previously proposed sample measures in some special cases and a new sample measure in general.
NASA Astrophysics Data System (ADS)
Fang, Wei; Huang, Shengzhi; Huang, Qiang; Huang, Guohe; Meng, Erhao; Luan, Jinkai
2018-06-01
In this study, reference evapotranspiration (ET0) forecasting models are developed for the least economically developed regions subject to meteorological data scarcity. Firstly, the partial mutual information (PMI) capable of capturing the linear and nonlinear dependence is investigated regarding its utility to identify relevant predictors and exclude those that are redundant through the comparison with partial linear correlation. An efficient input selection technique is crucial for decreasing model data requirements. Then, the interconnection between global climate indices and regional ET0 is identified. Relevant climatic indices are introduced as additional predictors to comprise information regarding ET0, which ought to be provided by meteorological data unavailable. The case study in the Jing River and Beiluo River basins, China, reveals that PMI outperforms the partial linear correlation in excluding the redundant information, favouring the yield of smaller predictor sets. The teleconnection analysis identifies the correlation between Nino 1 + 2 and regional ET0, indicating influences of ENSO events on the evapotranspiration process in the study area. Furthermore, introducing Nino 1 + 2 as predictors helps to yield more accurate ET0 forecasts. A model performance comparison also shows that non-linear stochastic models (SVR or RF with input selection through PMI) do not always outperform linear models (MLR with inputs screen by linear correlation). However, the former can offer quite comparable performance depending on smaller predictor sets. Therefore, efforts such as screening model inputs through PMI and incorporating global climatic indices interconnected with ET0 can benefit the development of ET0 forecasting models suitable for data-scarce regions.
Non-Linear Approach in Kinesiology Should Be Preferred to the Linear--A Case of Basketball.
Trninić, Marko; Jeličić, Mario; Papić, Vladan
2015-07-01
In kinesiology, medicine, biology and psychology, in which research focus is on dynamical self-organized systems, complex connections exist between variables. Non-linear nature of complex systems has been discussed and explained by the example of non-linear anthropometric predictors of performance in basketball. Previous studies interpreted relations between anthropometric features and measures of effectiveness in basketball by (a) using linear correlation models, and by (b) including all basketball athletes in the same sample of participants regardless of their playing position. In this paper the significance and character of linear and non-linear relations between simple anthropometric predictors (AP) and performance criteria consisting of situation-related measures of effectiveness (SE) in basketball were determined and evaluated. The sample of participants consisted of top-level junior basketball players divided in three groups according to their playing time (8 minutes and more per game) and playing position: guards (N = 42), forwards (N = 26) and centers (N = 40). Linear (general model) and non-linear (general model) regression models were calculated simultaneously and separately for each group. The conclusion is viable: non-linear regressions are frequently superior to linear correlations when interpreting actual association logic among research variables.
Correlators in tensor models from character calculus
NASA Astrophysics Data System (ADS)
Mironov, A.; Morozov, A.
2017-11-01
We explain how the calculations of [20], which provided the first evidence for non-trivial structures of Gaussian correlators in tensor models, are efficiently performed with the help of the (Hurwitz) character calculus. This emphasizes a close similarity between technical methods in matrix and tensor models and supports a hope to understand the emerging structures in very similar terms. We claim that the 2m-fold Gaussian correlators of rank r tensors are given by r-linear combinations of dimensions with the Young diagrams of size m. The coefficients are made from the characters of the symmetric group Sm and their exact form depends on the choice of the correlator and on the symmetries of the model. As the simplest application of this new knowledge, we provide simple expressions for correlators in the Aristotelian tensor model as tri-linear combinations of dimensions.
A geometric approach to non-linear correlations with intrinsic scatter
NASA Astrophysics Data System (ADS)
Pihajoki, Pauli
2017-12-01
We propose a new mathematical model for n - k-dimensional non-linear correlations with intrinsic scatter in n-dimensional data. The model is based on Riemannian geometry and is naturally symmetric with respect to the measured variables and invariant under coordinate transformations. We combine the model with a Bayesian approach for estimating the parameters of the correlation relation and the intrinsic scatter. A side benefit of the approach is that censored and truncated data sets and independent, arbitrary measurement errors can be incorporated. We also derive analytic likelihoods for the typical astrophysical use case of linear relations in n-dimensional Euclidean space. We pay particular attention to the case of linear regression in two dimensions and compare our results to existing methods. Finally, we apply our methodology to the well-known MBH-σ correlation between the mass of a supermassive black hole in the centre of a galactic bulge and the corresponding bulge velocity dispersion. The main result of our analysis is that the most likely slope of this correlation is ∼6 for the data sets used, rather than the values in the range of ∼4-5 typically quoted in the literature for these data.
Dynamics of electricity market correlations
NASA Astrophysics Data System (ADS)
Alvarez-Ramirez, J.; Escarela-Perez, R.; Espinosa-Perez, G.; Urrea, R.
2009-06-01
Electricity market participants rely on demand and price forecasts to decide their bidding strategies, allocate assets, negotiate bilateral contracts, hedge risks, and plan facility investments. However, forecasting is hampered by the non-linear and stochastic nature of price time series. Diverse modeling strategies, from neural networks to traditional transfer functions, have been explored. These approaches are based on the assumption that price series contain correlations that can be exploited for model-based prediction purposes. While many works have been devoted to the demand and price modeling, a limited number of reports on the nature and dynamics of electricity market correlations are available. This paper uses detrended fluctuation analysis to study correlations in the demand and price time series and takes the Australian market as a case study. The results show the existence of correlations in both demand and prices over three orders of magnitude in time ranging from hours to months. However, the Hurst exponent is not constant over time, and its time evolution was computed over a subsample moving window of 250 observations. The computations, also made for two Canadian markets, show that the correlations present important fluctuations over a seasonal one-year cycle. Interestingly, non-linearities (measured in terms of a multifractality index) and reduced price predictability are found for the June-July periods, while the converse behavior is displayed during the December-January period. In terms of forecasting models, our results suggest that non-linear recursive models should be considered for accurate day-ahead price estimation. On the other hand, linear models seem to suffice for demand forecasting purposes.
Comparison of kinetic model for biogas production from corn cob
NASA Astrophysics Data System (ADS)
Shitophyta, L. M.; Maryudi
2018-04-01
Energy demand increases every day, while the energy source especially fossil energy depletes increasingly. One of the solutions to overcome the energy depletion is to provide renewable energies such as biogas. Biogas can be generated by corn cob and food waste. In this study, biogas production was carried out by solid-state anaerobic digestion. The steps of biogas production were the preparation of feedstock, the solid-state anaerobic digestion, and the measurement of biogas volume. This study was conducted on TS content of 20%, 22%, and 24%. The aim of this research was to compare kinetic models of biogas production from corn cob and food waste as a co-digestion using the linear, exponential equation, and first-kinetic models. The result showed that the exponential equation had a better correlation than the linear equation on the ascending graph of biogas production. On the contrary, the linear equation had a better correlation than the exponential equation on the descending graph of biogas production. The correlation values on the first-kinetic model had the smallest value compared to the linear and exponential models.
NASA Astrophysics Data System (ADS)
Made Tirta, I.; Anggraeni, Dian
2018-04-01
Statistical models have been developed rapidly into various directions to accommodate various types of data. Data collected from longitudinal, repeated measured, clustered data (either continuous, binary, count, or ordinal), are more likely to be correlated. Therefore statistical model for independent responses, such as Generalized Linear Model (GLM), Generalized Additive Model (GAM) are not appropriate. There are several models available to apply for correlated responses including GEEs (Generalized Estimating Equations), for marginal model and various mixed effect model such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject spesific models. These models are available on free open source software R, but they can only be accessed through command line interface (using scrit). On the othe hand, most practical researchers very much rely on menu based or Graphical User Interface (GUI). We develop, using Shiny framework, standard pull down menu Web-GUI that unifies most models for correlated responses. The Web-GUI has accomodated almost all needed features. It enables users to do and compare various modeling for repeated measure data (GEE, GLMM, HGLM, GEE for nominal responses) much more easily trough online menus. This paper discusses the features of the Web-GUI and illustrates the use of them. In General we find that GEE, GLMM, HGLM gave very closed results.
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
A Bayes linear Bayes method for estimation of correlated event rates.
Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim
2013-12-01
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
Study on power grid characteristics in summer based on Linear regression analysis
NASA Astrophysics Data System (ADS)
Tang, Jin-hui; Liu, You-fei; Liu, Juan; Liu, Qiang; Liu, Zhuan; Xu, Xi
2018-05-01
The correlation analysis of power load and temperature is the precondition and foundation for accurate load prediction, and a great deal of research has been made. This paper constructed the linear correlation model between temperature and power load, then the correlation of fault maintenance work orders with the power load is researched. Data details of Jiangxi province in 2017 summer such as temperature, power load, fault maintenance work orders were adopted in this paper to develop data analysis and mining. Linear regression models established in this paper will promote electricity load growth forecast, fault repair work order review, distribution network operation weakness analysis and other work to further deepen the refinement.
NASA Astrophysics Data System (ADS)
Takahashi, Takuya; Sugiura, Junnnosuke; Nagayama, Kuniaki
2002-05-01
To investigate the role hydration plays in the electrostatic interactions of proteins, the time-averaged electrostatic potential of the B1 domain of protein G in an aqueous solution was calculated with full atomic molecular dynamics simulations that explicitly considers every atom (i.e., an all atom model). This all atom calculated potential was compared with the potential obtained from an electrostatic continuum model calculation. In both cases, the charge-screening effect was fairly well formulated with an effective relative dielectric constant which increased linearly with increasing charge-charge distance. This simulated linear dependence agrees with the experimentally determined linear relation proposed by Pickersgill. Cut-off approximations for Coulomb interactions failed to reproduce this linear relation. Correlation between the all atom model and the continuum models was found to be better than the respective correlation calculated for linear fitting to the two models. This confirms that the continuum model is better at treating the complicated shapes of protein conformations than the simple linear fitting empirical model. We have tried a sigmoid fitting empirical model in addition to the linear one. When weights of all data were treated equally, the sigmoid model, which requires two fitting parameters, fits results of both the all atom and the continuum models less accurately than the linear model which requires only one fitting parameter. When potential values are chosen as weighting factors, the fitting error of the sigmoid model became smaller, and the slope of both linear fitting curves became smaller. This suggests the screening effect of an aqueous medium within a short range, where potential values are relatively large, is smaller than that expected from the linear fitting curve whose slope is almost 4. To investigate the linear increase of the effective relative dielectric constant, the Poisson equation of a low-dielectric sphere in a high-dielectric medium was solved and charges distributed near the molecular surface were indicated as leading to the apparent linearity.
Quadratic correlation filters for optical correlators
NASA Astrophysics Data System (ADS)
Mahalanobis, Abhijit; Muise, Robert R.; Vijaya Kumar, Bhagavatula V. K.
2003-08-01
Linear correlation filters have been implemented in optical correlators and successfully used for a variety of applications. The output of an optical correlator is usually sensed using a square law device (such as a CCD array) which forces the output to be the squared magnitude of the desired correlation. It is however not a traditional practice to factor the effect of the square-law detector in the design of the linear correlation filters. In fact, the input-output relationship of an optical correlator is more accurately modeled as a quadratic operation than a linear operation. Quadratic correlation filters (QCFs) operate directly on the image data without the need for feature extraction or segmentation. In this sense, the QCFs retain the main advantages of conventional linear correlation filters while offering significant improvements in other respects. Not only is more processing required to detect peaks in the outputs of multiple linear filters, but choosing a winner among them is an error prone task. In contrast, all channels in a QCF work together to optimize the same performance metric and produce a combined output that leads to considerable simplification of the post-processing. In this paper, we propose a novel approach to the design of quadratic correlation based on the Fukunaga Koontz transform. Although quadratic filters are known to be optimum when the data is Gaussian, it is expected that they will perform as well as or better than linear filters in general. Preliminary performance results are provided that show that quadratic correlation filters perform better than their linear counterparts.
Jaffe, B.E.; Rubin, D.M.
1996-01-01
The time-dependent response of sediment suspension to flow velocity was explored by modeling field measurements collected in the surf zone during a large storm. Linear and nonlinear models were created and tested using flow velocity as input and suspended-sediment concentration as output. A sequence of past velocities (velocity history), as well as velocity from the same instant as the suspended-sediment concentration, was used as input; this velocity history length was allowed to vary. The models also allowed for a lag between input (instantaneous velocity or end of velocity sequence) and output (suspended-sediment concentration). Predictions of concentration from instantaneous velocity or instantaneous velocity raised to a power (up to 8) using linear models were poor (correlation coefficients between predicted and observed concentrations were less than 0.10). Allowing a lag between velocity and concentration improved linear models (correlation coefficient of 0.30), with optimum lag time increasing with elevation above the seabed (from 1.5 s at 13 cm to 8.5 s at 60 cm). These lags are largely due to the time for an observed flow event to effect the bed and mix sediment upward. Using a velocity history further improved linear models (correlation coefficient of 0.43). The best linear model used 12.5 s of velocity history (approximately one wave period) to predict concentration. Nonlinear models gave better predictions than linear models, and, as with linear models, nonlinear models using a velocity history performed better than models using only instantaneous velocity as input. Including a lag time between the velocity and concentration also improved the predictions. The best model (correlation coefficient of 0.58) used 3 s (approximately a quarter wave period) of the cross-shore velocity squared, starting at 4.5 s before the observed concentration, to predict concentration. Using a velocity history increases the performance of the models by specifying a more complete description of the dynamical forcing of the flow (including accelerations and wave phase and shape) responsible for sediment suspension. Incorporating such a velocity history and a lag time into the formulation of the forcing for time-dependent models for sediment suspension in the surf zone will greatly increase our ability to predict suspended-sediment transport.
Handling Correlations between Covariates and Random Slopes in Multilevel Models
ERIC Educational Resources Information Center
Bates, Michael David; Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders
2014-01-01
This article discusses estimation of multilevel/hierarchical linear models that include cluster-level random intercepts and random slopes. Viewing the models as structural, the random intercepts and slopes represent the effects of omitted cluster-level covariates that may be correlated with included covariates. The resulting correlations between…
Forecasting currency circulation data of Bank Indonesia by using hybrid ARIMAX-ANN model
NASA Astrophysics Data System (ADS)
Prayoga, I. Gede Surya Adi; Suhartono, Rahayu, Santi Puteri
2017-05-01
The purpose of this study is to forecast currency inflow and outflow data of Bank Indonesia. Currency circulation in Indonesia is highly influenced by the presence of Eid al-Fitr. One way to forecast the data with Eid al-Fitr effect is using autoregressive integrated moving average with exogenous input (ARIMAX) model. However, ARIMAX is a linear model, which cannot handle nonlinear correlation structures of the data. In the field of forecasting, inaccurate predictions can be considered caused by the existence of nonlinear components that are uncaptured by the model. In this paper, we propose a hybrid model of ARIMAX and artificial neural networks (ANN) that can handle both linear and nonlinear correlation. This method was applied for 46 series of currency inflow and 46 series of currency outflow. The results showed that based on out-of-sample root mean squared error (RMSE), the hybrid models are up to10.26 and 10.65 percent better than ARIMAX for inflow and outflow series, respectively. It means that ANN performs well in modeling nonlinear correlation of the data and can increase the accuracy of linear model.
Solar granulation and statistical crystallography: A modeling approach using size-shape relations
NASA Technical Reports Server (NTRS)
Noever, D. A.
1994-01-01
The irregular polygonal pattern of solar granulation is analyzed for size-shape relations using statistical crystallography. In contrast to previous work which has assumed perfectly hexagonal patterns for granulation, more realistic accounting of cell (granule) shapes reveals a broader basis for quantitative analysis. Several features emerge as noteworthy: (1) a linear correlation between number of cell-sides and neighboring shapes (called Aboav-Weaire's law); (2) a linear correlation between both average cell area and perimeter and the number of cell-sides (called Lewis's law and a perimeter law, respectively) and (3) a linear correlation between cell area and squared perimeter (called convolution index). This statistical picture of granulation is consistent with a finding of no correlation in cell shapes beyond nearest neighbors. A comparative calculation between existing model predictions taken from luminosity data and the present analysis shows substantial agreements for cell-size distributions. A model for understanding grain lifetimes is proposed which links convective times to cell shape using crystallographic results.
Wang, Kun; Jiang, Tianzi; Liang, Meng; Wang, Liang; Tian, Lixia; Zhang, Xinqing; Li, Kuncheng; Liu, Zhening
2006-01-01
In this work, we proposed a discriminative model of Alzheimer's disease (AD) on the basis of multivariate pattern classification and functional magnetic resonance imaging (fMRI). This model used the correlation/anti-correlation coefficients of two intrinsically anti-correlated networks in resting brains, which have been suggested by two recent studies, as the feature of classification. Pseudo-Fisher Linear Discriminative Analysis (pFLDA) was then performed on the feature space and a linear classifier was generated. Using leave-one-out (LOO) cross validation, our results showed a correct classification rate of 83%. We also compared the proposed model with another one based on the whole brain functional connectivity. Our proposed model outperformed the other one significantly, and this implied that the two intrinsically anti-correlated networks may be a more susceptible part of the whole brain network in the early stage of AD.
MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)
We propose a multivariate linear mixed (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...
Peñagaricano, F; Urioste, J I; Naya, H; de los Campos, G; Gianola, D
2011-04-01
Black skin spots are associated with pigmented fibres in wool, an important quality fault. Our objective was to assess alternative models for genetic analysis of presence (BINBS) and number (NUMBS) of black spots in Corriedale sheep. During 2002-08, 5624 records from 2839 animals in two flocks, aged 1 through 6 years, were taken at shearing. Four models were considered: linear and probit for BINBS and linear and Poisson for NUMBS. All models included flock-year and age as fixed effects and animal and permanent environmental as random effects. Models were fitted to the whole data set and were also compared based on their predictive ability in cross-validation. Estimates of heritability ranged from 0.154 to 0.230 for BINBS and 0.269 to 0.474 for NUMBS. For BINBS, the probit model fitted slightly better to the data than the linear model. Predictions of random effects from these models were highly correlated, and both models exhibited similar predictive ability. For NUMBS, the Poisson model, with a residual term to account for overdispersion, performed better than the linear model in goodness of fit and predictive ability. Predictions of random effects from the Poisson model were more strongly correlated with those from BINBS models than those from the linear model. Overall, the use of probit or linear models for BINBS and of a Poisson model with a residual for NUMBS seems a reasonable choice for genetic selection purposes in Corriedale sheep. © 2010 Blackwell Verlag GmbH.
Nguyen, N H; Whatmore, P; Miller, A; Knibb, W
2016-02-01
The main aim of this study was to estimate the heritability for four measures of deformity and their genetic associations with growth (body weight and length), carcass (fillet weight and yield) and flesh-quality (fillet fat content) traits in yellowtail kingfish Seriola lalandi. The observed major deformities included lower jaw, nasal erosion, deformed operculum and skinny fish on 480 individuals from 22 families at Clean Seas Tuna Ltd. They were typically recorded as binary traits (presence or absence) and were analysed separately by both threshold generalized models and standard animal mixed models. Consistency of the models was evaluated by calculating simple Pearson correlation of breeding values of full-sib families for jaw deformity. Genetic and phenotypic correlations among traits were estimated using a multitrait linear mixed model in ASReml. Both threshold and linear mixed model analysis showed that there is additive genetic variation in the four measures of deformity, with the estimates of heritability obtained from the former (threshold) models on liability scale ranging from 0.14 to 0.66 (SE 0.32-0.56) and from the latter (linear animal and sire) models on original (observed) scale, 0.01-0.23 (SE 0.03-0.16). When the estimates on the underlying liability were transformed to the observed scale (0, 1), they were generally consistent between threshold and linear mixed models. Phenotypic correlations among deformity traits were weak (close to zero). The genetic correlations among deformity traits were not significantly different from zero. Body weight and fillet carcass showed significant positive genetic correlations with jaw deformity (0.75 and 0.95, respectively). Genetic correlation between body weight and operculum was negative (-0.51, P < 0.05). The genetic correlations' estimates of body and carcass traits with other deformity were not significant due to their relatively high standard errors. Our results showed that there are prospects for genetic selection to improve deformity in yellowtail kingfish and that measures of deformity should be included in the recording scheme, breeding objectives and selection index in practical selective breeding programmes due to the antagonistic genetic correlations of deformed jaws with body and carcass performance. © 2015 John Wiley & Sons Ltd.
Killiches, Matthias; Czado, Claudia
2018-03-22
We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.
Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for such purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov Chain Monte Carlo EM (mcm-cEM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.
Joint statistics of strongly correlated neurons via dimensionality reduction
NASA Astrophysics Data System (ADS)
Deniz, Taşkın; Rotter, Stefan
2017-06-01
The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.
NASA Astrophysics Data System (ADS)
Krysa, Zbigniew; Pactwa, Katarzyna; Wozniak, Justyna; Dudek, Michal
2017-12-01
Geological variability is one of the main factors that has an influence on the viability of mining investment projects and on the technical risk of geology projects. In the current scenario, analyses of economic viability of new extraction fields have been performed for the KGHM Polska Miedź S.A. underground copper mine at Fore Sudetic Monocline with the assumption of constant averaged content of useful elements. Research presented in this article is aimed at verifying the value of production from copper and silver ore for the same economic background with the use of variable cash flows resulting from the local variability of useful elements. Furthermore, the ore economic model is investigated for a significant difference in model value estimated with the use of linear correlation between useful elements content and the height of mine face, and the approach in which model parameters correlation is based upon the copula best matched information capacity criterion. The use of copula allows the simulation to take into account the multi variable dependencies at the same time, thereby giving a better reflection of the dependency structure, which linear correlation does not take into account. Calculation results of the economic model used for deposit value estimation indicate that the correlation between copper and silver estimated with the use of copula generates higher variation of possible project value, as compared to modelling correlation based upon linear correlation. Average deposit value remains unchanged.
[The nonlinear parameters of interference EMG of two day old human newborns].
Voroshilov, A S; Meĭgal, A Iu
2011-01-01
Temporal structure of interference electromyogram (iEMG) was studied in healthy two days old human newborns (n = 76) using the non-linear parameters (correlation dimension, fractal dimension, correlation entropy). It has been found that the non-linear parameters of iEMG were time-dependent because they were decreasing within the first two days of life. Also, these parameters were sensitive to muscle function, because correlation dimension, fractal dimension, and correlation entropy of iEMG in gastrocnemius muscle differed from the other muscles. The non-linear parameters were proven to be independent of the iEMG amplitude. That model of early ontogenesis may be of potential use for investigation of anti-gravitation activity.
Demonstration of the Web-based Interspecies Correlation Estimation (Web-ICE) modeling application
The Web-based Interspecies Correlation Estimation (Web-ICE) modeling application is available to the risk assessment community through a user-friendly internet platform (http://epa.gov/ceampubl/fchain/webice/). ICE models are log-linear least square regressions that predict acute...
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.
Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-04-01
To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes were analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
Effect of correlation on covariate selection in linear and nonlinear mixed effect models.
Bonate, Peter L
2017-01-01
The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without covariate. The "best" covariate was identified using either the likelihood ratio test statistic or AIC. Weight and body surface area (calculated using Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as high as 1 in 5 times, when weight was the covariate used in the data generating mechanism. In a second simulation, parent drug concentration and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals the metabolite was chosen as a better predictor as often as 1 in 5 times depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can be chosen as being a better predictor than another covariate in a linear or nonlinear population analysis by sheer correlation These results explain why for the same drug different covariates may be identified in different analyses. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Forutan, M; Ansari Mahyari, S; Sargolzaei, M
2015-02-01
Calf and heifer survival are important traits in dairy cattle affecting profitability. This study was carried out to estimate genetic parameters of survival traits in female calves at different age periods, until nearly the first calving. Records of 49,583 female calves born during 1998 and 2009 were considered in five age periods as days 1-30, 31-180, 181-365, 366-760 and full period (day 1-760). Genetic components were estimated based on linear and threshold sire models and linear animal models. The models included both fixed effects (month of birth, dam's parity number, calving ease and twin/single) and random effects (herd-year, genetic effect of sire or animal and residual). Rates of death were 2.21, 3.37, 1.97, 4.14 and 12.4% for the above periods, respectively. Heritability estimates were very low ranging from 0.48 to 3.04, 0.62 to 3.51 and 0.50 to 4.24% for linear sire model, animal model and threshold sire model, respectively. Rank correlations between random effects of sires obtained with linear and threshold sire models and with linear animal and sire models were 0.82-0.95 and 0.61-0.83, respectively. The estimated genetic correlations between the five different periods were moderate and only significant for 31-180 and 181-365 (r(g) = 0.59), 31-180 and 366-760 (r(g) = 0.52), and 181-365 and 366-760 (r(g) = 0.42). The low genetic correlations in current study would suggest that survival at different periods may be affected by the same genes with different expression or by different genes. Even though the additive genetic variations of survival traits were small, it might be possible to improve these traits by traditional or genomic selection. © 2014 Blackwell Verlag GmbH.
A Method of Q-Matrix Validation for the Linear Logistic Test Model
Baghaei, Purya; Hohensinn, Christine
2017-01-01
The linear logistic test model (LLTM) is a well-recognized psychometric model for examining the components of difficulty in cognitive tests and validating construct theories. The plausibility of the construct model, summarized in a matrix of weights, known as the Q-matrix or weight matrix, is tested by (1) comparing the fit of LLTM with the fit of the Rasch model (RM) using the likelihood ratio (LR) test and (2) by examining the correlation between the Rasch model item parameters and LLTM reconstructed item parameters. The problem with the LR test is that it is almost always significant and, consequently, LLTM is rejected. The drawback of examining the correlation coefficient is that there is no cut-off value or lower bound for the magnitude of the correlation coefficient. In this article we suggest a simulation method to set a minimum benchmark for the correlation between item parameters from the Rasch model and those reconstructed by the LLTM. If the cognitive model is valid then the correlation coefficient between the RM-based item parameters and the LLTM-reconstructed item parameters derived from the theoretical weight matrix should be greater than those derived from the simulated matrices. PMID:28611721
Shang, Yu; Yu, Guoqiang
2014-09-29
Conventional semi-infinite analytical solutions of correlation diffusion equation may lead to errors when calculating blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements in tissues with irregular geometries. Very recently, we created an algorithm integrating a N th-order linear model of autocorrelation function with the Monte Carlo simulation of photon migrations in homogenous tissues with arbitrary geometries for extraction of BFI (i.e., αD B ). The purpose of this study is to extend the capability of the N th-order linear algorithm for extracting BFI in heterogeneous tissues with arbitrary geometries. The previous linear algorithm was modified to extract BFIs in different types of tissues simultaneously through utilizing DCS data at multiple source-detector separations. We compared the proposed linear algorithm with the semi-infinite homogenous solution in a computer model of adult head with heterogeneous tissue layers of scalp, skull, cerebrospinal fluid, and brain. To test the capability of the linear algorithm for extracting relative changes of cerebral blood flow (rCBF) in deep brain, we assigned ten levels of αD B in the brain layer with a step decrement of 10% while maintaining αD B values constant in other layers. Simulation results demonstrate the accuracy (errors < 3%) of high-order ( N ≥ 5) linear algorithm in extracting BFIs in different tissue layers and rCBF in deep brain. By contrast, the semi-infinite homogenous solution resulted in substantial errors in rCBF (34.5% ≤ errors ≤ 60.2%) and BFIs in different layers. The N th-order linear model simplifies data analysis, thus allowing for online data processing and displaying. Future study will test this linear algorithm in heterogeneous tissues with different levels of blood flow variations and noises.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, L.F.
Calculations for the two-point correlation functions in the scaling limit for two statistical models are presented. In Part I, the Ising model with a linear defect is studied for T < T/sub c/ and T > T/sub c/. The transfer matrix method of Onsager and Kaufman is used. The energy-density correlation is given by functions related to the modified Bessel functions. The dispersion expansion for the spin-spin correlation functions are derived. The dominant behavior for large separations at T not equal to T/sub c/ is extracted. It is shown that these expansions lead to systems of Fredholm integral equations. Inmore » Part II, the electric correlation function of the eight-vertex model for T < T/sub c/ is studied. The eight vertex model decouples to two independent Ising models when the four spin coupling vanishes. To first order in the four-spin coupling, the electric correlation function is related to a three-point function of the Ising model. This relation is systematically investigated and the full dispersion expansion (to first order in four-spin coupling) is obtained. The results is a new kind of structure which, unlike those of many solvable models, is apparently not expressible in terms of linear integral equations.« less
A simple method for identifying parameter correlations in partially observed linear dynamic models.
Li, Pu; Vu, Quoc Dong
2015-12-14
Parameter estimation represents one of the most significant challenges in systems biology. This is because biological models commonly contain a large number of parameters among which there may be functional interrelationships, thus leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher order interrelationships of parameters in partially observed linear dynamic models. This is made by derivation of the output sensitivity matrix and analysis of the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be achieved. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogenous linear equations. In the case of practical non-identifiability, experiment conditions (i.e. initial condition and constant control signals) can be provided which are necessary for remedying the non-identifiability and unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is popular for linear biological models, can be remedied. Several linear compartment models including an insulin receptor dynamics model are taken to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. The derivation of the method is straightforward and thus the algorithm can be easily implemented into a software packet.
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data
Ying, Gui-shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-01-01
Purpose To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. Methods We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field data in the elderly. Results When refractive error from both eyes were analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI −0.03 to 0.32D, P=0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, P=0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller P-values, while analysis of the worse eye provided larger P-values than mixed effects models and marginal models. Conclusion In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision. PMID:28102741
Estimation of the linear mixed integrated Ornstein–Uhlenbeck model
Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate
2017-01-01
ABSTRACT The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
Mathematical modelling and linear stability analysis of laser fusion cutting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hermanns, Torsten; Schulz, Wolfgang; Vossen, Georg
A model for laser fusion cutting is presented and investigated by linear stability analysis in order to study the tendency for dynamic behavior and subsequent ripple formation. The result is a so called stability function that describes the correlation of the setting values of the process and the process’ amount of dynamic behavior.
Wind modeling and lateral control for automatic landing
NASA Technical Reports Server (NTRS)
Holley, W. E.; Bryson, A. E., Jr.
1975-01-01
For the purposes of aircraft control system design and analysis, the wind can be characterized by a mean component which varies with height and by turbulent components which are described by the von Karman correlation model. The aircraft aero-dynamic forces and moments depend linearly on uniform and gradient gust components obtained by averaging over the aircraft's length and span. The correlations of the averaged components are then approximated by the outputs of linear shaping filters forced by white noise. The resulting model of the crosswind shear and turbulence effects is used in the design of a lateral control system for the automatic landing of a DC-8 aircraft.
Water pollution and income relationships: A seemingly unrelated partially linear analysis
NASA Astrophysics Data System (ADS)
Pandit, Mahesh; Paudel, Krishna P.
2016-10-01
We used a seemingly unrelated partially linear model (SUPLM) to address a potential correlation between pollutants (nitrogen, phosphorous, dissolved oxygen and mercury) in an environmental Kuznets curve study. Simulation studies show that the SUPLM performs well to address potential correlation among pollutants. We find that the relationship between income and pollution follows an inverted U-shaped curve for nitrogen and dissolved oxygen and a cubic shaped curve for mercury. Model specification tests suggest that a SUPLM is better specified compared to a parametric model to study the income-pollution relationship. Results suggest a need to continually assess policy effectiveness of pollution reduction as income increases.
Lainscsek, Claudia; Weyhenmeyer, Jonathan; Hernandez, Manuel E; Poizner, Howard; Sejnowski, Terrence J
2013-01-01
Time series analysis with delay differential equations (DDEs) reveals non-linear properties of the underlying dynamical system and can serve as a non-linear time-domain classification tool. Here global DDE models were used to analyze short segments of simulated time series from a known dynamical system, the Rössler system, in high noise regimes. In a companion paper, we apply the DDE model developed here to classify short segments of encephalographic (EEG) data recorded from patients with Parkinson's disease and healthy subjects. Nine simulated subjects in each of two distinct classes were generated by varying the bifurcation parameter b and keeping the other two parameters (a and c) of the Rössler system fixed. All choices of b were in the chaotic parameter range. We diluted the simulated data using white noise ranging from 10 to -30 dB signal-to-noise ratios (SNR). Structure selection was supervised by selecting the number of terms, delays, and order of non-linearity of the model DDE model that best linearly separated the two classes of data. The distances d from the linear dividing hyperplane was then used to assess the classification performance by computing the area A' under the ROC curve. The selected model was tested on untrained data using repeated random sub-sampling validation. DDEs were able to accurately distinguish the two dynamical conditions, and moreover, to quantify the changes in the dynamics. There was a significant correlation between the dynamical bifurcation parameter b of the simulated data and the classification parameter d from our analysis. This correlation still held for new simulated subjects with new dynamical parameters selected from each of the two dynamical regimes. Furthermore, the correlation was robust to added noise, being significant even when the noise was greater than the signal. We conclude that DDE models may be used as a generalizable and reliable classification tool for even small segments of noisy data.
Non-Linear Dynamical Classification of Short Time Series of the Rössler System in High Noise Regimes
Lainscsek, Claudia; Weyhenmeyer, Jonathan; Hernandez, Manuel E.; Poizner, Howard; Sejnowski, Terrence J.
2013-01-01
Time series analysis with delay differential equations (DDEs) reveals non-linear properties of the underlying dynamical system and can serve as a non-linear time-domain classification tool. Here global DDE models were used to analyze short segments of simulated time series from a known dynamical system, the Rössler system, in high noise regimes. In a companion paper, we apply the DDE model developed here to classify short segments of encephalographic (EEG) data recorded from patients with Parkinson’s disease and healthy subjects. Nine simulated subjects in each of two distinct classes were generated by varying the bifurcation parameter b and keeping the other two parameters (a and c) of the Rössler system fixed. All choices of b were in the chaotic parameter range. We diluted the simulated data using white noise ranging from 10 to −30 dB signal-to-noise ratios (SNR). Structure selection was supervised by selecting the number of terms, delays, and order of non-linearity of the model DDE model that best linearly separated the two classes of data. The distances d from the linear dividing hyperplane was then used to assess the classification performance by computing the area A′ under the ROC curve. The selected model was tested on untrained data using repeated random sub-sampling validation. DDEs were able to accurately distinguish the two dynamical conditions, and moreover, to quantify the changes in the dynamics. There was a significant correlation between the dynamical bifurcation parameter b of the simulated data and the classification parameter d from our analysis. This correlation still held for new simulated subjects with new dynamical parameters selected from each of the two dynamical regimes. Furthermore, the correlation was robust to added noise, being significant even when the noise was greater than the signal. We conclude that DDE models may be used as a generalizable and reliable classification tool for even small segments of noisy data. PMID:24379798
Cross-validation analysis for genetic evaluation models for ranking in endurance horses.
García-Ballesteros, S; Varona, L; Valera, M; Gutiérrez, J P; Cervantes, I
2018-01-01
Ranking trait was used as a selection criterion for competition horses to estimate racing performance. In the literature the most common approaches to estimate breeding values are the linear or threshold statistical models. However, recent studies have shown that a Thurstonian approach was able to fix the race effect (competitive level of the horses that participate in the same race), thus suggesting a better prediction accuracy of breeding values for ranking trait. The aim of this study was to compare the predictability of linear, threshold and Thurstonian approaches for genetic evaluation of ranking in endurance horses. For this purpose, eight genetic models were used for each approach with different combinations of random effects: rider, rider-horse interaction and environmental permanent effect. All genetic models included gender, age and race as systematic effects. The database that was used contained 4065 ranking records from 966 horses and that for the pedigree contained 8733 animals (47% Arabian horses), with an estimated heritability around 0.10 for the ranking trait. The prediction ability of the models for racing performance was evaluated using a cross-validation approach. The average correlation between real and predicted performances across genetic models was around 0.25 for threshold, 0.58 for linear and 0.60 for Thurstonian approaches. Although no significant differences were found between models within approaches, the best genetic model included: the rider and rider-horse random effects for threshold, only rider and environmental permanent effects for linear approach and all random effects for Thurstonian approach. The absolute correlations of predicted breeding values among models were higher between threshold and Thurstonian: 0.90, 0.91 and 0.88 for all animals, top 20% and top 5% best animals. For rank correlations these figures were 0.85, 0.84 and 0.86. The lower values were those between linear and threshold approaches (0.65, 0.62 and 0.51). In conclusion, the Thurstonian approach is recommended for the routine genetic evaluations for ranking in endurance horses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sablik, M.J.; Augustyniak, B.; Chmielewski, M.
1996-04-01
The almost linear dependence of the maximum Barkhausen noise signal amplitude on stress has made it a tool for nondestructive evaluation of residual stress. Recently, a model has been developed to account for the stress dependence of the Barkhausen noise signal. The model uses the development of Alessandro {ital et} {ital al}. who use coupled Langevin equations to derive an expression for the Barkhausen noise power spectrum. The model joins this expression to the magnetomechanical hysteresis model of Sablik {ital et} {ital al}., obtaining both a hysteretic and stress-dependent result for the magnetic-field-dependent Barkhausen noise envelope and obtaining specifically themore » almost linear stress dependence of the Barkhausen noise maximum experimentally. In this paper, we extend the model to derive the angular dependence observed by Kwun of the Barkhausen noise amplitude when stress axis is taken at different angles relative to magnetic field. We also apply the model to the experimental observation that in XC10 French steel, there is an apparent almost linear correlation with stress of hysteresis loss and of the integral of the Barkhausen noise signal over applied field {ital H}. Further, the two quantities, Barkhausen noise integral and hysteresis loss, are linearly correlated with each other. The model shows how that behavior is to be expected for the measured steel because of its sharply rising hysteresis curve. {copyright} {ital 1996 American Institute of Physics.}« less
NASA Astrophysics Data System (ADS)
Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie
2018-05-01
Degradation in drylands is a critically important global issue that threatens ecosystem and environmental in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, digital elevation model (DEM) and soil units. This paper presents linear mixed-effect (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling and single-level modeling of soil units and sample points, respectively. Additionally, three functions, including the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heterogeneity, and an additional three correlation structures, including the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)] and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. It was concluded that the nested two-level model considering both heteroscedasticity with (CPP) and spatiotemporal correlation with [ARMA(1,1)] showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj- R 2 = 0.9593). Variations between soil units and sample points that may have an effect on the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
The role of climatic variables in winter cereal yields: a retrospective analysis.
Luo, Qunying; Wen, Li
2015-02-01
This study examined the effects of observed climate including [CO2] on winter cereal [winter wheat (Triticum aestivum), barley (Hordeum vulgare) and oat (Avena sativa)] yields by adopting robust statistical analysis/modelling approaches (i.e. autoregressive fractionally integrated moving average, generalised addition model) based on long time series of historical climate data and cereal yield data at three locations (Moree, Dubbo and Wagga Wagga) in New South Wales, Australia. Research results show that (1) growing season rainfall was significantly, positively and non-linearly correlated with crop yield at all locations considered; (2) [CO2] was significantly, positively and non-linearly correlated with crop yields in all cases except wheat and barley yields at Wagga Wagga; (3) growing season maximum temperature was significantly, negatively and non-linearly correlated with crop yields at Dubbo and Moree (except for barley); and (4) radiation was only significantly correlated with oat yield at Wagga Wagga. This information will help to identify appropriate management adaptation options in dealing with the risk and in taking the opportunities of climate change.
Computation of linear acceleration through an internal model in the macaque cerebellum
Laurens, Jean; Meng, Hui; Angelaki, Dora E.
2013-01-01
A combination of theory and behavioral findings has supported a role for internal models in the resolution of sensory ambiguities and sensorimotor processing. Although the cerebellum has been proposed as a candidate for implementation of internal models, concrete evidence from neural responses is lacking. Here we exploit un-natural motion stimuli, which induce incorrect self-motion perception and eye movements, to explore the neural correlates of an internal model proposed to compensate for Einstein’s equivalence principle and generate neural estimates of linear acceleration and gravity. We show that caudal cerebellar vermis Purkinje cells and cerebellar nuclei neurons selective for actual linear acceleration also encode erroneous linear acceleration, as expected from the internal model hypothesis, even when no actual linear acceleration occurs. These findings provide strong evidence that the cerebellum might be involved in the implementation of internal models that mimic physical principles to interpret sensory signals, as previously hypothesized by theorists. PMID:24077562
Wu, Jibo
2016-01-01
In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold to the whole parameter space. Its mean-squared error matrix is compared with the generalized restricted difference-based estimator. Finally, the performance of the new estimator is explained by a simulation study and a numerical example.
Observation Impacts for Longer Forecast Lead-Times
NASA Astrophysics Data System (ADS)
Mahajan, R.; Gelaro, R.; Todling, R.
2013-12-01
Observation impact on forecasts evaluated using adjoint-based techniques (e.g. Langland and Baker, 2004) are limited by the validity of the assumptions underlying the forecasting model adjoint. Most applications of this approach have focused on deriving observation impacts on short-range forecasts (e.g. 24-hour) in part to stay well within linearization assumptions. The most widely used measure of observation impact relies on the availability of the analysis for verifying the forecasts. As pointed out by Gelaro et al. (2007), and more recently by Todling (2013), this introduces undesirable correlations in the measure that are likely to affect the resulting assessment of the observing system. Stappers and Barkmeijer (2012) introduced a technique that, in principle, allows extending the validity of tangent linear and corresponding adjoint models to longer lead-times, thereby reducing the correlations in the measures used for observation impact assessments. The methodology provides the means to better represent linearized models by making use of Gaussian quadrature relations to handle various underlying non-linear model trajectories. The formulation is exact for particular bi-linear dynamics; it corresponds to an approximation for general-type nonlinearities and must be tested for large atmospheric models. The present work investigates the approach of Stappers and Barkmeijer (2012)in the context of NASA's Goddard Earth Observing System Version 5 (GEOS-5) atmospheric data assimilation system (ADAS). The goal is to calculate observation impacts in the GEOS-5 ADAS for forecast lead-times of at least 48 hours in order to reduce the potential for undesirable correlations that occur at shorter forecast lead times. References [1]Langland, R. H., and N. L. Baker, 2004: Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus, 56A, 189-201. [2] Gelaro, R., Y. Zhu, and R. M. Errico, 2007: Examination of various-order adjoint-based approximations of observation impact. Meteoroloische Zeitschrift, 16, 685-692. [3]Stappers, R. J. J., and J. Barkmeijer, 2012: Optimal linearization trajectories for tangent linear models. Q. J. R. Meteorol. Soc., 138, 170-184. [4] Todling, R. 2013: Comparing two approaches for assessing observation impact. Mon. Wea. Rev., 141, 1484-1505.
Nicholas A. Povak; Paul F. Hessburg; Todd C. McDonnell; Keith M. Reynolds; Timothy J. Sullivan; R. Brion Salter; Bernard J. Crosby
2014-01-01
Accurate estimates of soil mineral weathering are required for regional critical load (CL) modeling to identify ecosystems at risk of the deleterious effects from acidification. Within a correlative modeling framework, we used modeled catchment-level base cation weathering (BCw) as the response variable to identify key environmental correlates and predict a continuous...
NASA Astrophysics Data System (ADS)
Guan, Lin; Fang, Yuwen; Li, Kongzhai; Zeng, Chunhua; Yang, Fengzao
2018-09-01
In this paper, we investigate the role of correlated multiplicative (κ1) and additive (κ2) noises in a modified energy conversion depot model, at which it is added a linear term in the conversion of internal energy of active Brownian particles (ABPs). The linear term (a1 ≠ 0 . 0) in energy conversion model breaks the symmetry of the potential to generate motion of the ABPs with a net transport velocity. Adopt a nonlinear Langevin approach, the transport properties of the ABPs have been discussed, and our results show that: (i) the transport velocity <υ1 > of the ABPs are always positive whether the correlation intensity λ = 0 . 0 or not; (ii) for a small value of the multiplicative noise intensity κ1, the variation of <υ1 > with λ shows a minimum, there exists an optimal value of the correlation intensity λ at which the <υ1 > of the ABPs is minimized. But for a large value of κ1, the <υ1 > monotonically decreases; (iii) the transport velocity <υ1 > increases with the increase of the κ1 or κ2, i.e., the multiplicative or additive noise can facilitate the transport of the ABPs; and (iv) the effective diffusion increases with the increase of a1, namely, the linear term in modified energy conversion model of the ABPs can enhance the diffusion of the ABPs.
Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.
2015-01-01
Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
Dong, J Q; Zhang, X Y; Wang, S Z; Jiang, X F; Zhang, K; Ma, G W; Wu, M Q; Li, H; Zhang, H
2018-01-01
Plasma very low-density lipoprotein (VLDL) can be used to select for low body fat or abdominal fat (AF) in broilers, but its correlation with AF is limited. We investigated whether any other biochemical indicator can be used in combination with VLDL for a better selective effect. Nineteen plasma biochemical indicators were measured in male chickens from the Northeast Agricultural University broiler lines divergently selected for AF content (NEAUHLF) in the fed state at 46 and 48 d of age. The average concentration of every parameter for the 2 d was used for statistical analysis. Levels of these 19 plasma biochemical parameters were compared between the lean and fat lines. The phenotypic correlations between these plasma biochemical indicators and AF traits were analyzed. Then, multiple linear regression models were constructed to select the best model used for selecting against AF content. and the heritabilities of plasma indicators contained in the best models were estimated. The results showed that 11 plasma biochemical indicators (triglycerides, total bile acid, total protein, globulin, albumin/globulin, aspartate transaminase, alanine transaminase, gamma-glutamyl transpeptidase, uric acid, creatinine, and VLDL) differed significantly between the lean and fat lines (P < 0.01), and correlated significantly with AF traits (P < 0.05). The best multiple linear regression models based on albumin/globulin, VLDL, triglycerides, globulin, total bile acid, and uric acid, had higher R2 (0.73) than the model based only on VLDL (0.21). The plasma parameters included in the best models had moderate heritability estimates (0.21 ≤ h2 ≤ 0.43). These results indicate that these multiple linear regression models can be used to select for lean broiler chickens. © 2017 Poultry Science Association Inc.
Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.
Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg
2009-11-01
G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
CORRELATION PURSUIT: FORWARD STEPWISE VARIABLE SELECTION FOR INDEX MODELS
Zhong, Wenxuan; Zhang, Tingting; Zhu, Yu; Liu, Jun S.
2012-01-01
In this article, a stepwise procedure, correlation pursuit (COP), is developed for variable selection under the sufficient dimension reduction framework, in which the response variable Y is influenced by the predictors X1, X2, …, Xp through an unknown function of a few linear combinations of them. Unlike linear stepwise regression, COP does not impose a special form of relationship (such as linear) between the response variable and the predictor variables. The COP procedure selects variables that attain the maximum correlation between the transformed response and the linear combination of the variables. Various asymptotic properties of the COP procedure are established, and in particular, its variable selection performance under diverging number of predictors and sample size has been investigated. The excellent empirical performance of the COP procedure in comparison with existing methods are demonstrated by both extensive simulation studies and a real example in functional genomics. PMID:23243388
Performance Metrics, Error Modeling, and Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling
2016-01-01
A common set of statistical metrics has been used to summarize the performance of models or measurements- the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
ISW-galaxy cross-correlation in K-mouflage
NASA Astrophysics Data System (ADS)
Benevento, G.; Bartolo, N.; Liguori, M.
2018-01-01
Cross-correlations between the cosmic microwave background and the galaxy distribution can probe the linear growth rate of cosmic structures, thus providing a powerful tool to investigate different Dark Energy and Modified Gravity models. We explore the possibility of using this observable to probe a particular class of Modified Gravity models, called K-mouflage.
Gómez-Extremera, Manuel; Carpena, Pedro; Ivanov, Plamen Ch; Bernaola-Galván, Pedro A
2016-04-01
We systematically study the scaling properties of the magnitude and sign of the fluctuations in correlated time series, which is a simple and useful approach to distinguish between systems with different dynamical properties but the same linear correlations. First, we decompose artificial long-range power-law linearly correlated time series into magnitude and sign series derived from the consecutive increments in the original series, and we study their correlation properties. We find analytical expressions for the correlation exponent of the sign series as a function of the exponent of the original series. Such expressions are necessary for modeling surrogate time series with desired scaling properties. Next, we study linear and nonlinear correlation properties of series composed as products of independent magnitude and sign series. These surrogate series can be considered as a zero-order approximation to the analysis of the coupling of magnitude and sign in real data, a problem still open in many fields. We find analytical results for the scaling behavior of the composed series as a function of the correlation exponents of the magnitude and sign series used in the composition, and we determine the ranges of magnitude and sign correlation exponents leading to either single scaling or to crossover behaviors. Finally, we obtain how the linear and nonlinear properties of the composed series depend on the correlation exponents of their magnitude and sign series. Based on this information we propose a method to generate surrogate series with controlled correlation exponent and multifractal spectrum.
NASA Astrophysics Data System (ADS)
Thingbijam, Kiran Kumar; Galis, Martin; Vyas, Jagdish; Mai, P. Martin
2017-04-01
We examine the spatial interdependence between kinematic parameters of earthquake rupture, which include slip, rise-time (total duration of slip), acceleration time (time-to-peak slip velocity), peak slip velocity, and rupture velocity. These parameters were inferred from dynamic rupture models obtained by simulating spontaneous rupture on faults with varying degree of surface-roughness. We observe that the correlations between these parameters are better described by non-linear correlations (that is, on logarithm-logarithm scale) than by linear correlations. Slip and rise-time are positively correlated while these two parameters do not correlate with acceleration time, peak slip velocity, and rupture velocity. On the other hand, peak slip velocity correlates positively with rupture velocity but negatively with acceleration time. Acceleration time correlates negatively with rupture velocity. However, the observed correlations could be due to weak heterogeneity of the slip distributions given by the dynamic models. Therefore, the observed correlations may apply only to those parts of rupture plane with weak slip heterogeneity if earthquake-rupture associate highly heterogeneous slip distributions. Our findings will help to improve pseudo-dynamic rupture generators for efficient broadband ground-motion simulations for seismic hazard studies.
NASA Astrophysics Data System (ADS)
Ionita, Ciprian N.; Bednarek, Daniel R.; Rudin, Stephen
2012-03-01
Intracranial aneurysm treatment with flow diverters (FD) is a new minimally invasive approach, recently approved for use in human patients. Attempts to correlate the flow reduction observed in angiograms with a parameter related to the FD structure have not been totally successful. To find the proper parameter, we investigated four porous-media flow models. The models describing the relation between the pressure drop and flow velocity that are investigated include the capillary theory linear model (CTLM), the drag force linear model (DFLM), the simple quadratic model (SQM) and the modified quadratic model (MQM). Proportionality parameters are referred to as permeability for the linear models and resistance for the quadratic ones. A two stage experiment was performed. First, we verified flow model validity by placing six different stainless-steel meshes, resembling FD structures, in known flow conditions. The best flow model was used for the second stage, where six different FD's were inserted in aneurysm phantoms and flow modification was estimated using angiographically derived time density curves (TDC). Finally, TDC peak variation was compared with the FD parameter. Model validity experiments indicated errors of: 70% for the linear models, 26% for the SQM and 7% for the MQM. The resistance calculated according to the MQM model correlated well with the contrast flow reduction. Results indicate that resistance calculated according to MQM is appropriate to characterize the FD and could explain the flow modification observed in angiograms.
Can a minimalist model of wind forced baroclinic Rossby waves produce reasonable results?
NASA Astrophysics Data System (ADS)
Watanabe, Wandrey B.; Polito, Paulo S.; da Silveira, Ilson C. A.
2016-04-01
The linear theory predicts that Rossby waves are the large scale mechanism of adjustment to perturbations of the geophysical fluid. Satellite measurements of sea level anomaly (SLA) provided sturdy evidence of the existence of these waves. Recent studies suggest that the variability in the altimeter records is mostly due to mesoscale nonlinear eddies and challenges the original interpretation of westward propagating features as Rossby waves. The objective of this work is to test whether a classic linear dynamic model is a reasonable explanation for the observed SLA. A linear-reduced gravity non-dispersive Rossby wave model is used to estimate the SLA forced by direct and remote wind stress. Correlations between model results and observations are up to 0.88. The best agreement is in the tropical region of all ocean basins. These correlations decrease towards insignificance in mid-latitudes. The relative contributions of eastern boundary (remote) forcing and local wind forcing in the generation of Rossby waves are also estimated and suggest that the main wave forming mechanism is the remote forcing. Results suggest that linear long baroclinic Rossby wave dynamics explain a significant part of the SLA annual variability at least in the tropical oceans.
Yoo, Kwangsun; Rosenberg, Monica D; Hsu, Wei-Ting; Zhang, Sheng; Li, Chiang-Shan R; Scheinost, Dustin; Constable, R Todd; Chun, Marvin M
2018-02-15
Connectome-based predictive modeling (CPM; Finn et al., 2015; Shen et al., 2017) was recently developed to predict individual differences in traits and behaviors, including fluid intelligence (Finn et al., 2015) and sustained attention (Rosenberg et al., 2016a), from functional brain connectivity (FC) measured with fMRI. Here, using the CPM framework, we compared the predictive power of three different measures of FC (Pearson's correlation, accordance, and discordance) and two different prediction algorithms (linear and partial least square [PLS] regression) for attention function. Accordance and discordance are recently proposed FC measures that respectively track in-phase synchronization and out-of-phase anti-correlation (Meskaldji et al., 2015). We defined connectome-based models using task-based or resting-state FC data, and tested the effects of (1) functional connectivity measure and (2) feature-selection/prediction algorithm on individualized attention predictions. Models were internally validated in a training dataset using leave-one-subject-out cross-validation, and externally validated with three independent datasets. The training dataset included fMRI data collected while participants performed a sustained attention task and rested (N = 25; Rosenberg et al., 2016a). The validation datasets included: 1) data collected during performance of a stop-signal task and at rest (N = 83, including 19 participants who were administered methylphenidate prior to scanning; Farr et al., 2014a; Rosenberg et al., 2016b), 2) data collected during Attention Network Task performance and rest (N = 41, Rosenberg et al., in press), and 3) resting-state data and ADHD symptom severity from the ADHD-200 Consortium (N = 113; Rosenberg et al., 2016a). Models defined using all combinations of functional connectivity measure (Pearson's correlation, accordance, and discordance) and prediction algorithm (linear and PLS regression) predicted attentional abilities, with correlations between predicted and observed measures of attention as high as 0.9 for internal validation, and 0.6 for external validation (all p's < 0.05). Models trained on task data outperformed models trained on rest data. Pearson's correlation and accordance features generally showed a small numerical advantage over discordance features, while PLS regression models were usually better than linear regression models. Overall, in addition to correlation features combined with linear models (Rosenberg et al., 2016a), it is useful to consider accordance features and PLS regression for CPM. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Borjigin, Sumuya; Yang, Yating; Yang, Xiaoguang; Sun, Leilei
2018-03-01
Many researchers have realized that there is a strong correlation between stock prices and macroeconomy. In order to make this relationship clear, a lot of studies have been done. However, the causal relationship between stock prices and macroeconomy has still not been well explained. A key point is that, most of the existing research adopts linear and stable models to investigate the correlation of stock prices and macroeconomy, while the real causality of that may be nonlinear and dynamic. To fill this research gap, we investigate the nonlinear and dynamic causal relationships between stock prices and macroeconomy. Based on the case of China's stock prices and acroeconomy measures from January 1992 to March 2017, we compare the linear Granger causality test models with nonlinear ones. Results demonstrate that the nonlinear dynamic Granger causality is much stronger than linear Granger causality. From the perspective of nonlinear dynamic Granger causality, China's stock prices can be viewed as "national economic barometer". On the one hand, this study will encourage researchers to take nonlinearity and dynamics into account when they investigate the correlation of stock prices and macroeconomy; on the other hand, our research can guide regulators and investors to make better decisions.
Koerner, Tess K; Zhang, Yang
2017-02-27
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining strengths between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity to apply mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
Modelling female fertility traits in beef cattle using linear and non-linear models.
Naya, H; Peñagaricano, F; Urioste, J I
2017-06-01
Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somehow incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are kept for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired in the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models to three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models overperformed their linear counterparts. For models derived from CD, linear versions displayed better adjustment than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h 2 < 0.08 and r < 0.13, for linear models; h 2 > 0.23 and r > 0.24, for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.
Advanced Statistical Analyses to Reduce Inconsistency of Bond Strength Data.
Minamino, T; Mine, A; Shintani, A; Higashi, M; Kawaguchi-Uemura, A; Kabetani, T; Hagino, R; Imai, D; Tajiri, Y; Matsumoto, M; Yatani, H
2017-11-01
This study was designed to clarify the interrelationship of factors that affect the value of microtensile bond strength (µTBS), focusing on nondestructive testing by which information of the specimens can be stored and quantified. µTBS test specimens were prepared from 10 noncarious human molars. Six factors of µTBS test specimens were evaluated: presence of voids at the interface, X-ray absorption coefficient of resin, X-ray absorption coefficient of dentin, length of dentin part, size of adhesion area, and individual differences of teeth. All specimens were observed nondestructively by optical coherence tomography and micro-computed tomography before µTBS testing. After µTBS testing, the effect of these factors on µTBS data was analyzed by the general linear model, linear mixed effects regression model, and nonlinear regression model with 95% confidence intervals. By the general linear model, a significant difference in individual differences of teeth was observed ( P < 0.001). A significantly positive correlation was shown between µTBS and length of dentin part ( P < 0.001); however, there was no significant nonlinearity ( P = 0.157). Moreover, a significantly negative correlation was observed between µTBS and size of adhesion area ( P = 0.001), with significant nonlinearity ( P = 0.014). No correlation was observed between µTBS and X-ray absorption coefficient of resin ( P = 0.147), and there was no significant nonlinearity ( P = 0.089). Additionally, a significantly positive correlation was observed between µTBS and X-ray absorption coefficient of dentin ( P = 0.022), with significant nonlinearity ( P = 0.036). A significant difference was also observed between the presence and absence of voids by linear mixed effects regression analysis. Our results showed correlations between various parameters of tooth specimens and µTBS data. To evaluate the performance of the adhesive more precisely, the effect of tooth variability and a method to reduce variation in bond strength values should also be considered.
Ayres, D R; Pereira, R J; Boligon, A A; Silva, F F; Schenkel, F S; Roso, V M; Albuquerque, L G
2013-12-01
Cattle resistance to ticks is measured by the number of ticks infesting the animal. The model used for the genetic analysis of cattle resistance to ticks frequently requires logarithmic transformation of the observations. The objective of this study was to evaluate the predictive ability and goodness of fit of different models for the analysis of this trait in cross-bred Hereford x Nellore cattle. Three models were tested: a linear model using logarithmic transformation of the observations (MLOG); a linear model without transformation of the observations (MLIN); and a generalized linear Poisson model with residual term (MPOI). All models included the classificatory effects of contemporary group and genetic group and the covariates age of animal at the time of recording and individual heterozygosis, as well as additive genetic effects as random effects. Heritability estimates were 0.08 ± 0.02, 0.10 ± 0.02 and 0.14 ± 0.04 for MLIN, MLOG and MPOI models, respectively. The model fit quality, verified by deviance information criterion (DIC) and residual mean square, indicated fit superiority of MPOI model. The predictive ability of the models was compared by validation test in independent sample. The MPOI model was slightly superior in terms of goodness of fit and predictive ability, whereas the correlations between observed and predicted tick counts were practically the same for all models. A higher rank correlation between breeding values was observed between models MLOG and MPOI. Poisson model can be used for the selection of tick-resistant animals. © 2013 Blackwell Verlag GmbH.
Noise Suppression and Surplus Synchrony by Coincidence Detection
Schultze-Kraft, Matthias; Diesmann, Markus; Grün, Sonja; Helias, Moritz
2013-01-01
The functional significance of correlations between action potentials of neurons is still a matter of vivid debate. In particular, it is presently unclear how much synchrony is caused by afferent synchronized events and how much is intrinsic due to the connectivity structure of cortex. The available analytical approaches based on the diffusion approximation do not allow to model spike synchrony, preventing a thorough analysis. Here we theoretically investigate to what extent common synaptic afferents and synchronized inputs each contribute to correlated spiking on a fine temporal scale between pairs of neurons. We employ direct simulation and extend earlier analytical methods based on the diffusion approximation to pulse-coupling, allowing us to introduce precisely timed correlations in the spiking activity of the synaptic afferents. We investigate the transmission of correlated synaptic input currents by pairs of integrate-and-fire model neurons, so that the same input covariance can be realized by common inputs or by spiking synchrony. We identify two distinct regimes: In the limit of low correlation linear perturbation theory accurately determines the correlation transmission coefficient, which is typically smaller than unity, but increases sensitively even for weakly synchronous inputs. In the limit of high input correlation, in the presence of synchrony, a qualitatively new picture arises. As the non-linear neuronal response becomes dominant, the output correlation becomes higher than the total correlation in the input. This transmission coefficient larger unity is a direct consequence of non-linear neural processing in the presence of noise, elucidating how synchrony-coded signals benefit from these generic properties present in cortical networks. PMID:23592953
Linearized radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements
NASA Astrophysics Data System (ADS)
Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego
2018-07-01
In this paper, we describe several linearized radiative transfer models which can be used for the retrieval of cloud parameters from EPIC (Earth Polychromatic Imaging Camera) measurements. The approaches under examination are (1) the linearized forward approach, represented in this paper by the linearized discrete ordinate and matrix operator methods with matrix exponential, and (2) the forward-adjoint approach based on the discrete ordinate method with matrix exponential. To enhance the performance of the radiative transfer computations, the correlated k-distribution method and the Principal Component Analysis (PCA) technique are used. We provide a compact description of the proposed methods, as well as a numerical analysis of their accuracy and efficiency when simulating EPIC measurements in the oxygen A-band channel at 764 nm. We found that the computation time of the forward-adjoint approach using the correlated k-distribution method in conjunction with PCA is approximately 13 s for simultaneously computing the derivatives with respect to cloud optical thickness and cloud top height.
Gartner, Thomas E; Jayaraman, Arthi
2018-01-17
In this paper, we apply molecular simulation and liquid state theory to uncover the structure and thermodynamics of homopolymer blends of the same chemistry and varying chain architecture in the presence of explicit solvent species. We use hybrid Monte Carlo (MC)/molecular dynamics (MD) simulations in the Gibbs ensemble to study the swelling of ∼12 000 g mol -1 linear, cyclic, and 4-arm star polystyrene chains in toluene. Our simulations show that the macroscopic swelling response is indistinguishable between the various architectures and matches published experimental data for the solvent annealing of linear polystyrene by toluene vapor. We then use standard MD simulations in the NPT ensemble along with polymer reference interaction site model (PRISM) theory to calculate effective polymer-solvent and polymer-polymer Flory-Huggins interaction parameters (χ eff ) in these systems. As seen in the macroscopic swelling results, there are no significant differences in the polymer-solvent and polymer-polymer χ eff between the various architectures. Despite similar macroscopic swelling and effective interaction parameters between various architectures, the pair correlation function between chain centers-of-mass indicates stronger correlations between cyclic or star chains in the linear-cyclic blends and linear-star blends, compared to linear chain-linear chain correlations. Furthermore, we note striking similarities in the chain-level correlations and the radius of gyration of cyclic and 4-arm star architectures of identical molecular weight. Our results indicate that the cyclic and star chains are 'smaller' and 'harder' than their linear counterparts, and through comparison with MD simulations of blends of soft spheres with varying hardness and size we suggest that these macromolecular characteristics are the source of the stronger cyclic-cyclic and star-star correlations.
Comparison of co-expression measures: mutual information, correlation, and model based indices.
Song, Lin; Langfelder, Peter; Horvath, Steve
2012-12-09
Co-expression measures are often used to define networks among genes. Mutual information (MI) is often used as a generalized correlation measure. It is not clear how much MI adds beyond standard (robust) correlation measures or regression model based association measures. Further, it is important to assess what transformations of these and other co-expression measures lead to biologically meaningful modules (clusters of genes). We provide a comprehensive comparison between mutual information and several correlation measures in 8 empirical data sets and in simulations. We also study different approaches for transforming an adjacency matrix, e.g. using the topological overlap measure. Overall, we confirm close relationships between MI and correlation in all data sets which reflects the fact that most gene pairs satisfy linear or monotonic relationships. We discuss rare situations when the two measures disagree. We also compare correlation and MI based approaches when it comes to defining co-expression network modules. We show that a robust measure of correlation (the biweight midcorrelation transformed via the topological overlap transformation) leads to modules that are superior to MI based modules and maximal information coefficient (MIC) based modules in terms of gene ontology enrichment. We present a function that relates correlation to mutual information which can be used to approximate the mutual information from the corresponding correlation coefficient. We propose the use of polynomial or spline regression models as an alternative to MI for capturing non-linear relationships between quantitative variables. The biweight midcorrelation outperforms MI in terms of elucidating gene pairwise relationships. Coupled with the topological overlap matrix transformation, it often leads to more significantly enriched co-expression modules. Spline and polynomial networks form attractive alternatives to MI in case of non-linear relationships. Our results indicate that MI networks can safely be replaced by correlation networks when it comes to measuring co-expression relationships in stationary data.
Correlation Weights in Multiple Regression
ERIC Educational Resources Information Center
Waller, Niels G.; Jones, Jeff A.
2010-01-01
A general theory on the use of correlation weights in linear prediction has yet to be proposed. In this paper we take initial steps in developing such a theory by describing the conditions under which correlation weights perform well in population regression models. Using OLS weights as a comparison, we define cases in which the two weighting…
Volatility of linear and nonlinear time series
NASA Astrophysics Data System (ADS)
Kalisky, Tomer; Ashkenazy, Yosef; Havlin, Shlomo
2005-07-01
Previous studies indicated that nonlinear properties of Gaussian distributed time series with long-range correlations, ui , can be detected and quantified by studying the correlations in the magnitude series ∣ui∣ , the “volatility.” However, the origin for this empirical observation still remains unclear and the exact relation between the correlations in ui and the correlations in ∣ui∣ is still unknown. Here we develop analytical relations between the scaling exponent of linear series ui and its magnitude series ∣ui∣ . Moreover, we find that nonlinear time series exhibit stronger (or the same) correlations in the magnitude time series compared with linear time series with the same two-point correlations. Based on these results we propose a simple model that generates multifractal time series by explicitly inserting long range correlations in the magnitude series; the nonlinear multifractal time series is generated by multiplying a long-range correlated time series (that represents the magnitude series) with uncorrelated time series [that represents the sign series sgn(ui) ]. We apply our techniques on daily deep ocean temperature records from the equatorial Pacific, the region of the El-Ninõ phenomenon, and find: (i) long-range correlations from several days to several years with 1/f power spectrum, (ii) significant nonlinear behavior as expressed by long-range correlations of the volatility series, and (iii) broad multifractal spectrum.
Identifying Crucial Parameter Correlations Maintaining Bursting Activity
Doloc-Mihu, Anca; Calabrese, Ronald L.
2014-01-01
Recent experimental and computational studies suggest that linearly correlated sets of parameters (intrinsic and synaptic properties of neurons) allow central pattern-generating networks to produce and maintain their rhythmic activity regardless of changing internal and external conditions. To determine the role of correlated conductances in the robust maintenance of functional bursting activity, we used our existing database of half-center oscillator (HCO) model instances of the leech heartbeat CPG. From the database, we identified functional activity groups of burster (isolated neuron) and half-center oscillator model instances and realistic subgroups of each that showed burst characteristics (principally period and spike frequency) similar to the animal. To find linear correlations among the conductance parameters maintaining functional leech bursting activity, we applied Principal Component Analysis (PCA) to each of these four groups. PCA identified a set of three maximal conductances (leak current, Leak; a persistent K current, K2; and of a persistent Na+ current, P) that correlate linearly for the two groups of burster instances but not for the HCO groups. Visualizations of HCO instances in a reduced space suggested that there might be non-linear relationships between these parameters for these instances. Experimental studies have shown that period is a key attribute influenced by modulatory inputs and temperature variations in heart interneurons. Thus, we explored the sensitivity of period to changes in maximal conductances of Leak, K2, and P, and we found that for our realistic bursters the effect of these parameters on period could not be assessed because when varied individually bursting activity was not maintained. PMID:24945358
NASA Astrophysics Data System (ADS)
Morén, B.; Larsson, T.; Carlsson Tedgren, Å.
2018-03-01
High dose-rate brachytherapy is a method for cancer treatment where the radiation source is placed within the body, inside or close to a tumour. For dose planning, mathematical optimization techniques are being used in practice and the most common approach is to use a linear model which penalizes deviations from specified dose limits for the tumour and for nearby organs. This linear penalty model is easy to solve, but its weakness lies in the poor correlation of its objective value and the dose-volume objectives that are used clinically to evaluate dose distributions. Furthermore, the model contains parameters that have no clear clinical interpretation. Another approach for dose planning is to solve mixed-integer optimization models with explicit dose-volume constraints which include parameters that directly correspond to dose-volume objectives, and which are therefore tangible. The two mentioned models take the overall goals for dose planning into account in fundamentally different ways. We show that there is, however, a mathematical relationship between them by deriving a linear penalty model from a dose-volume model. This relationship has not been established before and improves the understanding of the linear penalty model. In particular, the parameters of the linear penalty model can be interpreted as dual variables in the dose-volume model.
Modified Regression Correlation Coefficient for Poisson Regression Model
NASA Astrophysics Data System (ADS)
Kaengthong, Nattacha; Domthong, Uthumporn
2017-09-01
This study gives attention to indicators in predictive power of the Generalized Linear Model (GLM) which are widely used; however, often having some restrictions. We are interested in regression correlation coefficient for a Poisson regression model. This is a measure of predictive power, and defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is distributed as Poisson. The purpose of this research was modifying regression correlation coefficient for Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and having multicollinearity in independent variables. The result shows that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient based on Bias and the Root Mean Square Error (RMSE).
Linear Modeling and Evaluation of Controls on Flow Response in Western Post-Fire Watersheds
NASA Astrophysics Data System (ADS)
Saxe, S.; Hogue, T. S.; Hay, L.
2015-12-01
This research investigates the impact of wildfires on watershed flow regimes throughout the western United States, specifically focusing on evaluation of fire events within specified subregions and determination of the impact of climate and geophysical variables in post-fire flow response. Fire events were collected through federal and state-level databases and streamflow data were collected from U.S. Geological Survey stream gages. 263 watersheds were identified with at least 10 years of continuous pre-fire daily streamflow records and 5 years of continuous post-fire daily flow records. For each watershed, percent changes in runoff ratio (RO), annual seven day low-flows (7Q2) and annual seven day high-flows (7Q10) were calculated from pre- to post-fire. Numerous independent variables were identified for each watershed and fire event, including topographic, land cover, climate, burn severity, and soils data. The national watersheds were divided into five regions through K-clustering and a lasso linear regression model, applying the Leave-One-Out calibration method, was calculated for each region. Nash-Sutcliffe Efficiency (NSE) was used to determine the accuracy of the resulting models. The regions encompassing the United States along and west of the Rocky Mountains, excluding the coastal watersheds, produced the most accurate linear models. The Pacific coast region models produced poor and inconsistent results, indicating that the regions need to be further subdivided. Presently, RO and HF response variables appear to be more easily modeled than LF. Results of linear regression modeling showed varying importance of watershed and fire event variables, with conflicting correlation between land cover types and soil types by region. The addition of further independent variables and constriction of current variables based on correlation indicators is ongoing and should allow for more accurate linear regression modeling.
Normative biometrics for fetal ocular growth using volumetric MRI reconstruction.
Velasco-Annis, Clemente; Gholipour, Ali; Afacan, Onur; Prabhu, Sanjay P; Estroff, Judy A; Warfield, Simon K
2015-04-01
To determine normative ranges for fetal ocular biometrics between 19 and 38 weeks gestational age (GA) using volumetric MRI reconstruction. The 3D images of 114 healthy fetuses between 19 and 38 weeks GA were created using super-resolution volume reconstructions from MRI slice acquisitions. These 3D images were semi-automatically segmented to measure fetal orbit volume, binocular distance (BOD), interocular distance (IOD), and ocular diameter (OD). All biometry correlated with GA (Volume, Pearson's correlation coefficient (CC) = 0.9680; BOD, CC = 0.9552; OD, CC = 0.9445; and IOD, CC = 0.8429), and growth curves were plotted against linear and quadratic growth models. Regression analysis showed quadratic models to best fit BOD, IOD, and OD and a linear model to best fit volume. Orbital volume had the greatest correlation with GA, although BOD and OD also showed strong correlation. The normative data found in this study may be helpful for the detection of congenital fetal anomalies with more consistent measurements than are currently available. © 2015 John Wiley & Sons, Ltd. © 2015 John Wiley & Sons, Ltd.
Ekdahl, Anja; Johansson, Maria C; Ahnoff, Martin
2013-04-01
Matrix effects on electrospray ionization were investigated for plasma samples analysed by hydrophilic interaction chromatography (HILIC) in gradient elution mode, and HILIC columns of different chemistries were tested for separation of plasma components and model analytes. By combining mass spectral data with post-column infusion traces, the following components of protein-precipitated plasma were identified and found to have significant effect on ionization: urea, creatinine, phosphocholine, lysophosphocholine, sphingomyelin, sodium ion, chloride ion, choline and proline betaine. The observed effect on ionization was both matrix-component and analyte dependent. The separation of identified plasma components and model analytes on eight columns was compared, using pair-wise linear correlation analysis and principal component analysis (PCA). Large changes in selectivity could be obtained by change of column, while smaller changes were seen when the mobile phase buffer was changed from ammonium formate pH 3.0 to ammonium acetate pH 4.5. While results from PCA and linear correlation analysis were largely in accord, linear correlation analysis was judged to be more straight-forward in terms of conduction and interpretation.
Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming
2016-01-01
Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible to model the burnout process, (2) sensitivity analysis is a prolific method to study the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method to analyse non-linear relationships and in combination with sensitivity analysis superior to linear methods.
Yin, Ping; Xiong, Hua; Liu, Yi; Sah, Shambhu K; Zeng, Chun; Wang, Jingjie; Li, Yongmei; Hong, Nan
2018-01-01
To investigate the application value of using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) with extended Tofts linear model for relapsing-remitting multiple sclerosis (RRMS) and its correlation with expanded disability status scale (EDSS) scores and disease duration. Thirty patients with multiple sclerosis (MS) underwent conventional magnetic resonance imaging (MRI) and DCE-MRI with a 3.0 Tesla MR scanner. An extended Tofts linear model was used to quantitatively measure MR imaging biomarkers. The histogram parameters and correlation among imaging biomarkers, EDSS scores, and disease duration were also analyzed. The MR imaging biomarkers volume transfer constant (K trans ), volume of the extravascular extracellular space per unit volume of tissue (Ve), fractional plasma volume (V p ), cerebral blood flow (CBF), and cerebral blood volume (CBV) of contrast-enhancing (CE) lesions were significantly higher (P < 0.05) than those of nonenhancing (NE) lesions and normal-appearing white matter (NAWM) regions. The skewness of Ve value in CE lesions was more close to normal distribution. There was no significant correlation among the biomarkers with the EDSS scores and disease duration (P > 0.05). Our study demonstrates that the DCE-MRI with the extended Tofts linear model can measure the permeability and perfusion characteristic in MS lesions and in NAWM regions. The K trans , Ve, Vp, CBF, and CBV of CE lesions were significantly higher than that of NE lesions. The skewness of Ve value in CE lesions was more close to normal distribution, indicating that the histogram can be helpful to distinguish the pathology of MS lesions.
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
From spiking neuron models to linear-nonlinear models.
Ostojic, Srdjan; Brunel, Nicolas
2011-01-20
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
Koerner, Tess K.; Zhang, Yang
2017-01-01
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining strengths between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity to apply mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers. PMID:28264422
Quantitative model of diffuse speckle contrast analysis for flow measurement.
Liu, Jialin; Zhang, Hongchao; Lu, Jian; Ni, Xiaowu; Shen, Zhonghua
2017-07-01
Diffuse speckle contrast analysis (DSCA) is a noninvasive optical technique capable of monitoring deep tissue blood flow. However, a detailed study of the speckle contrast model for DSCA has yet to be presented. We deduced the theoretical relationship between speckle contrast and exposure time and further simplified it to a linear approximation model. The feasibility of this linear model was validated by the liquid phantoms which demonstrated that the slope of this linear approximation was able to rapidly determine the Brownian diffusion coefficient of the turbid media at multiple distances using multiexposure speckle imaging. Furthermore, we have theoretically quantified the influence of optical property on the measurements of the Brownian diffusion coefficient which was a consequence of the fact that the slope of this linear approximation was demonstrated to be equal to the inverse of correlation time of the speckle.
Anumol, Tarun; Sgroi, Massimiliano; Park, Minkyu; Roccaro, Paolo; Snyder, Shane A
2015-06-01
This study investigated the applicability of bulk organic parameters like dissolved organic carbon (DOC), UV absorbance at 254 nm (UV254), and total fluorescence (TF) to act as surrogates in predicting trace organic compound (TOrC) removal by granular activated carbon in water reuse applications. Using rapid small-scale column testing, empirical linear correlations for thirteen TOrCs were determined with DOC, UV254, and TF in four wastewater effluents. Linear correlations (R(2) > 0.7) were obtained for eight TOrCs in each water quality in the UV254 model, while ten TOrCs had R(2) > 0.7 in the TF model. Conversely, DOC was shown to be a poor surrogate for TOrC breakthrough prediction. When the data from all four water qualities was combined, good linear correlations were still obtained with TF having higher R(2) than UV254 especially for TOrCs with log Dow>1. Excellent linear relationship (R(2) > 0.9) between log Dow and the removal of TOrC at 0% surrogate removal (y-intercept) were obtained for the five neutral TOrCs tested in this study. Positively charged TOrCs had enhanced removals due to electrostatic interactions with negatively charged GAC that caused them to deviate from removals that would be expected with their log Dow. Application of the empirical linear correlation models to full-scale samples provided good results for six of seven TOrCs (except meprobamate) tested when comparing predicted TOrC removal by UV254 and TF with actual removals for GAC in all the five samples tested. Surrogate predictions using UV254 and TF provide valuable tools for rapid or on-line monitoring of GAC performance and can result in cost savings by extended GAC run times as compared to using DOC breakthrough to trigger regeneration or replacement. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu
2014-05-12
Conventional semi-infinite solution for extracting blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in estimation of BFI (αD{sub B}) in tissues with small volume and large curvature. We proposed an algorithm integrating Nth-order linear model of autocorrelation function with the Monte Carlo simulation of photon migrations in tissue for the extraction of αD{sub B}. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcome the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo strokemore » model of mouse. Computer simulations shows that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD{sub B} (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noises to DCS data resulted in αD{sub B} variations, the mean values of errors in extracting αD{sub B} were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD{sub B} using both linear algorithm and semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows the potential for the inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.« less
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Kent, James
2015-01-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
A Linear Model of Phase-Dependent Power Correlations in Neuronal Oscillations
Eriksson, David; Vicente, Raul; Schmidt, Kerstin
2011-01-01
Recently, it has been suggested that effective interactions between two neuronal populations are supported by the phase difference between the oscillations in these two populations, a hypothesis referred to as “communication through coherence” (CTC). Experimental work quantified effective interactions by means of the power correlations between the two populations, where power was calculated on the local field potential and/or multi-unit activity. Here, we present a linear model of interacting oscillators that accounts for the phase dependency of the power correlation between the two populations and that can be used as a reference for detecting non-linearities such as gain control. In the experimental analysis, trials were sorted according to the coupled phase difference of the oscillators while the putative interaction between oscillations was taking place. Taking advantage of the modeling, we further studied the dependency of the power correlation on the uncoupled phase difference, connection strength, and topology. Since the uncoupled phase difference, i.e., the phase relation before the effective interaction, is the causal variable in the CTC hypothesis we also describe how power correlations depend on that variable. For uni-directional connectivity we observe that the width of the uncoupled phase dependency is broader than for the coupled phase. Furthermore, the analytical results show that the characteristics of the phase dependency change when a bidirectional connection is assumed. The width of the phase dependency indicates which oscillation frequencies are optimal for a given connection delay distribution. We propose that a certain width enables a stimulus-contrast dependent extent of effective long-range lateral connections. PMID:21808618
Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D
2016-05-01
Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
Effective Perron-Frobenius eigenvalue for a correlated random map
NASA Astrophysics Data System (ADS)
Pool, Roman R.; Cáceres, Manuel O.
2010-09-01
We investigate the evolution of random positive linear maps with various type of disorder by analytic perturbation and direct simulation. Our theoretical result indicates that the statistics of a random linear map can be successfully described for long time by the mean-value vector state. The growth rate can be characterized by an effective Perron-Frobenius eigenvalue that strongly depends on the type of correlation between the elements of the projection matrix. We apply this approach to an age-structured population dynamics model. We show that the asymptotic mean-value vector state characterizes the population growth rate when the age-structured model has random vital parameters. In this case our approach reveals the nontrivial dependence of the effective growth rate with cross correlations. The problem was reduced to the calculation of the smallest positive root of a secular polynomial, which can be obtained by perturbations in terms of Green’s function diagrammatic technique built with noncommutative cumulants for arbitrary n -point correlations.
Experimental demonstration of nonbilocal quantum correlations.
Saunders, Dylan J; Bennet, Adam J; Branciard, Cyril; Pryde, Geoff J
2017-04-01
Quantum mechanics admits correlations that cannot be explained by local realistic models. The most studied models are the standard local hidden variable models, which satisfy the well-known Bell inequalities. To date, most works have focused on bipartite entangled systems. We consider correlations between three parties connected via two independent entangled states. We investigate the new type of so-called "bilocal" models, which correspondingly involve two independent hidden variables. These models describe scenarios that naturally arise in quantum networks, where several independent entanglement sources are used. Using photonic qubits, we build such a linear three-node quantum network and demonstrate nonbilocal correlations by violating a Bell-like inequality tailored for bilocal models. Furthermore, we show that the demonstration of nonbilocality is more noise-tolerant than that of standard Bell nonlocality in our three-party quantum network.
Campos, Rafael Viegas; Cobuci, Jaime Araujo; Kern, Elisandra Lurdes; Costa, Cláudio Napolis; McManus, Concepta Margaret
2015-04-01
The objective of this study was to estimate genetic and phenotypic parameters for linear type traits, as well as milk yield (MY), fat yield (FY) and protein yield (PY) in 18,831 Holstein cows reared in 495 herds in Brazil. Restricted maximum likelihood with a bivariate model was used for estimation genetic parameters, including fixed effects of herd-year of classification, period of classification, classifier and stage of lactation for linear type traits and herd-year of calving, season of calving and lactation order effects for production traits. The age of cow at calving was fitted as a covariate (with linear and quadratic terms), common to both models. Heritability estimates varied from 0.09 to 0.38 for linear type traits and from 0.17 to 0.24 for production traits, indicating sufficient genetic variability to achieve genetic gain through selection. In general, estimates of genetic correlations between type and production traits were low, except for udder texture and angularity that showed positive genetic correlations (>0.29) with MY, FY, and PY. Udder depth had the highest negative genetic correlation (-0.30) with production traits. Selection for final score, commonly used by farmers as a practical selection tool to improve type traits, does not lead to significant improvements in production traits, thus the use of selection indices that consider both sets of traits (production and type) seems to be the most adequate to carry out genetic selection of animals in the Brazilian herd.
Campos, Rafael Viegas; Cobuci, Jaime Araujo; Kern, Elisandra Lurdes; Costa, Cláudio Napolis; McManus, Concepta Margaret
2015-01-01
The objective of this study was to estimate genetic and phenotypic parameters for linear type traits, as well as milk yield (MY), fat yield (FY) and protein yield (PY) in 18,831 Holstein cows reared in 495 herds in Brazil. Restricted maximum likelihood with a bivariate model was used for estimation genetic parameters, including fixed effects of herd-year of classification, period of classification, classifier and stage of lactation for linear type traits and herd-year of calving, season of calving and lactation order effects for production traits. The age of cow at calving was fitted as a covariate (with linear and quadratic terms), common to both models. Heritability estimates varied from 0.09 to 0.38 for linear type traits and from 0.17 to 0.24 for production traits, indicating sufficient genetic variability to achieve genetic gain through selection. In general, estimates of genetic correlations between type and production traits were low, except for udder texture and angularity that showed positive genetic correlations (>0.29) with MY, FY, and PY. Udder depth had the highest negative genetic correlation (−0.30) with production traits. Selection for final score, commonly used by farmers as a practical selection tool to improve type traits, does not lead to significant improvements in production traits, thus the use of selection indices that consider both sets of traits (production and type) seems to be the most adequate to carry out genetic selection of animals in the Brazilian herd. PMID:25656190
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu
Conventional semi-infinite analytical solutions of correlation diffusion equation may lead to errors when calculating blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements in tissues with irregular geometries. Very recently, we created an algorithm integrating a Nth-order linear model of autocorrelation function with the Monte Carlo simulation of photon migrations in homogenous tissues with arbitrary geometries for extraction of BFI (i.e., αD{sub B}). The purpose of this study is to extend the capability of the Nth-order linear algorithm for extracting BFI in heterogeneous tissues with arbitrary geometries. The previous linear algorithm was modified to extract BFIs in different typesmore » of tissues simultaneously through utilizing DCS data at multiple source-detector separations. We compared the proposed linear algorithm with the semi-infinite homogenous solution in a computer model of adult head with heterogeneous tissue layers of scalp, skull, cerebrospinal fluid, and brain. To test the capability of the linear algorithm for extracting relative changes of cerebral blood flow (rCBF) in deep brain, we assigned ten levels of αD{sub B} in the brain layer with a step decrement of 10% while maintaining αD{sub B} values constant in other layers. Simulation results demonstrate the accuracy (errors < 3%) of high-order (N ≥ 5) linear algorithm in extracting BFIs in different tissue layers and rCBF in deep brain. By contrast, the semi-infinite homogenous solution resulted in substantial errors in rCBF (34.5% ≤ errors ≤ 60.2%) and BFIs in different layers. The Nth-order linear model simplifies data analysis, thus allowing for online data processing and displaying. Future study will test this linear algorithm in heterogeneous tissues with different levels of blood flow variations and noises.« less
Inferring gene regression networks with model trees
2010-01-01
Background Novel strategies are required in order to handle the huge amount of data produced by microarray technologies. To infer gene regulatory networks, the first step is to find direct regulatory relationships between genes building the so-called gene co-expression networks. They are typically generated using correlation statistics as pairwise similarity measures. Correlation-based methods are very useful in order to determine whether two genes have a strong global similarity but do not detect local similarities. Results We propose model trees as a method to identify gene interaction networks. While correlation-based methods analyze each pair of genes, in our approach we generate a single regression tree for each gene from the remaining genes. Finally, a graph from all the relationships among output and input genes is built taking into account whether the pair of genes is statistically significant. For this reason we apply a statistical procedure to control the false discovery rate. The performance of our approach, named REGNET, is experimentally tested on two well-known data sets: Saccharomyces Cerevisiae and E.coli data set. First, the biological coherence of the results are tested. Second the E.coli transcriptional network (in the Regulon database) is used as control to compare the results to that of a correlation-based method. This experiment shows that REGNET performs more accurately at detecting true gene associations than the Pearson and Spearman zeroth and first-order correlation-based methods. Conclusions REGNET generates gene association networks from gene expression data, and differs from correlation-based methods in that the relationship between one gene and others is calculated simultaneously. Model trees are very useful techniques to estimate the numerical values for the target genes by linear regression functions. They are very often more precise than linear regression models because they can add just different linear regressions to separate areas of the search space favoring to infer localized similarities over a more global similarity. Furthermore, experimental results show the good performance of REGNET. PMID:20950452
NASA Astrophysics Data System (ADS)
Hapugoda, J. C.; Sooriyarachchi, M. R.
2017-09-01
Survival time of patients with a disease and the incidence of that particular disease (count) is frequently observed in medical studies with the data of a clustered nature. In many cases, though, the survival times and the count can be correlated in a way that, diseases that occur rarely could have shorter survival times or vice versa. Due to this fact, joint modelling of these two variables will provide interesting and certainly improved results than modelling these separately. Authors have previously proposed a methodology using Generalized Linear Mixed Models (GLMM) by joining the Discrete Time Hazard model with the Poisson Regression model to jointly model survival and count model. As Aritificial Neural Network (ANN) has become a most powerful computational tool to model complex non-linear systems, it was proposed to develop a new joint model of survival and count of Dengue patients of Sri Lanka by using that approach. Thus, the objective of this study is to develop a model using ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare the model fit, measures such as root mean square error (RMSE), absolute mean error (AME) and correlation coefficient (R) were used. The measures indicate the GRNN model fits the data better than the GLMM model.
Kreula, J. M.; Clark, S. R.; Jaksch, D.
2016-01-01
We propose a non-linear, hybrid quantum-classical scheme for simulating non-equilibrium dynamics of strongly correlated fermions described by the Hubbard model in a Bethe lattice in the thermodynamic limit. Our scheme implements non-equilibrium dynamical mean field theory (DMFT) and uses a digital quantum simulator to solve a quantum impurity problem whose parameters are iterated to self-consistency via a classically computed feedback loop where quantum gate errors can be partly accounted for. We analyse the performance of the scheme in an example case. PMID:27609673
Fujisawa, Seiichiro; Kadoma, Yoshinori
2012-01-01
We investigated the quantitative structure-activity relationships between hemolytic activity (log 1/H(50)) or in vivo mouse intraperitoneal (ip) LD(50) using reported data for α,β-unsaturated carbonyl compounds such as (meth)acrylate monomers and their (13)C-NMR β-carbon chemical shift (δ). The log 1/H(50) value for methacrylates was linearly correlated with the δC(β) value. That for (meth)acrylates was linearly correlated with log P, an index of lipophilicity. The ipLD(50) for (meth)acrylates was linearly correlated with δC(β) but not with log P. For (meth)acrylates, the δC(β) value, which is dependent on the π-electron density on the β-carbon, was linearly correlated with PM3-based theoretical parameters (chemical hardness, η; electronegativity, χ; electrophilicity, ω), whereas log P was linearly correlated with heat of formation (HF). Also, the interaction between (meth)acrylates and DPPC liposomes in cell membrane molecular models was investigated using (1)H-NMR spectroscopy and differential scanning calorimetry (DSC). The log 1/H(50) value was related to the difference in chemical shift (ΔδHa) (Ha: H (trans) attached to the β-carbon) between the free monomer and the DPPC liposome-bound monomer. Monomer-induced DSC phase transition properties were related to HF for monomers. NMR chemical shifts may represent a valuable parameter for investigating the biological mechanisms of action of (meth)acrylates.
Fujisawa, Seiichiro; Kadoma, Yoshinori
2012-01-01
We investigated the quantitative structure-activity relationships between hemolytic activity (log 1/H50) or in vivo mouse intraperitoneal (ip) LD50 using reported data for α,β-unsaturated carbonyl compounds such as (meth)acrylate monomers and their 13C-NMR β-carbon chemical shift (δ). The log 1/H50 value for methacrylates was linearly correlated with the δCβ value. That for (meth)acrylates was linearly correlated with log P, an index of lipophilicity. The ipLD50 for (meth)acrylates was linearly correlated with δCβ but not with log P. For (meth)acrylates, the δCβ value, which is dependent on the π-electron density on the β-carbon, was linearly correlated with PM3-based theoretical parameters (chemical hardness, η; electronegativity, χ; electrophilicity, ω), whereas log P was linearly correlated with heat of formation (HF). Also, the interaction between (meth)acrylates and DPPC liposomes in cell membrane molecular models was investigated using 1H-NMR spectroscopy and differential scanning calorimetry (DSC). The log 1/H50 value was related to the difference in chemical shift (ΔδHa) (Ha: H (trans) attached to the β-carbon) between the free monomer and the DPPC liposome-bound monomer. Monomer-induced DSC phase transition properties were related to HF for monomers. NMR chemical shifts may represent a valuable parameter for investigating the biological mechanisms of action of (meth)acrylates. PMID:22312284
Reliability measures in item response theory: manifest versus latent correlation functions.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Verbeke, Geert; De Boeck, Paul
2015-02-01
For item response theory (IRT) models, which belong to the class of generalized linear or non-linear mixed models, reliability at the scale of observed scores (i.e., manifest correlation) is more difficult to calculate than latent correlation based reliability, but usually of greater scientific interest. This is not least because it cannot be calculated explicitly when the logit link is used in conjunction with normal random effects. As such, approximations such as Fisher's information coefficient, Cronbach's α, or the latent correlation are calculated, allegedly because it is easy to do so. Cronbach's α has well-known and serious drawbacks, Fisher's information is not meaningful under certain circumstances, and there is an important but often overlooked difference between latent and manifest correlations. Here, manifest correlation refers to correlation between observed scores, while latent correlation refers to correlation between scores at the latent (e.g., logit or probit) scale. Thus, using one in place of the other can lead to erroneous conclusions. Taylor series based reliability measures, which are based on manifest correlation functions, are derived and a careful comparison of reliability measures based on latent correlations, Fisher's information, and exact reliability is carried out. The latent correlations are virtually always considerably higher than their manifest counterparts, Fisher's information measure shows no coherent behaviour (it is even negative in some cases), while the newly introduced Taylor series based approximations reflect the exact reliability very closely. Comparisons among the various types of correlations, for various IRT models, are made using algebraic expressions, Monte Carlo simulations, and data analysis. Given the light computational burden and the performance of Taylor series based reliability measures, their use is recommended. © 2014 The British Psychological Society.
Lee, Kyung Hee; Kang, Seung Kwan; Goo, Jin Mo; Lee, Jae Sung; Cheon, Gi Jeong; Seo, Seongho; Hwang, Eui Jin
2017-03-01
To compare the relationship between K trans from DCE-MRI and K 1 from dynamic 13 N-NH 3 -PET, with simultaneous and separate MR/PET in the VX-2 rabbit carcinoma model. MR/PET was performed simultaneously and separately, 14 and 15 days after VX-2 tumor implantation at the paravertebral muscle. The K trans and K 1 values were estimated using an in-house software program. The relationships between K trans and K 1 were analyzed using Pearson's correlation coefficients and linear/non-linear regression function. Assuming a linear relationship, K trans and K 1 exhibited a moderate positive correlations with both simultaneous (r=0.54-0.57) and separate (r=0.53-0.69) imaging. However, while the K trans and K 1 from separate imaging were linearly correlated, those from simultaneous imaging exhibited a non-linear relationship. The amount of change in K 1 associated with a unit increase in K trans varied depending on K trans values. The relationship between K trans and K 1 may be mis-interpreted with separate MR and PET acquisition. Copyright© 2017, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.
Modeling turbidity and flow at daily steps in karst using ARIMA/ARFIMA-GARCH error models
NASA Astrophysics Data System (ADS)
Massei, N.
2013-12-01
Hydrological and physico-chemical variations recorded at karst springs usually reflect highly non-linear processes and the corresponding time series are then very often also highly non-linear. Among others, turbidity, as an important parameter regarding water quality and management, is a very complex response of karst systems to rain events, involving direct transfer of particles from point-source recharge as well as resuspension of particles previously deposited and stored within the system. For those reasons, turbidity modeling has not been well taken in karst hydrological models so far. Most of the time, the modeling approaches would involve stochastic linear models such ARIMA-type models and their derivatives (ARMA, ARMAX, ARIMAX, ARFIMA...). Yet, linear models usually fail to represent well the whole (stochastic) process variability, and their residuals still contain useful information that can be used to either understand the whole variability or to enhance short-term predictability and forecasting. Model residuals are actually not i.i.d., which can be identified by the fact that squared residuals still present clear and significant serial correlation. Indeed, high (low) amplitudes are followed in time by high (low) amplitudes, which can be seen on residuals time series as periods of time during which amplitudes are higher (lower) then the mean amplitude. This is known as the ARCH effet (AutoRegressive Conditional Heteroskedasticity), and the corresponding non-linear process affecting residuals of a linear model can be modeled using ARCH or generalized ARCH (GARCH) non-linear modeling, which approaches are very well known in econometrics. Here we investigated the capability of ARIMA-GARCH error models to represent a ~20-yr daily turbidity time series recorded at a karst spring used for water supply of the city of Le Havre (Upper Normandy, France). ARIMA and ARFIMA models were used to represent the mean behavior of the time series and the residuals clearly appeared to present a pronounced ARCH effect, as confirmed by Ljung-Box and McLeod-Li tests. We then identified and fitted GARCH models to the residuals of ARIMA and ARFIMA models in order to model the conditional variance and volatility of the turbidity time series. The results eventually showed that serial correlation was succesfully removed in the last standardized residuals of the GARCH model, and hence that the ARIMA-GARCH error model appeared consistent for modeling such time series. The approach finally improved short-term (e.g a few steps-ahead) turbidity forecasting.
Bivariate categorical data analysis using normal linear conditional multinomial probability model.
Sun, Bingrui; Sutradhar, Brajendra
2015-02-10
Bivariate multinomial data such as the left and right eyes retinopathy status data are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities, which are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this later odds ratio-based model does not provide any easy interpretations of the correlations between two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Abunama, Taher; Othman, Faridah
2017-06-01
Analysing the fluctuations of wastewater inflow rates in sewage treatment plants (STPs) is essential to guarantee a sufficient treatment of wastewater before discharging it to the environment. The main objectives of this study are to statistically analyze and forecast the wastewater inflow rates into the Bandar Tun Razak STP in Kuala Lumpur, Malaysia. A time series analysis of three years’ weekly influent data (156weeks) has been conducted using the Auto-Regressive Integrated Moving Average (ARIMA) model. Various combinations of ARIMA orders (p, d, q) have been tried to select the most fitted model, which was utilized to forecast the wastewater inflow rates. The linear regression analysis was applied to testify the correlation between the observed and predicted influents. ARIMA (3, 1, 3) model was selected with the highest significance R-square and lowest normalized Bayesian Information Criterion (BIC) value, and accordingly the wastewater inflow rates were forecasted to additional 52weeks. The linear regression analysis between the observed and predicted values of the wastewater inflow rates showed a positive linear correlation with a coefficient of 0.831.
Chen, Jinsong; Liu, Lei; Shih, Ya-Chen T; Zhang, Daowen; Severini, Thomas A
2016-03-15
We propose a flexible model for correlated medical cost data with several appealing features. First, the mean function is partially linear. Second, the distributional form for the response is not specified. Third, the covariance structure of correlated medical costs has a semiparametric form. We use extended generalized estimating equations to simultaneously estimate all parameters of interest. B-splines are used to estimate unknown functions, and a modification to Akaike information criterion is proposed for selecting knots in spline bases. We apply the model to correlated medical costs in the Medical Expenditure Panel Survey dataset. Simulation studies are conducted to assess the performance of our method. Copyright © 2015 John Wiley & Sons, Ltd.
Multilevel Correlates of Childhood Physical Aggression and Prosocial Behavior
ERIC Educational Resources Information Center
Romano, Elisa; Tremblay, Richard E.; Boulerice, Bernard; Swisher, Raymond
2005-01-01
The study identified independent individual, family, and neighborhood correlates of children's physical aggression and prosocial behavior. Participants were 2,745-11-year olds nested in 1,982 families, which were themselves nested in 96 Canadian neighborhoods. Hierarchical linear modeling showed that the total variation explained by the…
Yang, James J; Williams, L Keoki; Buu, Anne
2017-08-24
A multivariate genome-wide association test is proposed for analyzing data on multivariate quantitative phenotypes collected from related subjects. The proposed method is a two-step approach. The first step models the association between the genotype and marginal phenotype using a linear mixed model. The second step uses the correlation between residuals of the linear mixed model to estimate the null distribution of the Fisher combination test statistic. The simulation results show that the proposed method controls the type I error rate and is more powerful than the marginal tests across different population structures (admixed or non-admixed) and relatedness (related or independent). The statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that applying the multivariate association test may facilitate identification of the pleiotropic genes contributing to the risk for alcohol dependence commonly expressed by four correlated phenotypes. This study proposes a multivariate method for identifying pleiotropic genes while adjusting for cryptic relatedness and population structure between subjects. The two-step approach is not only powerful but also computationally efficient even when the number of subjects and the number of phenotypes are both very large.
Abrecht, David G; Schwantes, Jon M
2015-03-03
This paper extends the preliminary linear free energy correlations for radionuclide release performed by Schwantes et al., following the Fukushima-Daiichi Nuclear Power Plant accident. Through evaluations of the molar fractionations of radionuclides deposited in the soil relative to modeled radionuclide inventories, we confirm the initial source of the radionuclides to the environment to be from active reactors rather than the spent fuel pool. Linear correlations of the form In χ = −α ((ΔGrxn°(TC))/(RTC)) + β were obtained between the deposited concentrations, and the reduction potentials of the fission product oxide species using multiple reduction schemes to calculate ΔG°rxn (TC). These models allowed an estimate of the upper bound for the reactor temperatures of TC between 2015 and 2060 K, providing insight into the limiting factors to vaporization and release of fission products during the reactor accident. Estimates of the release of medium-lived fission products 90Sr, 121mSn, 147Pm, 144Ce, 152Eu, 154Eu, 155Eu, and 151Sm through atmospheric venting during the first month following the accident were obtained, indicating that large quantities of 90Sr and radioactive lanthanides were likely to remain in the damaged reactor cores.
Experimental demonstration of nonbilocal quantum correlations
Saunders, Dylan J.; Bennet, Adam J.; Branciard, Cyril; Pryde, Geoff J.
2017-01-01
Quantum mechanics admits correlations that cannot be explained by local realistic models. The most studied models are the standard local hidden variable models, which satisfy the well-known Bell inequalities. To date, most works have focused on bipartite entangled systems. We consider correlations between three parties connected via two independent entangled states. We investigate the new type of so-called “bilocal” models, which correspondingly involve two independent hidden variables. These models describe scenarios that naturally arise in quantum networks, where several independent entanglement sources are used. Using photonic qubits, we build such a linear three-node quantum network and demonstrate nonbilocal correlations by violating a Bell-like inequality tailored for bilocal models. Furthermore, we show that the demonstration of nonbilocality is more noise-tolerant than that of standard Bell nonlocality in our three-party quantum network. PMID:28508045
Testing the consistency of three-point halo clustering in Fourier and configuration space
NASA Astrophysics Data System (ADS)
Hoffmann, K.; Gaztañaga, E.; Scoccimarro, R.; Crocce, M.
2018-05-01
We compare reduced three-point correlations Q of matter, haloes (as proxies for galaxies) and their cross-correlations, measured in a total simulated volume of ˜100 (h-1 Gpc)3, to predictions from leading order perturbation theory on a large range of scales in configuration space. Predictions for haloes are based on the non-local bias model, employing linear (b1) and non-linear (c2, g2) bias parameters, which have been constrained previously from the bispectrum in Fourier space. We also study predictions from two other bias models, one local (g2 = 0) and one in which c2 and g2 are determined by b1 via approximately universal relations. Overall, measurements and predictions agree when Q is derived for triangles with (r1r2r3)1/3 ≳60 h-1 Mpc, where r1 - 3 are the sizes of the triangle legs. Predictions for Qmatter, based on the linear power spectrum, show significant deviations from the measurements at the BAO scale (given our small measurement errors), which strongly decrease when adding a damping term or using the non-linear power spectrum, as expected. Predictions for Qhalo agree best with measurements at large scales when considering non-local contributions. The universal bias model works well for haloes and might therefore be also useful for tightening constraints on b1 from Q in galaxy surveys. Such constraints are independent of the amplitude of matter density fluctuation (σ8) and hence break the degeneracy between b1 and σ8, present in galaxy two-point correlations.
Wavelet regression model in forecasting crude oil price
NASA Astrophysics Data System (ADS)
Hamid, Mohd Helmie; Shabri, Ani
2017-05-01
This study presents the performance of wavelet multiple linear regression (WMLR) technique in daily crude oil forecasting. WMLR model was developed by integrating the discrete wavelet transform (DWT) and multiple linear regression (MLR) model. The original time series was decomposed to sub-time series with different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series has been used in this study to test the prediction capability of the proposed model. The forecasting performance of WMLR model were also compared with regular multiple linear regression (MLR), Autoregressive Moving Average (ARIMA) and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) using root mean square errors (RMSE) and mean absolute errors (MAE). Based on the experimental results, it appears that the WMLR model performs better than the other forecasting technique tested in this study.
NASA Technical Reports Server (NTRS)
Pinho, Silvestre T.; Davila, C. G.; Camanho, P. P.; Iannucci, L.; Robinson, P.
2005-01-01
A set of three-dimensional failure criteria for laminated fiber-reinforced composites, denoted LaRC04, is proposed. The criteria are based on physical models for each failure mode and take into consideration non-linear matrix shear behaviour. The model for matrix compressive failure is based on the Mohr-Coulomb criterion and it predicts the fracture angle. Fiber kinking is triggered by an initial fiber misalignment angle and by the rotation of the fibers during compressive loading. The plane of fiber kinking is predicted by the model. LaRC04 consists of 6 expressions that can be used directly for design purposes. Several applications involving a broad range of load combinations are presented and compared to experimental data and other existing criteria. Predictions using LaRC04 correlate well with the experimental data, arguably better than most existing criteria. The good correlation seems to be attributable to the physical soundness of the underlying failure models.
Linear model for fast background subtraction in oligonucleotide microarrays.
Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico
2009-11-16
One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.
Li, Zhenghua; Cheng, Fansheng; Xia, Zhining
2011-01-01
The chemical structures of 114 polycyclic aromatic sulfur heterocycles (PASHs) have been studied by molecular electronegativity-distance vector (MEDV). The linear relationships between gas chromatographic retention index and the MEDV have been established by a multiple linear regression (MLR) model. The results of variable selection by stepwise multiple regression (SMR) and the powerful predictive abilities of the optimization model appraised by leave-one-out cross-validation showed that the optimization model with the correlation coefficient (R) of 0.994 7 and the cross-validated correlation coefficient (Rcv) of 0.994 0 possessed the best statistical quality. Furthermore, when the 114 PASHs compounds were divided into calibration and test sets in the ratio of 2:1, the statistical analysis showed our models possesses almost equal statistical quality, the very similar regression coefficients and the good robustness. The quantitative structure-retention relationship (QSRR) model established may provide a convenient and powerful method for predicting the gas chromatographic retention of PASHs.
Predicting musically induced emotions from physiological inputs: linear and neural network models.
Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M
2013-01-01
Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants-heart rate (HR), respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
Effects of linear trends on estimation of noise in GNSS position time-series
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dmitrieva, K.; Segall, P.; Bradley, A. M.
A thorough understanding of time-dependent noise in Global Navigation Satellite System (GNSS) position time-series is necessary for computing uncertainties in any signals found in the data. However, estimation of time-correlated noise is a challenging task and is complicated by the difficulty in separating noise from signal, the features of greatest interest in the time-series. In this study, we investigate how linear trends affect the estimation of noise in daily GNSS position time-series. We use synthetic time-series to study the relationship between linear trends and estimates of time-correlated noise for the six most commonly cited noise models. We find that themore » effects of added linear trends, or conversely de-trending, vary depending on the noise model. The commonly adopted model of random walk (RW), flicker noise (FN) and white noise (WN) is the most severely affected by de-trending, with estimates of low-amplitude RW most severely biased. FN plus WN is least affected by adding or removing trends. Non-integer power-law noise estimates are also less affected by de-trending, but are very sensitive to the addition of trend when the spectral index is less than one. We derive an analytical relationship between linear trends and the estimated RW variance for the special case of pure RW noise. Finally, overall, we find that to ascertain the correct noise model for GNSS position time-series and to estimate the correct noise parameters, it is important to have independent constraints on the actual trends in the data.« less
Vascular mechanics of the coronary artery
NASA Technical Reports Server (NTRS)
Veress, A. I.; Vince, D. G.; Anderson, P. M.; Cornhill, J. F.; Herderick, E. E.; Klingensmith, J. D.; Kuban, B. D.; Greenberg, N. L.; Thomas, J. D.
2000-01-01
This paper describes our research into the vascular mechanics of the coronary artery and plaque. The three sections describe the determination of arterial mechanical properties using intravascular ultrasound (IVUS), a constitutive relation for the arterial wall, and finite element method (FEM) models of the arterial wall and atheroma. METHODS: Inflation testing of porcine left anterior descending coronary arteries was conducted. The changes in the vessel geometry were monitored using IVUS, and intracoronary pressure was recorded using a pressure transducer. The creep and quasistatic stress/strain responses were determined. A Standard Linear Solid (SLS) was modified to reproduce the non-linear elastic behavior of the arterial wall. This Standard Non-linear Solid (SNS) was implemented into an axisymetric thick-walled cylinder numerical model. Finite element analysis models were created for five age groups and four levels of stenosis using the Pathobiological Determinants of Atherosclerosis Youth (PDAY) database. RESULTS: The arteries exhibited non-linear elastic behavior. The total tissue creep strain was epsilon creep = 0.082 +/- 0.018 mm/mm. The numerical model could reproduce both the non-linearity of the porcine data and time dependent behavior of the arterial wall found in the literature with a correlation coefficient of 0.985. Increasing age had a strong positive correlation with the shoulder stress level, (r = 0.95). The 30% stenosis had the highest shoulder stress due to the combination of a fully formed lipid pool and a thin cap. CONCLUSIONS: Studying the solid mechanics of the arterial wall and the atheroma provide important insights into the mechanisms involved in plaque rupture.
Effects of linear trends on estimation of noise in GNSS position time-series
NASA Astrophysics Data System (ADS)
Dmitrieva, K.; Segall, P.; Bradley, A. M.
2017-01-01
A thorough understanding of time-dependent noise in Global Navigation Satellite System (GNSS) position time-series is necessary for computing uncertainties in any signals found in the data. However, estimation of time-correlated noise is a challenging task and is complicated by the difficulty in separating noise from signal, the features of greatest interest in the time-series. In this paper, we investigate how linear trends affect the estimation of noise in daily GNSS position time-series. We use synthetic time-series to study the relationship between linear trends and estimates of time-correlated noise for the six most commonly cited noise models. We find that the effects of added linear trends, or conversely de-trending, vary depending on the noise model. The commonly adopted model of random walk (RW), flicker noise (FN) and white noise (WN) is the most severely affected by de-trending, with estimates of low-amplitude RW most severely biased. FN plus WN is least affected by adding or removing trends. Non-integer power-law noise estimates are also less affected by de-trending, but are very sensitive to the addition of trend when the spectral index is less than one. We derive an analytical relationship between linear trends and the estimated RW variance for the special case of pure RW noise. Overall, we find that to ascertain the correct noise model for GNSS position time-series and to estimate the correct noise parameters, it is important to have independent constraints on the actual trends in the data.
Effects of linear trends on estimation of noise in GNSS position time-series
Dmitrieva, K.; Segall, P.; Bradley, A. M.
2016-10-20
A thorough understanding of time-dependent noise in Global Navigation Satellite System (GNSS) position time-series is necessary for computing uncertainties in any signals found in the data. However, estimation of time-correlated noise is a challenging task and is complicated by the difficulty in separating noise from signal, the features of greatest interest in the time-series. In this study, we investigate how linear trends affect the estimation of noise in daily GNSS position time-series. We use synthetic time-series to study the relationship between linear trends and estimates of time-correlated noise for the six most commonly cited noise models. We find that themore » effects of added linear trends, or conversely de-trending, vary depending on the noise model. The commonly adopted model of random walk (RW), flicker noise (FN) and white noise (WN) is the most severely affected by de-trending, with estimates of low-amplitude RW most severely biased. FN plus WN is least affected by adding or removing trends. Non-integer power-law noise estimates are also less affected by de-trending, but are very sensitive to the addition of trend when the spectral index is less than one. We derive an analytical relationship between linear trends and the estimated RW variance for the special case of pure RW noise. Finally, overall, we find that to ascertain the correct noise model for GNSS position time-series and to estimate the correct noise parameters, it is important to have independent constraints on the actual trends in the data.« less
Desriac, N; Postollec, F; Coroller, L; Sohier, D; Abee, T; den Besten, H M W
2013-10-01
Exposure to mild stress conditions can activate stress adaptation mechanisms and provide cross-resistance towards otherwise lethal stresses. In this study, an approach was followed to select molecular biomarkers (quantitative gene expressions) to predict induced acid resistance after exposure to various mild stresses, i.e. exposure to sublethal concentrations of salt, acid and hydrogen peroxide during 5 min to 60 min. Gene expression patterns of unstressed and mildly stressed cells of Bacillus weihenstephanensis were correlated to their acid resistance (3D value) which was estimated after exposure to lethal acid conditions. Among the twenty-nine candidate biomarkers, 12 genes showed expression patterns that were correlated either linearly or non-linearly to acid resistance, while for the 17 other genes the correlation remains to be determined. The selected genes represented two types of biomarkers, (i) four direct biomarker genes (lexA, spxA, narL, bkdR) for which expression patterns upon mild stress treatment were linearly correlated to induced acid resistance; and (ii) nine long-acting biomarker genes (spxA, BcerKBAB4_0325, katA, trxB, codY, lacI, BcerKBAB4_1716, BcerKBAB4_2108, relA) which were transiently up-regulated during mild stress exposure and correlated to increased acid resistance over time. Our results highlight that mild stress induced transcripts can be linearly or non-linearly correlated to induced acid resistance and both approaches can be used to find relevant biomarkers. This quantitative and systematic approach opens avenues to select cellular biomarkers that could be incremented in mathematical models to predict microbial behaviour. Copyright © 2013 Elsevier B.V. All rights reserved.
Cosmological Constraints from Fourier Phase Statistics
NASA Astrophysics Data System (ADS)
Ali, Kamran; Obreschkow, Danail; Howlett, Cullan; Bonvin, Camille; Llinares, Claudio; Oliveira Franco, Felipe; Power, Chris
2018-06-01
Most statistical inference from cosmic large-scale structure relies on two-point statistics, i.e. on the galaxy-galaxy correlation function (2PCF) or the power spectrum. These statistics capture the full information encoded in the Fourier amplitudes of the galaxy density field but do not describe the Fourier phases of the field. Here, we quantify the information contained in the line correlation function (LCF), a three-point Fourier phase correlation function. Using cosmological simulations, we estimate the Fisher information (at redshift z = 0) of the 2PCF, LCF and their combination, regarding the cosmological parameters of the standard ΛCDM model, as well as a Warm Dark Matter (WDM) model and the f(R) and Symmetron modified gravity models. The galaxy bias is accounted for at the level of a linear bias. The relative information of the 2PCF and the LCF depends on the survey volume, sampling density (shot noise) and the bias uncertainty. For a volume of 1h^{-3}Gpc^3, sampled with points of mean density \\bar{n} = 2× 10^{-3} h3 Mpc^{-3} and a bias uncertainty of 13%, the LCF improves the parameter constraints by about 20% in the ΛCDM cosmology and potentially even more in alternative models. Finally, since a linear bias only affects the Fourier amplitudes (2PCF), but not the phases (LCF), the combination of the 2PCF and the LCF can be used to break the degeneracy between the linear bias and σ8, present in 2-point statistics.
NASA Technical Reports Server (NTRS)
Choudhury, B. J.; Owe, M.; Ormsby, J. P.; Chang, A. T. C.; Wang, J. R.; Goward, S. N.; Golus, R. E.
1987-01-01
Spatial and temporal variabilities of microwave brightness temperature over the U.S. Southern Great Plains are quantified in terms of vegetation and soil wetness. The brightness temperatures (TB) are the daytime observations from April to October for five years (1979 to 1983) obtained by the Nimbus-7 Scanning Multichannel Microwave Radiometer at 6.6 GHz frequency, horizontal polarization. The spatial and temporal variabilities of vegetation are assessed using visible and near-infrared observations by the NOAA-7 Advanced Very High Resolution Radiometer (AVHRR), while an Antecedent Precipitation Index (API) model is used for soil wetness. The API model was able to account for more than 50 percent of the observed variability in TB, although linear correlations between TB and API were generally significant at the 1 percent level. The slope of the linear regression between TB and API is found to correlate linearly with an index for vegetation density derived from AVHRR data.
Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach
Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao
2018-01-01
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach, and has several attractive features compared to the existing models such as bivariate generalized linear mixed model (Chu and Cole, 2006) and Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having closed-form expression of likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using currents methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
Previous modelling of the median lethal dose (oral rat LD50) has indicated that local class-based models yield better correlations than global models. We evaluated the hypothesis that dividing the dataset by pesticidal mechanisms would improve prediction accuracy. A linear discri...
What Can Causal Networks Tell Us about Metabolic Pathways?
Blair, Rachael Hageman; Kliebenstein, Daniel J.; Churchill, Gary A.
2012-01-01
Graphical models describe the linear correlation structure of data and have been used to establish causal relationships among phenotypes in genetic mapping populations. Data are typically collected at a single point in time. Biological processes on the other hand are often non-linear and display time varying dynamics. The extent to which graphical models can recapitulate the architecture of an underlying biological processes is not well understood. We consider metabolic networks with known stoichiometry to address the fundamental question: “What can causal networks tell us about metabolic pathways?”. Using data from an Arabidopsis BaySha population and simulated data from dynamic models of pathway motifs, we assess our ability to reconstruct metabolic pathways using graphical models. Our results highlight the necessity of non-genetic residual biological variation for reliable inference. Recovery of the ordering within a pathway is possible, but should not be expected. Causal inference is sensitive to subtle patterns in the correlation structure that may be driven by a variety of factors, which may not emphasize the substrate-product relationship. We illustrate the effects of metabolic pathway architecture, epistasis and stochastic variation on correlation structure and graphical model-derived networks. We conclude that graphical models should be interpreted cautiously, especially if the implied causal relationships are to be used in the design of intervention strategies. PMID:22496633
NASA Astrophysics Data System (ADS)
Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest
2017-12-01
The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. An inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov Chain Monte Carlo (MCMC) inversion algorithm, a full non-linear uncertainty analysis of the parameters and the parameter correlations can be accessed. This is essential to understand to what degree the spectral Cole-Cole parameters can be resolved from TDIP data. MCMC inversions of synthetic TDIP data, which show bell-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed and by decreasing the acquisitions ranges, the correlations increase and become non-linear. It is further investigated how waveform and parameter values influence the resolution of the Cole-Cole parameters. A limiting factor is the value of the frequency exponent, C. As C decreases, the resolution of all the Cole-Cole parameters decreases and the results become increasingly non-linear. While the values of the time constant, τ, must be in the acquisition range to resolve the parameters well, the choice between a 50 per cent and a 100 per cent duty cycle for the current injection does not have an influence on the parameter resolution. The limits of resolution and linearity are also studied in a comparison between the MCMC and a linearized gradient-based inversion approach. The two methods are consistent for resolved models, but the linearized approach tends to underestimate the uncertainties for poorly resolved parameters due to the corresponding non-linear features. Finally, an MCMC inversion of 1-D field data verifies that spectral Cole-Cole parameters can also be resolved from TD field measurements.
NASA Astrophysics Data System (ADS)
Angulo, Raul E.; Hilbert, Stefan
2015-03-01
We explore the cosmological constraints from cosmic shear using a new way of modelling the non-linear matter correlation functions. The new formalism extends the method of Angulo & White, which manipulates outputs of N-body simulations to represent the 3D non-linear mass distribution in different cosmological scenarios. We show that predictions from our approach for shear two-point correlations at 1-300 arcmin separations are accurate at the ˜10 per cent level, even for extreme changes in cosmology. For moderate changes, with target cosmologies similar to that preferred by analyses of recent Planck data, the accuracy is close to ˜5 per cent. We combine this approach with a Monte Carlo Markov chain sampler to explore constraints on a Λ cold dark matter model from the shear correlation functions measured in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We obtain constraints on the parameter combination σ8(Ωm/0.27)0.6 = 0.801 ± 0.028. Combined with results from cosmic microwave background data, we obtain marginalized constraints on σ8 = 0.81 ± 0.01 and Ωm = 0.29 ± 0.01. These results are statistically compatible with previous analyses, which supports the validity of our approach. We discuss the advantages of our method and the potential it offers, including a path to model in detail (i) the effects of baryons, (ii) high-order shear correlation functions, and (iii) galaxy-galaxy lensing, among others, in future high-precision cosmological analyses.
The Effect of Sample Size on Parametric and Nonparametric Factor Analytical Methods
ERIC Educational Resources Information Center
Kalkan, Ömür Kaya; Kelecioglu, Hülya
2016-01-01
Linear factor analysis models used to examine constructs underlying the responses are not very suitable for dichotomous or polytomous response formats. The associated problems cannot be eliminated by polychoric or tetrachoric correlations in place of the Pearson correlation. Therefore, we considered parameters obtained from the NOHARM and FACTOR…
Study of Reaction Forces in a Single Sided Linear Induction Motor (SLIM)
DOT National Transportation Integrated Search
1974-01-01
SLIM reaction forces were measured on a laboratory model having aluminum and aluminum-iron secondaries and the results were correlated with the theoretical forces derived for different idealized SLIM models. The first part of the report discusses wav...
Dual linear structured support vector machine tracking method via scale correlation filter
NASA Astrophysics Data System (ADS)
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on structured support vector machine (SVM) performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy of object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker comprised of a DLSSVM model and a scale correlation filter obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.
NASA Technical Reports Server (NTRS)
Prescod-Weinstein, Chanda; Afshordi, Niayesh
2011-01-01
Structure formation provides a strong test of any cosmic acceleration model because a successful dark energy model must not inhibit or overpredict the development of observed large-scale structures. Traditional approaches to studies of structure formation in the presence of dark energy or a modified gravity implement a modified Press-Schechter formalism, which relates the linear overdensities to the abundance of dark matter haloes at the same time. We critically examine the universality of the Press-Schechter formalism for different cosmologies, and show that the halo abundance is best correlated with spherical linear overdensity at 94% of collapse (or observation) time. We then extend this argument to ellipsoidal collapse (which decreases the fractional time of best correlation for small haloes), and show that our results agree with deviations from modified Press-Schechter formalism seen in simulated mass functions. This provides a novel universal prescription to measure linear density evolution, based on current and future observations of cluster (or dark matter) halo mass function. In particular, even observations of cluster abundance in a single epoch will constrain the entire history of linear growth of cosmological of perturbations.
NASA Technical Reports Server (NTRS)
Hackert, Eric C.; Busalacchi, Antonio J.
1997-01-01
The goal of this paper is to compare TOPEX/Posaidon (T/P) sea level with sea level results from linear ocean model experiments forced by several different wind products for the tropical Pacific. During the period of this study (October 1992 - October 1995), available wind products include satellite winds from the ERS-1 scatterometer product of [HALP 97] and the passive microwave analysis of SSMI winds produced using the variational analysis method (VAM) of [ATLA 91]. In addition, atmospheric GCM winds from the NCEP reanalysis [KALN 96], ECMWF analysis [ECMW94], and the Goddard EOS-1 (GEOS-1) reanalysis experiment [SCHU 93] are available for comparison. The observed ship wind analysis of FSU [STRI 92] is also included in this study. The linear model of [CANE 84] is used as a transfer function to test the quality of each of these wind products for the tropical Pacific. The various wind products are judged by comparing the wind-forced model sea level results against the T/P sea level anomalies. Correlation and RMS difference maps show how well each wind product does in reproducing the T/P sea level signal. These results are summarized in a table showing area average correlations and RMS differences. The large-scale low-frequency temporal signal is reproduced by all of the wind products, However, significant differences exist in both amplitude and phase on regional scales. In general, the model results forced by satellite winds do a better job reproducing the T/P signal (i.e. have a higher average correlation and lower RMS difference) than the results forced by atmospheric model winds.
Genomic selection for slaughter age in pigs using the Cox frailty model.
Santos, V S; Martins Filho, S; Resende, M D V; Azevedo, C F; Lopes, P S; Guimarães, S E F; Glória, L S; Silva, F F
2015-10-19
The aim of this study was to compare genomic selection methodologies using a linear mixed model and the Cox survival model. We used data from an F2 population of pigs, in which the response variable was the time in days from birth to the culling of the animal and the covariates were 238 markers [237 single nucleotide polymorphism (SNP) plus the halothane gene]. The data were corrected for fixed effects, and the accuracy of the method was determined based on the correlation of the ranks of predicted genomic breeding values (GBVs) in both models with the corrected phenotypic values. The analysis was repeated with a subset of SNP markers with largest absolute effects. The results were in agreement with the GBV prediction and the estimation of marker effects for both models for uncensored data and for normality. However, when considering censored data, the Cox model with a normal random effect (S1) was more appropriate. Since there was no agreement between the linear mixed model and the imputed data (L2) for the prediction of genomic values and the estimation of marker effects, the model S1 was considered superior as it took into account the latent variable and the censored data. Marker selection increased correlations between the ranks of predicted GBVs by the linear and Cox frailty models and the corrected phenotypic values, and 120 markers were required to increase the predictive ability for the characteristic analyzed.
Mathur, Praveen; Sharma, Sarita; Soni, Bhupendra
2010-01-01
In the present work, an attempt is made to formulate multiple regression equations using all possible regressions method for groundwater quality assessment of Ajmer-Pushkar railway line region in pre- and post-monsoon seasons. Correlation studies revealed the existence of linear relationships (r 0.7) for electrical conductivity (EC), total hardness (TH) and total dissolved solids (TDS) with other water quality parameters. The highest correlation was found between EC and TDS (r = 0.973). EC showed highly significant positive correlation with Na, K, Cl, TDS and total solids (TS). TH showed highest correlation with Ca and Mg. TDS showed significant correlation with Na, K, SO4, PO4 and Cl. The study indicated that most of the contamination present was water soluble or ionic in nature. Mg was present as MgCl2; K mainly as KCl and K2SO4, and Na was present as the salts of Cl, SO4 and PO4. On the other hand, F and NO3 showed no significant correlations. The r2 values and F values (at 95% confidence limit, alpha = 0.05) for the modelled equations indicated high degree of linearity among independent and dependent variables. Also the error % between calculated and experimental values was contained within +/- 15% limit.
Understanding Coupling of Global and Diffuse Solar Radiation with Climatic Variability
NASA Astrophysics Data System (ADS)
Hamdan, Lubna
Global solar radiation data is very important for wide variety of applications and scientific studies. However, this data is not readily available because of the cost of measuring equipment and the tedious maintenance and calibration requirements. Wide variety of models have been introduced by researchers to estimate and/or predict the global solar radiations and its components (direct and diffuse radiation) using other readily obtainable atmospheric parameters. The goal of this research is to understand the coupling of global and diffuse solar radiation with climatic variability, by investigating the relationships between these radiations and atmospheric parameters. For this purpose, we applied multilinear regression analysis on the data of National Solar Radiation Database 1991--2010 Update. The analysis showed that the main atmospheric parameters that affect the amount of global radiation received on earth's surface are cloud cover and relative humidity. Global radiation correlates negatively with both variables. Linear models are excellent approximations for the relationship between atmospheric parameters and global radiation. A linear model with the predictors total cloud cover, relative humidity, and extraterrestrial radiation is able to explain around 98% of the variability in global radiation. For diffuse radiation, the analysis showed that the main atmospheric parameters that affect the amount received on earth's surface are cloud cover and aerosol optical depth. Diffuse radiation correlates positively with both variables. Linear models are very good approximations for the relationship between atmospheric parameters and diffuse radiation. A linear model with the predictors total cloud cover, aerosol optical depth, and extraterrestrial radiation is able to explain around 91% of the variability in diffuse radiation. Prediction analysis showed that the linear models we fitted were able to predict diffuse radiation with efficiency of test adjusted R2 values equal to 0.93, using the data of total cloud cover, aerosol optical depth, relative humidity and extraterrestrial radiation. However, for prediction purposes, using nonlinear terms or nonlinear models might enhance the prediction of diffuse radiation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Hyekyun
Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation ofmore » the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-square estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model). In the on-the-fly simulations, prior projections over more than 60° appear to be necessary for reliable estimations. The mean 3D RMSE during beam delivery after the simple linear model had established with a prior 90° projection data was 0.42 mm for VMAT and 0.45 mm for IMRT. Conclusions: The proposed method does not require any internal/external correlation or statistical modeling to estimate the target trajectory and can be used for both retrospective image-guided radiotherapy with CBCT projection images and real-time target position monitoring for respiratory gating or tracking.« less
NASA Astrophysics Data System (ADS)
Barry, J. H.; Muttalib, K. A.; Tanaka, T.
2008-01-01
We consider a two-dimensional (d=2) kagomé lattice gas model with attractive three-particle interactions around each triangular face of the kagomé lattice. Exact solutions are obtained for multiparticle correlations along the liquid and vapor branches of the coexistence curve and at criticality. The correlation solutions are also determined along the continuation of the curvilinear diameter of the coexistence region into the disordered fluid region. The method generates a linear algebraic system of correlation identities with coefficients dependent only upon the interaction parameter. Using a priori knowledge of pertinent solutions for the density and elementary triplet correlation, one finds a closed and linearly independent set of correlation identities defined upon a spatially compact nine-site cluster of the kagomé lattice. Resulting exact solution curves of the correlations are plotted and discussed as functions of the temperature and are compared with corresponding results in a traditional kagomé lattice gas having nearest-neighbor pair interactions. An example of application for the multiparticle correlations is demonstrated in cavitation theory.
Characterizing multivariate decoding models based on correlated EEG spectral features
McFarland, Dennis J.
2013-01-01
Objective Multivariate decoding methods are popular techniques for analysis of neurophysiological data. The present study explored potential interpretative problems with these techniques when predictors are correlated. Methods Data from sensorimotor rhythm-based cursor control experiments was analyzed offline with linear univariate and multivariate models. Features were derived from autoregressive (AR) spectral analysis of varying model order which produced predictors that varied in their degree of correlation (i.e., multicollinearity). Results The use of multivariate regression models resulted in much better prediction of target position as compared to univariate regression models. However, with lower order AR features interpretation of the spectral patterns of the weights was difficult. This is likely to be due to the high degree of multicollinearity present with lower order AR features. Conclusions Care should be exercised when interpreting the pattern of weights of multivariate models with correlated predictors. Comparison with univariate statistics is advisable. Significance While multivariate decoding algorithms are very useful for prediction their utility for interpretation may be limited when predictors are correlated. PMID:23466267
Classical Michaelis-Menten and system theory approach to modeling metabolite formation kinetics.
Popović, Jovan
2004-01-01
When single doses of drug are administered and kinetics are linear, techniques, which are based on the compartment approach and the linear system theory approach, in modeling the formation of the metabolite from the parent drug are proposed. Unlike the purpose-specific compartment approach, the methodical, conceptual and computational uniformity in modeling various linear biomedical systems is the dominant characteristic of the linear system approach technology. Saturation of the metabolic reaction results in nonlinear kinetics according to the Michaelis-Menten equation. The two compartment open model with Michaelis-Menten elimination kinetics is theorethicaly basic when single doses of drug are administered. To simulate data or to fit real data using this model, one must resort to numerical integration. A biomathematical model for multiple dosage regimen calculations of nonlinear metabolic systems in steady-state and a working example with phenytoin are presented. High correlation between phenytoin steady-state serum levels calculated from individual Km and Vmax values in the 15 adult epileptic outpatients and the observed levels at the third adjustment of phenytoin daily dose (r=0.961, p<0.01) were found.
Masurel, R J; Gelineau, P; Lequeux, F; Cantournet, S; Montes, H
2017-12-27
In this paper we focus on the role of dynamical heterogeneities on the non-linear response of polymers in the glass transition domain. We start from a simple coarse-grained model that assumes a random distribution of the initial local relaxation times and that quantitatively describes the linear viscoelasticity of a polymer in the glass transition regime. We extend this model to non-linear mechanics assuming a local Eyring stress dependence of the relaxation times. Implementing the model in a finite element mechanics code, we derive the mechanical properties and the local mechanical fields at the beginning of the non-linear regime. The model predicts a narrowing of distribution of relaxation times and the storage of a part of the mechanical energy --internal stress-- transferred to the material during stretching in this temperature range. We show that the stress field is not spatially correlated under and after loading and follows a Gaussian distribution. In addition the strain field exhibits shear bands, but the strain distribution is narrow. Hence, most of the mechanical quantities can be calculated analytically, in a very good approximation, with the simple assumption that the strain rate is constant.
Cognitive flexibility correlates with gambling severity in young adults.
Leppink, Eric W; Redden, Sarah A; Chamberlain, Samuel R; Grant, Jon E
2016-10-01
Although gambling disorder (GD) is often characterized as a problem of impulsivity, compulsivity has recently been proposed as a potentially important feature of addictive disorders. The present analysis assessed the neurocognitive and clinical relationship between compulsivity on gambling behavior. A sample of 552 non-treatment seeking gamblers age 18-29 was recruited from the community for a study on gambling in young adults. Gambling severity levels included both casual and disordered gamblers. All participants completed the Intra/Extra-Dimensional Set Shift (IED) task, from which the total adjusted errors were correlated with gambling severity measures, and linear regression modeling was used to assess three error measures from the task. The present analysis found significant positive correlations between problems with cognitive flexibility and gambling severity (reflected by the number of DSM-5 criteria, gambling frequency, amount of money lost in the past year, and gambling urge/behavior severity). IED errors also showed a positive correlation with self-reported compulsive behavior scores. A significant correlation was also found between IED errors and non-planning impulsivity from the BIS. Linear regression models based on total IED errors, extra-dimensional (ED) shift errors, or pre-ED shift errors indicated that these factors accounted for a significant portion of the variance noted in several variables. These findings suggest that cognitive flexibility may be an important consideration in the assessment of gamblers. Results from correlational and linear regression analyses support this possibility, but the exact contributions of both impulsivity and cognitive flexibility remain entangled. Future studies will ideally be able to assess the longitudinal relationships between gambling, compulsivity, and impulsivity, helping to clarify the relative contributions of both impulsive and compulsive features. Copyright © 2016 Elsevier Ltd. All rights reserved.
Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario
2015-01-01
Bivariate linear and generalized linear random effects are frequently used to perform a diagnostic meta-analysis. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used. The latter model is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example. Both classes show high sensitivity but mainly two different levels of specificity. For the procalcitonin example, this approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities are quite different as such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to model between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.
Hou, Tingjun; Xu, Xiaojie
2002-12-01
In this study, the relationships between the brain-blood concentration ratio of 96 structurally diverse compounds with a large number of structurally derived descriptors were investigated. The linear models were based on molecular descriptors that can be calculated for any compound simply from a knowledge of its molecular structure. The linear correlation coefficients of the models were optimized by genetic algorithms (GAs), and the descriptors used in the linear models were automatically selected from 27 structurally derived descriptors. The GA optimizations resulted in a group of linear models with three or four molecular descriptors with good statistical significance. The change of descriptor use as the evolution proceeds demonstrates that the octane/water partition coefficient and the partial negative solvent-accessible surface area multiplied by the negative charge are crucial to brain-blood barrier permeability. Moreover, we found that the predictions using multiple QSPR models from GA optimization gave quite good results in spite of the diversity of structures, which was better than the predictions using the best single model. The predictions for the two external sets with 37 diverse compounds using multiple QSPR models indicate that the best linear models with four descriptors are sufficiently effective for predictive use. Considering the ease of computation of the descriptors, the linear models may be used as general utilities to screen the blood-brain barrier partitioning of drugs in a high-throughput fashion.
Understanding the determinants of volatility clustering in terms of stationary Markovian processes
NASA Astrophysics Data System (ADS)
Miccichè, S.
2016-11-01
Volatility is a key variable in the modeling of financial markets. The most striking feature of volatility is that it is a long-range correlated stochastic variable, i.e. its autocorrelation function decays like a power-law τ-β for large time lags. In the present work we investigate the determinants of such feature, starting from the empirical observation that the exponent β of a certain stock's volatility is a linear function of the average correlation of such stock's volatility with all other volatilities. We propose a simple approach consisting in diagonalizing the cross-correlation matrix of volatilities and investigating whether or not the diagonalized volatilities still keep some of the original volatility stylized facts. As a result, the diagonalized volatilities result to share with the original volatilities either the power-law decay of the probability density function and the power-law decay of the autocorrelation function. This would indicate that volatility clustering is already present in the diagonalized un-correlated volatilities. We therefore present a parsimonious univariate model based on a non-linear Langevin equation that well reproduces these two stylized facts of volatility. The model helps us in understanding that the main source of volatility clustering, once volatilities have been diagonalized, is that the economic forces driving volatility can be modeled in terms of a Smoluchowski potential with logarithmic tails.
Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally
2018-02-01
1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. Area under plasma concentration versus time curve (AUC inf ) of dalbavancin is a key parameter and AUC inf /MIC ratio is a critical pharmacodynamic marker. 2. Using end of intravenous infusion concentration (i.e. C max ) C max versus AUC inf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. The predictions of the AUC inf were performed using published C max data by application of regression equations. The quotient of observed/predicted values rendered fold difference. The mean absolute error (MAE)/root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. The C max versus AUC inf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference) with a RMSE < 10.3%. The external data evaluation showed that the models predicted AUC inf with a RMSE of 3.02-27.46% with fold difference largely contained within 0.64-1.48. 5. Regardless of the regression models, a single time point strategy of using C max (i.e. end of 30-min infusion) is amenable as a prospective tool for predicting AUC inf of dalbavancin in patients.
Theory of correlation in a network with synaptic depression
NASA Astrophysics Data System (ADS)
Igarashi, Yasuhiko; Oizumi, Masafumi; Okada, Masato
2012-01-01
Synaptic depression affects not only the mean responses of neurons but also the correlation of response variability in neural populations. Although previous studies have constructed a theory of correlation in a spiking neuron model by using the mean-field theory framework, synaptic depression has not been taken into consideration. We expanded the previous theoretical framework in this study to spiking neuron models with short-term synaptic depression. On the basis of this theory we analytically calculated neural correlations in a ring attractor network with Mexican-hat-type connectivity, which was used as a model of the primary visual cortex. The results revealed that synaptic depression reduces neural correlation, which could be beneficial for sensory coding. Furthermore, our study opens the way for theoretical studies on the effect of interaction change on the linear response function in large stochastic networks.
NASA Technical Reports Server (NTRS)
Moisan, John R.; Moisan, Tiffany A. H.; Linkswiler, Matthew A.
2011-01-01
Phytoplankton absorption spectra and High-Performance Liquid Chromatography (HPLC) pigment observations from the Eastern U.S. and global observations from NASA's SeaBASS archive are used in a linear inverse calculation to extract pigment-specific absorption spectra. Using these pigment-specific absorption spectra to reconstruct the phytoplankton absorption spectra results in high correlations at all visible wavelengths (r(sup 2) from 0.83 to 0.98), and linear regressions (slopes ranging from 0.8 to 1.1). Higher correlations (r(sup 2) from 0.75 to 1.00) are obtained in the visible portion of the spectra when the total phytoplankton absorption spectra are unpackaged by multiplying the entire spectra by a factor that sets the total absorption at 675 nm to that expected from absorption spectra reconstruction using measured pigment concentrations and laboratory-derived pigment-specific absorption spectra. The derived pigment-specific absorption spectra were further used with the total phytoplankton absorption spectra in a second linear inverse calculation to estimate the various phytoplankton HPLC pigments. A comparison between the estimated and measured pigment concentrations for the 18 pigment fields showed good correlations (r(sup 2) greater than 0.5) for 7 pigments and very good correlations (r(sup 2) greater than 0.7) for chlorophyll a and fucoxanthin. Higher correlations result when the analysis is carried out at more local geographic scales. The ability to estimate phytoplankton pigments using pigment-specific absorption spectra is critical for using hyperspectral inverse models to retrieve phytoplankton pigment concentrations and other Inherent Optical Properties (IOPs) from passive remote sensing observations.
Pinto, Luciano Moreira; Costa, Elaine Fiod; Melo, Luiz Alberto S; Gross, Paula Blasco; Sato, Eduardo Toshio; Almeida, Andrea Pereira; Maia, Andre; Paranhos, Augusto
2014-04-10
We examined the structure-function relationship between two perimetric tests, the frequency doubling technology (FDT) matrix and standard automated perimetry (SAP), and two optical coherence tomography (OCT) devices (time-domain and spectral-domain). This cross-sectional study included 97 eyes from 29 healthy individuals, and 68 individuals with early, moderate, or advanced primary open-angle glaucoma. The correlations between overall and sectorial parameters of retinal nerve fiber layer thickness (RNFL) measured with Stratus and Spectralis OCT, and the visual field sensitivity obtained with FDT matrix and SAP were assessed. The relationship also was evaluated using a previously described linear model. The correlation coefficients for the threshold sensitivity measured with SAP and Stratus OCT ranged from 0.44 to 0.79, and those for Spectralis OCT ranged from 0.30 to 0.75. Regarding FDT matrix, the correlation ranged from 0.40 to 0.79 with Stratus OCT and from 0.39 to 0.79 with Spectralis OCT. Stronger correlations were found in the overall measurements and the arcuate sectors for both visual fields and OCT devices. A linear relationship was observed between FDT matrix sensitivity and the OCT devices. The previously described linear model fit the data from SAP and the OCT devices well, particularly in the inferotemporal sector. The FDT matrix and SAP visual sensitivities were related strongly to the RNFL thickness measured with the Stratus and Spectralis OCT devices, particularly in the overall and arcuate sectors. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
Linear Regression between CIE-Lab Color Parameters and Organic Matter in Soils of Tea Plantations
NASA Astrophysics Data System (ADS)
Chen, Yonggen; Zhang, Min; Fan, Dongmei; Fan, Kai; Wang, Xiaochang
2018-02-01
To quantify the relationship between the soil organic matter and color parameters using the CIE-Lab system, 62 soil samples (0-10 cm, Ferralic Acrisols) from tea plantations were collected from southern China. After air-drying and sieving, numerical color information and reflectance spectra of soil samples were measured under laboratory conditions using an UltraScan VIS (HunterLab) spectrophotometer equipped with CIE-Lab color models. We found that soil total organic carbon (TOC) and nitrogen (TN) contents were negatively correlated with the L* value (lightness) ( r = -0.84 and -0.80, respectively), a* value (correlation coefficient r = -0.51 and -0.46, respectively) and b* value ( r = -0.76 and -0.70, respectively). There were also linear regressions between TOC and TN contents with the L* value and b* value. Results showed that color parameters from a spectrophotometer equipped with CIE-Lab color models can predict TOC contents well for soils in tea plantations. The linear regression model between color values and soil organic carbon contents showed it can be used as a rapid, cost-effective method to evaluate content of soil organic matter in Chinese tea plantations.
Benchmarking a Soil Moisture Data Assimilation System for Agricultural Drought Monitoring
NASA Technical Reports Server (NTRS)
Hun, Eunjin; Crow, Wade T.; Holmes, Thomas; Bolten, John
2014-01-01
Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this need, this paper evaluates an LDAS for agricultural drought monitoring by benchmarking individual components of the system (i.e., a satellite soil moisture retrieval algorithm, a soil water balance model and a sequential data assimilation filter) against a series of linear models which perform the same function (i.e., have the same basic inputoutput structure) as the full system component. Benchmarking is based on the calculation of the lagged rank cross-correlation between the normalized difference vegetation index (NDVI) and soil moisture estimates acquired for various components of the system. Lagged soil moistureNDVI correlations obtained using individual LDAS components versus their linear analogs reveal the degree to which non-linearities andor complexities contained within each component actually contribute to the performance of the LDAS system as a whole. Here, a particular system based on surface soil moisture retrievals from the Land Parameter Retrieval Model (LPRM), a two-layer Palmer soil water balance model and an Ensemble Kalman filter (EnKF) is benchmarked. Results suggest significant room for improvement in each component of the system.
Characterizing subcritical assemblies with time of flight fixed by energy estimation distributions
NASA Astrophysics Data System (ADS)
Monterial, Mateusz; Marleau, Peter; Pozzi, Sara
2018-04-01
We present the Time of Flight Fixed by Energy Estimation (TOFFEE) as a measure of the fission chain dynamics in subcritical assemblies. TOFFEE is the time between correlated gamma rays and neutrons, subtracted by the estimated travel time of the incident neutron from its proton recoil. The measured subcritical assembly was the BeRP ball, a 4.482 kg sphere of α-phase weapons grade plutonium metal, which came in five configurations: bare, 0.5, 1, and 1.5 in iron, and 1 in nickel closed fitting shell reflectors. We extend the measurement with MCNPX-PoliMi simulations of shells ranging up to 6 inches in thickness, and two new reflector materials: aluminum and tungsten. We also simulated the BeRP ball with different masses ranging from 1 to 8 kg. A two-region and single-region point kinetics models were used to model the behavior of the positive side of the TOFFEE distribution from 0 to 100 ns. The single region model of the bare cases gave positive linear correlations between estimated and expected neutron decay constants and leakage multiplications. The two-region model provided a way to estimate neutron multiplication for the reflected cases, which correlated positively with expected multiplication, but the nature of the correlation (sub or superlinear) changed between material types. Finally, we found that the areal density of the reflector shells had a linear correlation with the integral of the two-region model fit. Therefore, we expect that with knowledge of reflector composition, one could determine the shell thickness, or vice versa. Furthermore, up to a certain amount and thickness of the reflector, the two-region model provides a way of distinguishing bare and reflected plutonium assemblies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abrecht, David G.; Schwantes, Jon M.
This paper extends the preliminary linear free energy correlations for radionuclide release performed by Schwantes, et al., following the Fukushima-Daiichi Nuclear Power Plant accident. Through evaluations of the molar fractionations of radionuclides deposited in the soil relative to modeled radionuclide inventories, we confirm the source of the radionuclides to be from active reactors rather than the spent fuel pool. Linear correlations of the form ln χ = -α (ΔG rxn°(T C))/(RT C)+β were obtained between the deposited concentration and the reduction potential of the fission product oxide species using multiple reduction schemes to calculate ΔG° rxn(T C). These models allowedmore » an estimate of the upper bound for the reactor temperatures of T C between 2130 K and 2220 K, providing insight into the limiting factors to vaporization and release of fission products during the reactor accident. Estimates of the release of medium-lived fission products 90Sr, 121mSn, 147Pm, 144Ce, 152Eu, 154Eu, 155Eu, 151Sm through atmospheric venting and releases during the first month following the accident were performed, and indicate large quantities of 90Sr and radioactive lanthanides were likely to remain in the damaged reactor cores.« less
Method of Individual Adjustment for 3D CT Analysis: Linear Measurement.
Kim, Dong Kyu; Choi, Dong Hun; Lee, Jeong Woo; Yang, Jung Dug; Chung, Ho Yun; Cho, Byung Chae; Choi, Kang Young
2016-01-01
Introduction . We aim to regularize measurement values in three-dimensional (3D) computed tomography (CT) reconstructed images for higher-precision 3D analysis, focusing on length-based 3D cephalometric examinations. Methods . We measure the linear distances between points on different skull models using Vernier calipers (real values). We use 10 differently tilted CT scans for 3D CT reconstruction of the models and measure the same linear distances from the picture archiving and communication system (PACS). In both cases, each measurement is performed three times by three doctors, yielding nine measurements. The real values are compared with the PACS values. Each PACS measurement is revised based on the display field of view (DFOV) values and compared with the real values. Results . The real values and the PACS measurement changes according to tilt value have no significant correlations ( p > 0.05). However, significant correlations appear between the real values and DFOV-adjusted PACS measurements ( p < 0.001). Hence, we obtain a correlation expression that can yield real physical values from PACS measurements. The DFOV value intervals for various age groups are also verified. Conclusion . Precise confirmation of individual preoperative length and precise analysis of postoperative improvements through 3D analysis is possible, which is helpful for facial-bone-surgery symmetry correction.
De Benedetti, Pier G; Fanelli, Francesca
2018-03-21
Simple comparative correlation analyses and quantitative structure-kinetics relationship (QSKR) models highlight the interplay of kinetic rates and binding affinity as an essential feature in drug design and discovery. The choice of the molecular series, and their structural variations, used in QSKR modeling is fundamental to understanding the mechanistic implications of ligand and/or drug-target binding and/or unbinding processes. Here, we discuss the implications of linear correlations between kinetic rates and binding affinity constants and the relevance of the computational approaches to QSKR modeling. Copyright © 2018 Elsevier Ltd. All rights reserved.
Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.
Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao
2016-01-15
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with the existing models such as bivariate generalized linear mixed model (Chu and Cole, 2006) and Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having closed-form expression of likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using currents methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
Preliminary Survey on TRY Forest Traits and Growth Index Relations - New Challenges
NASA Astrophysics Data System (ADS)
Lyubenova, Mariyana; Kattge, Jens; van Bodegom, Peter; Chikalanov, Alexandre; Popova, Silvia; Zlateva, Plamena; Peteva, Simona
2016-04-01
Forest ecosystems provide critical ecosystem goods and services, including food, fodder, water, shelter, nutrient cycling, and cultural and recreational value. Forests also store carbon, provide habitat for a wide range of species and help alleviate land degradation and desertification. Thus they have a potentially significant role to play in climate change adaptation planning through maintaining ecosystem services and providing livelihood options. Therefore the study of forest traits is such an important issue not just for individual countries but for the planet as a whole. We need to know what functional relations between forest traits exactly can express TRY data base and haw it will be significant for the global modeling and IPBES. The study of the biodiversity characteristics at all levels and functional links between them is extremely important for the selection of key indicators for assessing biodiversity and ecosystem services for sustainable natural capital control. By comparing the available information in tree data bases: TRY, ITR (International Tree Ring) and SP-PAM the 42 tree species are selected for the traits analyses. The dependence between location characteristics (latitude, longitude, altitude, annual precipitation, annual temperature and soil type) and forest traits (specific leaf area, leaf weight ratio, wood density and growth index) is studied by by multiply regression analyses (RDA) using the statistical software package Canoco 4.5. The Pearson correlation coefficient (measure of linear correlation), Kendal rank correlation coefficient (non parametric measure of statistical dependence) and Spearman correlation coefficient (monotonic function relationship between two variables) are calculated for each pair of variables (indexes) and species. After analysis of above mentioned correlation coefficients the dimensional linear regression models, multidimensional linear and nonlinear regression models and multidimensional neural networks models are built. The strongest dependence between It and WD was obtained. The research will support the work on: Strategic Plan for Biodiversity 2011-2020, modelling and implementation of ecosystem-based approaches to climate change adaptation and disaster risk reduction. Key words: Specific leaf area (SLA), Leaf weight ratio (LWR), Wood density (WD), Growth index (It)
Correlation Of Deviance In Arterial Oxygenation With Severity Of Chronic Liver Disease.
Shaukat, Al-Aman; Zar, Adnan; Zuhaid, Muhammad; Afridi, Safa Saadat
2016-01-01
Hepatitis B and C related chronic liver diseases have led to development of a serious threat to the people of South Asia. The main aim of this study was to evaluate the correlation of magnitude of arterial deoxygention to the severity of liver disease. It was a hospital based cross-sectional descriptive study, carried out in the Medical Department of Khyber Teaching Hospital Peshawar. All in all 115 patients were assessed for the severity of the liver diseases and were correlated with arterial deoxygenation using linear regression models. Male to female ratio was 1.5:1. Males infected with hepatitis B, hepatitis C and both were 9, 60 and 1, while females suffered from hepatitis B, Hepatitis C and both were 2, 42 and 1 respectively. The linear relationship between A-a DO2 with severity of liver disease showed positive correlation while PO2 showed negative correlation with severity of liver disease. There was a positive correlation between A-a DO2 and severity of liver diseases while PO2 and severity of liver diseases showed negative correlation.
Data analytics using canonical correlation analysis and Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles
2017-07-01
A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively few combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte-Carlo based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural decriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low. It is therefore likely random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariance imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25 % in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100 % and standard error biases up to 200 % may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, be actually measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
Generalized linear mixed models with varying coefficients for longitudinal data.
Zhang, Daowen
2004-03-01
The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.
Women's Endorsement of Models of Sexual Response: Correlates and Predictors.
Nowosielski, Krzysztof; Wróbel, Beata; Kowalczyk, Robert
2016-02-01
Few studies have investigated endorsement of female sexual response models, and no single model has been accepted as a normative description of women's sexual response. The aim of the study was to establish how women from a population-based sample endorse current theoretical models of the female sexual response--the linear models and circular model (partial and composite Basson models)--as well as predictors of endorsement. Accordingly, 174 heterosexual women aged 18-55 years were included in a cross-sectional study: 74 women diagnosed with female sexual dysfunction (FSD) based on DSM-5 criteria and 100 non-dysfunctional women. The description of sexual response models was used to divide subjects into four subgroups: linear (Masters-Johnson and Kaplan models), circular (partial Basson model), mixed (linear and circular models in similar proportions, reflective of the composite Basson model), and a different model. Women were asked to choose which of the models best described their pattern of sexual response and how frequently they engaged in each model. Results showed that 28.7% of women endorsed the linear models, 19.5% the partial Basson model, 40.8% the composite Basson model, and 10.9% a different model. Women with FSD endorsed the partial Basson model and a different model more frequently than did non-dysfunctional controls. Individuals who were dissatisfied with a partner as a lover were more likely to endorse a different model. Based on the results, we concluded that the majority of women endorsed a mixed model combining the circular response with the possibility of an innate desire triggering a linear response. Further, relationship difficulties, not FSD, predicted model endorsement.
Evaluation of electrical impedance ratio measurements in accuracy of electronic apex locators.
Kim, Pil-Jong; Kim, Hong-Gee; Cho, Byeong-Hoon
2015-05-01
The aim of this paper was evaluating the ratios of electrical impedance measurements reported in previous studies through a correlation analysis in order to explicit it as the contributing factor to the accuracy of electronic apex locator (EAL). The literature regarding electrical property measurements of EALs was screened using Medline and Embase. All data acquired were plotted to identify correlations between impedance and log-scaled frequency. The accuracy of the impedance ratio method used to detect the apical constriction (APC) in most EALs was evaluated using linear ramp function fitting. Changes of impedance ratios for various frequencies were evaluated for a variety of file positions. Among the ten papers selected in the search process, the first-order equations between log-scaled frequency and impedance were in the negative direction. When the model for the ratios was assumed to be a linear ramp function, the ratio values decreased if the file went deeper and the average ratio values of the left and right horizontal zones were significantly different in 8 out of 9 studies. The APC was located within the interval of linear relation between the left and right horizontal zones of the linear ramp model. Using the ratio method, the APC was located within a linear interval. Therefore, using the impedance ratio between electrical impedance measurements at different frequencies was a robust method for detection of the APC.
McKellar, Robin C
2008-01-15
Developing accurate mathematical models to describe the pre-exponential lag phase in food-borne pathogens presents a considerable challenge to food microbiologists. While the growth rate is influenced by current environmental conditions, the lag phase is affected in addition by the history of the inoculum. A deeper understanding of physiological changes taking place during the lag phase would improve accuracy of models, and in earlier studies a strain of Pseudomonas fluorescens containing the Tn7-luxCDABE gene cassette regulated by the rRNA promoter rrnB P2 was used to measure the influence of starvation, growth temperature and sub-lethal heating on promoter expression and subsequent growth. The present study expands the models developed earlier to include a model which describes the change from exponential to linear increase in promoter expression with time when the exponential phase of growth commences. A two-phase linear model with Poisson weighting was used to estimate the lag (LPDLin) and the rate (RLin) for this linear increase in bioluminescence. The Spearman rank correlation coefficient (r=0.830) between the LPDLin and the growth lag phase (LPDOD) was extremely significant (P
The Relationship Between Surface Curvature and Abdominal Aortic Aneurysm Wall Stress.
de Galarreta, Sergio Ruiz; Cazón, Aitor; Antón, Raúl; Finol, Ender A
2017-08-01
The maximum diameter (MD) criterion is the most important factor when predicting risk of rupture of abdominal aortic aneurysms (AAAs). An elevated wall stress has also been linked to a high risk of aneurysm rupture, yet is an uncommon clinical practice to compute AAA wall stress. The purpose of this study is to assess whether other characteristics of the AAA geometry are statistically correlated with wall stress. Using in-house segmentation and meshing algorithms, 30 patient-specific AAA models were generated for finite element analysis (FEA). These models were subsequently used to estimate wall stress and maximum diameter and to evaluate the spatial distributions of wall thickness, cross-sectional diameter, mean curvature, and Gaussian curvature. Data analysis consisted of statistical correlations of the aforementioned geometry metrics with wall stress for the 30 AAA inner and outer wall surfaces. In addition, a linear regression analysis was performed with all the AAA wall surfaces to quantify the relationship of the geometric indices with wall stress. These analyses indicated that while all the geometry metrics have statistically significant correlations with wall stress, the local mean curvature (LMC) exhibits the highest average Pearson's correlation coefficient for both inner and outer wall surfaces. The linear regression analysis revealed coefficients of determination for the outer and inner wall surfaces of 0.712 and 0.516, respectively, with LMC having the largest effect on the linear regression equation with wall stress. This work underscores the importance of evaluating AAA mean wall curvature as a potential surrogate for wall stress.
Douglas, Alexander D.; Edwards, Nick J.; Duncan, Christopher J. A.; Thompson, Fiona M.; Sheehy, Susanne H.; O'Hara, Geraldine A.; Anagnostou, Nicholas; Walther, Michael; Webster, Daniel P.; Dunachie, Susanna J.; Porter, David W.; Andrews, Laura; Gilbert, Sarah C.; Draper, Simon J.; Hill, Adrian V. S.; Bejon, Philip
2013-01-01
Controlled human malaria infection is used to measure efficacy of candidate malaria vaccines before field studies are undertaken. Mathematical modeling using data from quantitative polymerase chain reaction (qPCR) parasitemia monitoring can discriminate between vaccine effects on the parasite's liver and blood stages. Uncertainty regarding the most appropriate modeling method hinders interpretation of such trials. We used qPCR data from 267 Plasmodium falciparum infections to compare linear, sine-wave, and normal-cumulative-density-function models. We find that the parameters estimated by these models are closely correlated, and their predictive accuracy for omitted data points was similar. We propose that future studies include the linear model. PMID:23570846
Characteristic correlation study of UV disinfection performance for ballast water treatment
NASA Astrophysics Data System (ADS)
Ba, Te; Li, Hongying; Osman, Hafiiz; Kang, Chang-Wei
2016-11-01
Characteristic correlation between ultraviolet disinfection performance and operating parameters, including ultraviolet transmittance (UVT), lamp power and water flow rate, was studied by numerical and experimental methods. A three-stage model was developed to simulate the fluid flow, UV radiation and the trajectories of microorganisms. Navier-Stokes equation with k-epsilon turbulence was solved to model the fluid flow, while discrete ordinates (DO) radiation model and discrete phase model (DPM) were used to introduce UV radiation and microorganisms trajectories into the model, respectively. The UV dose statistical distribution for the microorganisms was found to move to higher value with the increase of UVT and lamp power, but moves to lower value when the water flow rate increases. Further investigation shows that the fluence rate increases exponentially with UVT but linearly with the lamp power. The average and minimum resident time decreases linearly with the water flow rate while the maximum resident time decrease rapidly in a certain range. The current study can be used as a digital design and performance evaluation tool of the UV reactor for ballast water treatment.
Taki, Yasuyuki; Hashizume, Hiroshi; Thyreau, Benjamin; Sassa, Yuko; Takeuchi, Hikaru; Wu, Kai; Kotozaki, Yuka; Nouchi, Rui; Asano, Michiko; Asano, Kohei; Fukuda, Hiroshi; Kawashima, Ryuta
2013-08-01
We examined linear and curvilinear correlations of gray matter volume and density in cortical and subcortical gray matter with age using magnetic resonance images (MRI) in a large number of healthy children. We applied voxel-based morphometry (VBM) and region-of-interest (ROI) analyses with the Akaike information criterion (AIC), which was used to determine the best-fit model by selecting which predictor terms should be included. We collected data on brain structural MRI in 291 healthy children aged 5-18 years. Structural MRI data were segmented and normalized using a custom template by applying the diffeomorphic anatomical registration using exponentiated lie algebra (DARTEL) procedure. Next, we analyzed the correlations of gray matter volume and density with age in VBM with AIC by estimating linear, quadratic, and cubic polynomial functions. Several regions such as the prefrontal cortex, the precentral gyrus, and cerebellum showed significant linear or curvilinear correlations between gray matter volume and age on an increasing trajectory, and between gray matter density and age on a decreasing trajectory in VBM and ROI analyses with AIC. Because the trajectory of gray matter volume and density with age suggests the progress of brain maturation, our results may contribute to clarifying brain maturation in healthy children from the viewpoint of brain structure. Copyright © 2012 Wiley Periodicals, Inc.
Lenz, Kasia; McRae, Andrew; Wang, Dongmei; Higgins, Benjamin; Innes, Grant; Cook, Timothy; Lang, Eddy
2017-09-01
Absract OBJECTIVES: To evaluate the relationship between Emergency Physician (EP) productivity and patient satisfaction with Emergency Department (ED) care. This retrospective observational study linked administrative and patient experience databases to measure correlations between the patient experience and EP productivity. The study was performed across three Calgary EDs (from June 2010 to July 2013). Patients>16 years old with completed Health Quality Council of Alberta (HQCA) ED Patient Experience Surveys were included. EP productivity was measured at the individual physician level and defined as the average number of patients seen per hour. The association between physician productivity and patient experience scores from six composite domains of the HQCA ED Patient Experience Survey were examined using Pearson correlation coefficients, linear regression modelling, and a path analysis. We correlated 3,794 patient experience surveys with productivity data for 130 EPs. Very weak non-significant negative correlations existed between productivity and survey composites: "Staff Care and Communication" (r=-0.057, p=0.521), "Discharge Communication" (r=-0.144, p=0.102), and "Respect" (r=-0.027, p=0.760). Very weak, non-significant positive correlations existed between productivity and the composite domains: "Medication Communication" (r=0.003, p=0.974) and "Pain management" (r=0.020, p=0.824). A univariate general linear model yielded no statistically significant correlations between EP productivity and patient experience, and the path analysis failed to show a relationship between the variables. We found no correlation between EP productivity and the patient experience.
Nejaim, Yuri; Aps, Johan K M; Groppo, Francisco Carlos; Haiter Neto, Francisco
2018-06-01
The purpose of this article was to evaluate the pharyngeal space volume, and the size and shape of the mandible and the hyoid bone, as well as their relationships, in patients with different facial types and skeletal classes. Furthermore, we estimated the volume of the pharyngeal space with a formula using only linear measurements. A total of 161 i-CAT Next Generation (Imaging Sciences International, Hatfield, Pa) cone-beam computed tomography images (80 men, 81 women; ages, 21-58 years; mean age, 27 years) were retrospectively studied. Skeletal class and facial type were determined for each patient from multiplanar reconstructions using the NemoCeph software (Nemotec, Madrid, Spain). Linear and angular measurements were performed using 3D imaging software (version 3.4.3; Carestream Health, Rochester, NY), and volumetric analysis of the pharyngeal space was carried out with ITK-SNAP (version 2.4.0; Cognitica, Philadelphia, Pa) segmentation software. For the statistics, analysis of variance and the Tukey test with a significance level of 0.05, Pearson correlation, and linear regression were used. The pharyngeal space volume, when correlated with mandible and hyoid bone linear and angular measurements, showed significant correlations with skeletal class or facial type. The linear regression performed to estimate the volume of the pharyngeal space showed an R of 0.92 and an adjusted R 2 of 0.8362. There were significant correlations between pharyngeal space volume, and the mandible and hyoid bone measurements, suggesting that the stomatognathic system should be evaluated in an integral and nonindividualized way. Furthermore, it was possible to develop a linear regression model, resulting in a useful formula for estimating the volume of the pharyngeal space. Copyright © 2018 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Prediction of atmospheric degradation data for POPs by gene expression programming.
Luan, F; Si, H Z; Liu, H T; Wen, Y Y; Zhang, X Y
2008-01-01
Quantitative structure-activity relationship models for the prediction of the mean and the maximum atmospheric degradation half-life values of persistent organic pollutants were developed based on the linear heuristic method (HM) and non-linear gene expression programming (GEP). Molecular descriptors, calculated from the structures alone, were used to represent the characteristics of the compounds. HM was used both to pre-select the whole descriptor sets and to build the linear model. GEP yielded satisfactory prediction results: the square of the correlation coefficient r(2) was 0.80 and 0.81 for the mean and maximum half-life values of the test set, and the root mean square errors were 0.448 and 0.426, respectively. The results of this work indicate that the GEP is a very promising tool for non-linear approximations.
Testing Multi-Alternative Decision Models with Non-Stationary Evidence
Tsetsos, Konstantinos; Usher, Marius; McClelland, James L.
2011-01-01
Recent research has investigated the process of integrating perceptual evidence toward a decision, converging on a number of sequential sampling choice models, such as variants of race and diffusion models and the non-linear leaky competing accumulator (LCA) model. Here we study extensions of these models to multi-alternative choice, considering how well they can account for data from a psychophysical experiment in which the evidence supporting each of the alternatives changes dynamically during the trial, in a way that creates temporal correlations. We find that participants exhibit a tendency to choose an alternative whose evidence profile is temporally anti-correlated with (or dissimilar from) that of other alternatives. This advantage of the anti-correlated alternative is well accounted for in the LCA, and provides constraints that challenge several other models of multi-alternative choice. PMID:21603227
Testing multi-alternative decision models with non-stationary evidence.
Tsetsos, Konstantinos; Usher, Marius; McClelland, James L
2011-01-01
Recent research has investigated the process of integrating perceptual evidence toward a decision, converging on a number of sequential sampling choice models, such as variants of race and diffusion models and the non-linear leaky competing accumulator (LCA) model. Here we study extensions of these models to multi-alternative choice, considering how well they can account for data from a psychophysical experiment in which the evidence supporting each of the alternatives changes dynamically during the trial, in a way that creates temporal correlations. We find that participants exhibit a tendency to choose an alternative whose evidence profile is temporally anti-correlated with (or dissimilar from) that of other alternatives. This advantage of the anti-correlated alternative is well accounted for in the LCA, and provides constraints that challenge several other models of multi-alternative choice.
Kallehauge, Jesper F; Sourbron, Steven; Irving, Benjamin; Tanderup, Kari; Schnabel, Julia A; Chappell, Michael A
2017-06-01
Fitting tracer kinetic models using linear methods is much faster than using their nonlinear counterparts, although this comes often at the expense of reduced accuracy and precision. The aim of this study was to derive and compare the performance of the linear compartmental tissue uptake (CTU) model with its nonlinear version with respect to their percentage error and precision. The linear and nonlinear CTU models were initially compared using simulations with varying noise and temporal sampling. Subsequently, the clinical applicability of the linear model was demonstrated on 14 patients with locally advanced cervical cancer examined with dynamic contrast-enhanced magnetic resonance imaging. Simulations revealed equal percentage error and precision when noise was within clinical achievable ranges (contrast-to-noise ratio >10). The linear method was significantly faster than the nonlinear method, with a minimum speedup of around 230 across all tested sampling rates. Clinical analysis revealed that parameters estimated using the linear and nonlinear CTU model were highly correlated (ρ ≥ 0.95). The linear CTU model is computationally more efficient and more stable against temporal downsampling, whereas the nonlinear method is more robust to variations in noise. The two methods may be used interchangeably within clinical achievable ranges of temporal sampling and noise. Magn Reson Med 77:2414-2423, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richert, Ranko
2016-03-21
A model of non-linear dielectric polarization is studied in which the field induced entropy change is the source of polarization dependent retardation time constants. Numerical solutions for the susceptibilities of the system are obtained for parameters that represent the dynamic and thermodynamic behavior of glycerol. The calculations for high amplitude sinusoidal fields show a significant enhancement of the steady state loss for frequencies below that of the low field loss peak. Also at relatively low frequencies, the third harmonic susceptibility spectrum shows a “hump,” i.e., a maximum, with an amplitude that increases with decreasing temperature. Both of these non-linear effectsmore » are consistent with experimental evidence. While such features have been used to conclude on a temperature dependent number of dynamically correlated particles, N{sub corr}, the present result demonstrates that the third harmonic susceptibility display a peak with an amplitude that tracks the variation of the activation energy in a model that does not involve dynamical correlations or spatial scales.« less
On the computation of molecular surface correlations for protein docking using fourier techniques.
Sakk, Eric
2007-08-01
The computation of surface correlations using a variety of molecular models has been applied to the unbound protein docking problem. Because of the computational complexity involved in examining all possible molecular orientations, the fast Fourier transform (FFT) (a fast numerical implementation of the discrete Fourier transform (DFT)) is generally applied to minimize the number of calculations. This approach is rooted in the convolution theorem which allows one to inverse transform the product of two DFTs in order to perform the correlation calculation. However, such a DFT calculation results in a cyclic or "circular" correlation which, in general, does not lead to the same result as the linear correlation desired for the docking problem. In this work, we provide computational bounds for constructing molecular models used in the molecular surface correlation problem. The derived bounds are then shown to be consistent with various intuitive guidelines previously reported in the protein docking literature. Finally, these bounds are applied to different molecular models in order to investigate their effect on the correlation calculation.
NASA Astrophysics Data System (ADS)
Falvo, Cyril
2018-02-01
The theory of linear and non-linear infrared response of vibrational Holstein polarons in one-dimensional lattices is presented in order to identify the spectral signatures of self-trapping phenomena. Using a canonical transformation, the optical response is computed from the small polaron point of view which is valid in the anti-adiabatic limit. Two types of phonon baths are considered: optical phonons and acoustical phonons, and simple expressions are derived for the infrared response. It is shown that for the case of optical phonons, the linear response can directly probe the polaron density of states. The model is used to interpret the experimental spectrum of crystalline acetanilide in the C=O range. For the case of acoustical phonons, it is shown that two bound states can be observed in the two-dimensional infrared spectrum at low temperature. At high temperature, analysis of the time-dependence of the two-dimensional infrared spectrum indicates that bath mediated correlations slow down spectral diffusion. The model is used to interpret the experimental linear-spectroscopy of model α-helix and β-sheet polypeptides. This work shows that the Davydov Hamiltonian cannot explain the observations in the NH stretching range.
Wear-caused deflection evolution of a slide rail, considering linear and non-linear wear models
NASA Astrophysics Data System (ADS)
Kim, Dongwook; Quagliato, Luca; Park, Donghwi; Murugesan, Mohanraj; Kim, Naksoo; Hong, Seokmoo
2017-05-01
The research presented in this paper details an experimental-numerical approach for the quantitative correlation between wear and end-point deflection in a slide rail. Focusing the attention on slide rail utilized in white-goods applications, the aim is to evaluate the number of cycles the slide rail can operate, under different load conditions, before it should be replaced due to unacceptable end-point deflection. In this paper, two formulations are utilized to describe the wear: Archard model for the linear wear and Lemaitre damage model for the nonlinear wear. The linear wear gradually reduces the surface of the slide rail whereas the nonlinear one accounts for the surface element deletion (i.e. due to pitting). To determine the constants to use in the wear models, simple tension test and sliding wear test, by utilizing a designed and developed experiment machine, have been carried out. A full slide rail model simulation has been implemented in ABAQUS including both linear and non-linear wear models and the results have been compared with those of the real rails under different load condition, provided by the rail manufacturer. The comparison between numerically estimated and real rail results proved the reliability of the developed numerical model, limiting the error in a ±10% range. The proposed approach allows predicting the displacement vs cycle curves, parametrized for different loads and, based on a chosen failure criterion, to predict the lifetime of the rail.
Guimarães, Ricardo J P S; Freitas, Corina C; Dutra, Luciano V; Scholte, Ronaldo G C; Amaral, Ronaldo S; Drummond, Sandra C; Shimabukuro, Yosio E; Oliveira, Guilherme C; Carvalho, Omar S
2010-07-01
This paper analyses the associations between Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) on the prevalence of schistosomiasis and the presence of Biomphalaria glabrata in the state of Minas Gerais (MG), Brazil. Additionally, vegetation, soil and shade fraction images were created using a Linear Spectral Mixture Model (LSMM) from the blue, red and infrared channels of the Moderate Resolution Imaging Spectroradiometer spaceborne sensor and the relationship between these images and the prevalence of schistosomiasis and the presence of B. glabrata was analysed. First, we found a high correlation between the vegetation fraction image and EVI and second, a high correlation between soil fraction image and NDVI. The results also indicate that there was a positive correlation between prevalence and the vegetation fraction image (July 2002), a negative correlation between prevalence and the soil fraction image (July 2002) and a positive correlation between B. glabrata and the shade fraction image (July 2002). This paper demonstrates that the LSMM variables can be used as a substitute for the standard vegetation indices (EVI and NDVI) to determine and delimit risk areas for B. glabrata and schistosomiasis in MG, which can be used to improve the allocation of resources for disease control.
Water mass mixing: The dominant control on the zinc distribution in the North Atlantic Ocean
NASA Astrophysics Data System (ADS)
Roshan, Saeed; Wu, Jingfeng
2015-07-01
Dissolved zinc (dZn) concentration was determined in the North Atlantic during the U.S. GEOTRACES 2010 and 2011 cruise (GOETRACES GA03). A relatively poor linear correlation (R2 = 0.756) was observed between dZn and silicic acid (Si), the slope of which was 0.0577 nM/µmol/kg. We attribute the relatively poor dZn-Si correlation to the following processes: (a) differential regeneration of zinc relative to silicic acid, (b) mixing of multiple water masses that have different Zn/Si, and (c) zinc sources such as sedimentary or hydrothermal. To quantitatively distinguish these possibilities, we use the results of Optimum Multi-Parameter Water Mass Analysis by Jenkins et al. (2015) to model the zinc distribution below 500 m. We hypothesized two scenarios: conservative mixing and regenerative mixing. The first scenario (conservative) could be modeled to results in a correlation with observations with a R2 = 0.846. In the second scenario, we took a Si-related regeneration into account, which could model the observations with a R2 = 0.867. Through this regenerative mixing scenario, we estimated a Zn/Si = 0.0548 nM/µmol/kg that may be more realistic than linear regression slope due to accounting for process b. However, this did not improve the model substantially (R2 = 0.867 versus0.846), which may indicate the insignificant effect of remineralization on the zinc distribution in this region. The relative weakness in the model-observation correlation (R2~0.85 for both scenarios) implies that processes (a) and (c) may be plausible. Furthermore, dZn in the upper 500 m exhibited a very poor correlation with apparent oxygen utilization, suggesting a minimal role for the organic matter-associated remineralization process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Fengbin, E-mail: fblu@amss.ac.cn
This paper proposes a new time-varying coefficient vector autoregressions (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in choices of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which indicates that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor’s 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relationsmore » evolve with time and display complex characters. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained in the causal effects of S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model.« less
Lu, Fengbin; Qiao, Han; Wang, Shouyang; Lai, Kin Keung; Li, Yuze
2017-01-01
This paper proposes a new time-varying coefficient vector autoregressions (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in choices of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which indicates that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor's 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relations evolve with time and display complex characters. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained in the causal effects of S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model. Copyright © 2016 Elsevier Ltd. All rights reserved.
Non-parametric causality detection: An application to social media and financial data
NASA Astrophysics Data System (ADS)
Tsapeli, Fani; Musolesi, Mirco; Tino, Peter
2017-10-01
According to behavioral finance, stock market returns are influenced by emotional, social and psychological factors. Several recent works support this theory by providing evidence of correlation between stock market prices and collective sentiment indexes measured using social media data. However, a pure correlation analysis is not sufficient to prove that stock market returns are influenced by such emotional factors since both stock market prices and collective sentiment may be driven by a third unmeasured factor. Controlling for factors that could influence the study by applying multivariate regression models is challenging given the complexity of stock market data. False assumptions about the linearity or non-linearity of the model and inaccuracies on model specification may result in misleading conclusions. In this work, we propose a novel framework for causal inference that does not require any assumption about a particular parametric form of the model expressing statistical relationships among the variables of the study and can effectively control a large number of observed factors. We apply our method in order to estimate the causal impact that information posted in social media may have on stock market returns of four big companies. Our results indicate that social media data not only correlate with stock market returns but also influence them.
A polynomial based model for cell fate prediction in human diseases.
Ma, Lichun; Zheng, Jie
2017-12-21
Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e. correlation based and apoptosis pathway based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, within both the two considered gene selection methods, the prediction accuracies of polynomials of different degrees show little differences. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than others. When comparing the linear polynomials based on the two gene selection methods, it shows that although the accuracy of the linear polynomial that uses correlation analysis outcomes is a little higher (achieves 86.62%), the one within genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is a preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical study of cell development related diseases.
Rostami, Reza; Sadeghi, Vahid; Zarei, Jamileh; Haddadi, Parvaneh; Mohazzab-Torabi, Saman; Salamati, Payman
2013-04-01
The aim of this study was to compare the Persian version of the wechsler intelligence scale for children - fourth edition (WISC-IV) and cognitive assessment system (CAS) tests, to determine the correlation between their scales and to evaluate the probable concurrent validity of these tests in patients with learning disorders. One-hundered-sixty-two children with learning disorder who were presented at Atieh Comprehensive Psychiatry Center were selected in a consecutive non-randomized order. All of the patients were assessed based on WISC-IV and CAS scores questionnaires. Pearson correlation coefficient was used to analyze the correlation between the data and to assess the concurrent validity of the two tests. Linear regression was used for statistical modeling. The type one error was considered 5% in maximum. There was a strong correlation between total score of WISC-IV test and total score of CAS test in the patients (r=0.75, P<0.001). The correlations among the other scales were mostly high and all of them were statistically significant (P<0.001). A linear regression model was obtained (α = 0.51, β = 0.81 and P<0.001). There is an acceptable correlation between the WISC-IV scales and CAS test in children with learning disorders. A concurrent validity is established between the two tests and their scales.
Rostami, Reza; Sadeghi, Vahid; Zarei, Jamileh; Haddadi, Parvaneh; Mohazzab-Torabi, Saman; Salamati, Payman
2013-01-01
Objective The aim of this study was to compare the Persian version of the wechsler intelligence scale for children - fourth edition (WISC-IV) and cognitive assessment system (CAS) tests, to determine the correlation between their scales and to evaluate the probable concurrent validity of these tests in patients with learning disorders. Methods One-hundered-sixty-two children with learning disorder who were presented at Atieh Comprehensive Psychiatry Center were selected in a consecutive non-randomized order. All of the patients were assessed based on WISC-IV and CAS scores questionnaires. Pearson correlation coefficient was used to analyze the correlation between the data and to assess the concurrent validity of the two tests. Linear regression was used for statistical modeling. The type one error was considered 5% in maximum. Findings There was a strong correlation between total score of WISC-IV test and total score of CAS test in the patients (r=0.75, P<0.001). The correlations among the other scales were mostly high and all of them were statistically significant (P<0.001). A linear regression model was obtained (α = 0.51, β = 0.81 and P<0.001). Conclusion There is an acceptable correlation between the WISC-IV scales and CAS test in children with learning disorders. A concurrent validity is established between the two tests and their scales. PMID:23724180
Application of conditional moment tests to model checking for generalized linear models.
Pan, Wei
2002-06-01
Generalized linear models (GLMs) are increasingly being used in daily data analysis. However, model checking for GLMs with correlated discrete response data remains difficult. In this paper, through a case study on marginal logistic regression using a real data set, we illustrate the flexibility and effectiveness of using conditional moment tests (CMTs), along with other graphical methods, to do model checking for generalized estimation equation (GEE) analyses. Although CMTs provide an array of powerful diagnostic tests for model checking, they were originally proposed in the econometrics literature and, to our knowledge, have never been applied to GEE analyses. CMTs cover many existing tests, including the (generalized) score test for an omitted covariate, as special cases. In summary, we believe that CMTs provide a class of useful model checking tools.
Raabe, Joshua K.; Gardner, Beth; Hightower, Joseph E.
2013-01-01
We developed a spatial capture–recapture model to evaluate survival and activity centres (i.e., mean locations) of tagged individuals detected along a linear array. Our spatially explicit version of the Cormack–Jolly–Seber model, analyzed using a Bayesian framework, correlates movement between periods and can incorporate environmental or other covariates. We demonstrate the model using 2010 data for anadromous American shad (Alosa sapidissima) tagged with passive integrated transponders (PIT) at a weir near the mouth of a North Carolina river and passively monitored with an upstream array of PIT antennas. The river channel constrained migrations, resulting in linear, one-dimensional encounter histories that included both weir captures and antenna detections. Individual activity centres in a given time period were a function of the individual’s previous estimated location and the river conditions (i.e., gage height). Model results indicate high within-river spawning mortality (mean weekly survival = 0.80) and more extensive movements during elevated river conditions. This model is applicable for any linear array (e.g., rivers, shorelines, and corridors), opening new opportunities to study demographic parameters, movement or migration, and habitat use.
Characterizing multivariate decoding models based on correlated EEG spectral features.
McFarland, Dennis J
2013-07-01
Multivariate decoding methods are popular techniques for analysis of neurophysiological data. The present study explored potential interpretative problems with these techniques when predictors are correlated. Data from sensorimotor rhythm-based cursor control experiments was analyzed offline with linear univariate and multivariate models. Features were derived from autoregressive (AR) spectral analysis of varying model order which produced predictors that varied in their degree of correlation (i.e., multicollinearity). The use of multivariate regression models resulted in much better prediction of target position as compared to univariate regression models. However, with lower order AR features interpretation of the spectral patterns of the weights was difficult. This is likely to be due to the high degree of multicollinearity present with lower order AR features. Care should be exercised when interpreting the pattern of weights of multivariate models with correlated predictors. Comparison with univariate statistics is advisable. While multivariate decoding algorithms are very useful for prediction their utility for interpretation may be limited when predictors are correlated. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, L.; Han, X. X.; Ge, J.; Wang, C. H.
2018-01-01
To determine the relationship between compressive strength and flexural strength of pavement geopolymer grouting material, 20 groups of geopolymer grouting materials were prepared, the compressive strength and flexural strength were determined by mechanical properties test. On the basis of excluding the abnormal values through boxplot, the results show that, the compressive strength test results were normal, but there were two mild outliers in 7days flexural strength test. The compressive strength and flexural strength were linearly fitted by SPSS, six regression models were obtained by linear fitting of compressive strength and flexural strength. The linear relationship between compressive strength and flexural strength can be better expressed by the cubic curve model, and the correlation coefficient was 0.842.
Disordered wires and quantum chaos in a momentum-space lattice
NASA Astrophysics Data System (ADS)
Meier, Eric; An, Fangzhao; Angonga, Jackson; Gadway, Bryce
2017-04-01
We present two topics: topological wires subjected to disorder and quantum chaos in a spin-J model. These studies are experimentally realized through the use of a momentum-space lattice, in which the dynamics of 87Rb atoms are recorded. In topological wires, a transition to a trivial phase is seen when disorder is applied to either the tunneling strengths or site energies. This transition is detected using both charge-pumping and Hamiltonian-quenching techniques. In the spin-J study we observe the effects of both linear and non-linear spin operations by measuring the linear entropy of the system as well as the out-of-time order correlation function. We further probe the chaotic signatures of the paradigmatic kicked top model.
Accuracy of Robotic Radiosurgical Liver Treatment Throughout the Respiratory Cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, Jeff D.; Wong, Raimond; Swaminath, Anand
Purpose: To quantify random uncertainties in robotic radiosurgical treatment of liver lesions with real-time respiratory motion management. Methods and Materials: We conducted a retrospective analysis of 27 liver cancer patients treated with robotic radiosurgery over 118 fractions. The robotic radiosurgical system uses orthogonal x-ray images to determine internal target position and correlates this position with an external surrogate to provide robotic corrections of linear accelerator positioning. Verification and update of this internal–external correlation model was achieved using periodic x-ray images collected throughout treatment. To quantify random uncertainties in targeting, we analyzed logged tracking information and isolated x-ray images collected immediately beforemore » beam delivery. For translational correlation errors, we quantified the difference between correlation model–estimated target position and actual position determined by periodic x-ray imaging. To quantify prediction errors, we computed the mean absolute difference between the predicted coordinates and actual modeled position calculated 115 milliseconds later. We estimated overall random uncertainty by quadratically summing correlation, prediction, and end-to-end targeting errors. We also investigated relationships between tracking errors and motion amplitude using linear regression. Results: The 95th percentile absolute correlation errors in each direction were 2.1 mm left–right, 1.8 mm anterior–posterior, 3.3 mm cranio–caudal, and 3.9 mm 3-dimensional radial, whereas 95th percentile absolute radial prediction errors were 0.5 mm. Overall 95th percentile random uncertainty was 4 mm in the radial direction. Prediction errors were strongly correlated with modeled target amplitude (r=0.53-0.66, P<.001), whereas only weak correlations existed for correlation errors. Conclusions: Study results demonstrate that model correlation errors are the primary random source of uncertainty in Cyberknife liver treatment and, unlike prediction errors, are not strongly correlated with target motion amplitude. Aggregate 3-dimensional radial position errors presented here suggest the target will be within 4 mm of the target volume for 95% of the beam delivery.« less
Biostatistics Series Module 6: Correlation and Linear Regression.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Correlation and linear regression are the most commonly used techniques for quantifying the association between two numeric variables. Correlation quantifies the strength of the linear relationship between paired variables, expressing this as a correlation coefficient. If both variables x and y are normally distributed, we calculate Pearson's correlation coefficient ( r ). If normality assumption is not met for one or both variables in a correlation analysis, a rank correlation coefficient, such as Spearman's rho (ρ) may be calculated. A hypothesis test of correlation tests whether the linear relationship between the two variables holds in the underlying population, in which case it returns a P < 0.05. A 95% confidence interval of the correlation coefficient can also be calculated for an idea of the correlation in the population. The value r 2 denotes the proportion of the variability of the dependent variable y that can be attributed to its linear relation with the independent variable x and is called the coefficient of determination. Linear regression is a technique that attempts to link two correlated variables x and y in the form of a mathematical equation ( y = a + bx ), such that given the value of one variable the other may be predicted. In general, the method of least squares is applied to obtain the equation of the regression line. Correlation and linear regression analysis are based on certain assumptions pertaining to the data sets. If these assumptions are not met, misleading conclusions may be drawn. The first assumption is that of linear relationship between the two variables. A scatter plot is essential before embarking on any correlation-regression analysis to show that this is indeed the case. Outliers or clustering within data sets can distort the correlation coefficient value. Finally, it is vital to remember that though strong correlation can be a pointer toward causation, the two are not synonymous.
Biostatistics Series Module 6: Correlation and Linear Regression
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Correlation and linear regression are the most commonly used techniques for quantifying the association between two numeric variables. Correlation quantifies the strength of the linear relationship between paired variables, expressing this as a correlation coefficient. If both variables x and y are normally distributed, we calculate Pearson's correlation coefficient (r). If normality assumption is not met for one or both variables in a correlation analysis, a rank correlation coefficient, such as Spearman's rho (ρ) may be calculated. A hypothesis test of correlation tests whether the linear relationship between the two variables holds in the underlying population, in which case it returns a P < 0.05. A 95% confidence interval of the correlation coefficient can also be calculated for an idea of the correlation in the population. The value r2 denotes the proportion of the variability of the dependent variable y that can be attributed to its linear relation with the independent variable x and is called the coefficient of determination. Linear regression is a technique that attempts to link two correlated variables x and y in the form of a mathematical equation (y = a + bx), such that given the value of one variable the other may be predicted. In general, the method of least squares is applied to obtain the equation of the regression line. Correlation and linear regression analysis are based on certain assumptions pertaining to the data sets. If these assumptions are not met, misleading conclusions may be drawn. The first assumption is that of linear relationship between the two variables. A scatter plot is essential before embarking on any correlation-regression analysis to show that this is indeed the case. Outliers or clustering within data sets can distort the correlation coefficient value. Finally, it is vital to remember that though strong correlation can be a pointer toward causation, the two are not synonymous. PMID:27904175
Ghoreishi, Mohammad; Abdi-Shahshahani, Mehdi; Peyman, Alireza; Pourazizi, Mohsen
2018-02-21
The aim of this study was to determine the correlation between ocular biometric parameters and sulcus-to-sulcus (STS) diameter. This was a cross-sectional study of preoperative ocular biometry data of patients who were candidates for phakic intraocular lens (IOL) surgery. Subjects underwent ocular biometry analysis, including refraction error evaluation using an autorefractor and Orbscan topography for white-to-white (WTW) corneal diameter and measurement. Pentacam was used to perform WTW corneal diameter and measurements of minimum and maximum keratometry (K). Measurements of STS and angle-to-angle (ATA) were obtained using a 50-MHz B-mode ultrasound device. Anterior optical coherence tomography was performed for anterior chamber depth measurement. Pearson's correlation test and stepwise linear regression analysis were used to find a model to predict STS. Fifty-eight eyes of 58 patients were enrolled. Mean age ± standard deviation of sample was 28.95 ± 6.04 years. The Pearson's correlation coefficient between STS with WTW, ATA, mean K was 0.383, 0.492, and - 0.353, respectively, which was statistically significant (all P < 0.001). Using stepwise linear regression analysis, there is a statistically significant association between STS with WTW (P = 0.011) and mean K (P = 0.025). The standardized coefficient was 0.323 and - 0.284 for WTW and mean K, respectively. The stepwise linear regression analysis equation was: (STS = 9.549 + 0.518 WTW - 0.083 mean K). Based on our result, given the correlation of STS with WTW and mean K and potential of direct and essay measurement of WTW and mean K, it seems that current IOL sizing protocols could be estimating with WTW and mean K.
A quantitative quantum chemical model of the Dewar-Knott color rule for cationic diarylmethanes
NASA Astrophysics Data System (ADS)
Olsen, Seth
2012-04-01
We document the quantitative manifestation of the Dewar-Knott color rule in a four-electron, three-orbital state-averaged complete active space self-consistent field (SA-CASSCF) model of a series of bridge-substituted cationic diarylmethanes. We show that the lowest excitation energies calculated using multireference perturbation theory based on the model are linearly correlated with the development of hole density in an orbital localized on the bridge, and the depletion of pair density in the same orbital. We quantitatively express the correlation in the form of a generalized Hammett equation.
An efficient method for model refinement in diffuse optical tomography
NASA Astrophysics Data System (ADS)
Zirak, A. R.; Khademi, M.
2007-11-01
Diffuse optical tomography (DOT) is a non-linear, ill-posed, boundary value and optimization problem which necessitates regularization. Also, Bayesian methods are suitable owing to measurements data are sparse and correlated. In such problems which are solved with iterative methods, for stabilization and better convergence, the solution space must be small. These constraints subject to extensive and overdetermined system of equations which model retrieving criteria specially total least squares (TLS) must to refine model error. Using TLS is limited to linear systems which is not achievable when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) for treating on linearized DOT problem, having maximum a posteriori (MAP) estimator and Tikhonov regulator. This is done with combination Bayesian and regularization tools as preconditioner matrices, applying them to equations and then using RTLS to the resulting linear equations. The preconditioning matrixes are guided by patient specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that proposed method improves the image reconstruction performance and localize the abnormally well.
Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures.
Palva, J Matias; Wang, Sheng H; Palva, Satu; Zhigalov, Alexander; Monto, Simo; Brookes, Matthew J; Schoffelen, Jan-Mathijs; Jerbi, Karim
2018-06-01
When combined with source modeling, magneto- (MEG) and electroencephalography (EEG) can be used to study long-range interactions among cortical processes non-invasively. Estimation of such inter-areal connectivity is nevertheless hindered by instantaneous field spread and volume conduction, which artificially introduce linear correlations and impair source separability in cortical current estimates. To overcome the inflating effects of linear source mixing inherent to standard interaction measures, alternative phase- and amplitude-correlation based connectivity measures, such as imaginary coherence and orthogonalized amplitude correlation have been proposed. Being by definition insensitive to zero-lag correlations, these techniques have become increasingly popular in the identification of correlations that cannot be attributed to field spread or volume conduction. We show here, however, that while these measures are immune to the direct effects of linear mixing, they may still reveal large numbers of spurious false positive connections through field spread in the vicinity of true interactions. This fundamental problem affects both region-of-interest-based analyses and all-to-all connectome mappings. Most importantly, beyond defining and illustrating the problem of spurious, or "ghost" interactions, we provide a rigorous quantification of this effect through extensive simulations. Additionally, we further show that signal mixing also significantly limits the separability of neuronal phase and amplitude correlations. We conclude that spurious correlations must be carefully considered in connectivity analyses in MEG/EEG source space even when using measures that are immune to zero-lag correlations. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Nakano, Masahiko; Yoshikawa, Takeshi; Hirata, So; Seino, Junji; Nakai, Hiromi
2017-11-05
We have implemented a linear-scaling divide-and-conquer (DC)-based higher-order coupled-cluster (CC) and Møller-Plesset perturbation theories (MPPT) as well as their combinations automatically by means of the tensor contraction engine, which is a computerized symbolic algebra system. The DC-based energy expressions of the standard CC and MPPT methods and the CC methods augmented with a perturbation correction were proposed for up to high excitation orders [e.g., CCSDTQ, MP4, and CCSD(2) TQ ]. The numerical assessment for hydrogen halide chains, polyene chains, and first coordination sphere (C1) model of photoactive yellow protein has revealed that the DC-based correlation methods provide reliable correlation energies with significantly less computational cost than that of the conventional implementations. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
Linear and non-linear bias: predictions versus measurements
NASA Astrophysics Data System (ADS)
Hoffmann, K.; Bel, J.; Gaztañaga, E.
2017-02-01
We study the linear and non-linear bias parameters which determine the mapping between the distributions of galaxies and the full matter density fields, comparing different measurements and predictions. Associating galaxies with dark matter haloes in the Marenostrum Institut de Ciències de l'Espai (MICE) Grand Challenge N-body simulation, we directly measure the bias parameters by comparing the smoothed density fluctuations of haloes and matter in the same region at different positions as a function of smoothing scale. Alternatively, we measure the bias parameters by matching the probability distributions of halo and matter density fluctuations, which can be applied to observations. These direct bias measurements are compared to corresponding measurements from two-point and different third-order correlations, as well as predictions from the peak-background model, which we presented in previous papers using the same data. We find an overall variation of the linear bias measurements and predictions of ˜5 per cent with respect to results from two-point correlations for different halo samples with masses between ˜1012and1015 h-1 M⊙ at the redshifts z = 0.0 and 0.5. Variations between the second- and third-order bias parameters from the different methods show larger variations, but with consistent trends in mass and redshift. The various bias measurements reveal a tight relation between the linear and the quadratic bias parameters, which is consistent with results from the literature based on simulations with different cosmologies. Such a universal relation might improve constraints on cosmological models, derived from second-order clustering statistics at small scales or higher order clustering statistics.
Cosmological velocity correlations - Observations and model predictions
NASA Technical Reports Server (NTRS)
Gorski, Krzysztof M.; Davis, Marc; Strauss, Michael A.; White, Simon D. M.; Yahil, Amos
1989-01-01
By applying the present simple statistics for two-point cosmological peculiar velocity-correlation measurements to the actual data sets of the Local Supercluster spiral galaxy of Aaronson et al. (1982) and the elliptical galaxy sample of Burstein et al. (1987), as well as to the velocity field predicted by the distribution of IRAS galaxies, a coherence length of 1100-1600 km/sec is obtained. Coherence length is defined as that separation at which the correlations drop to half their zero-lag value. These results are compared with predictions from two models of large-scale structure formation: that of cold dark matter and that of baryon isocurvature proposed by Peebles (1980). N-body simulations of these models are performed to check the linear theory predictions and measure sampling fluctuations.
General Model of Photon-Pair Detection with an Image Sensor
NASA Astrophysics Data System (ADS)
Defienne, Hugo; Reichert, Matthew; Fleischer, Jason W.
2018-05-01
We develop an analytic model that relates intensity correlation measurements performed by an image sensor to the properties of photon pairs illuminating it. Experiments using an effective single-photon counting camera, a linear electron-multiplying charge-coupled device camera, and a standard CCD camera confirm the model. The results open the field of quantum optical sensing using conventional detectors.
The Angular Three-Point Correlation Function in the Quasi-linear Regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buchalter, Ari; Kamionkowski, Marc; Jaffe, Andrew H.
2000-02-10
We calculate the normalized angular three-point correlation function (3PCF), q, as well as the normalized angular skewness, s{sub 3}, assuming the small-angle approximation, for a biased mass distribution in flat and open cold dark matter (CDM) models with Gaussian initial conditions. The leading-order perturbative results incorporate the explicit dependence on the cosmological parameters, the shape of the CDM transfer function, the linear evolution of the power spectrum, the form of the assumed redshift distribution function, and linear and nonlinear biasing, which may be evolving. Results are presented for different redshift distributions, including that appropriate for the APM Galaxy Survey, asmore » well as for a survey with a mean redshift of z{approx_equal}1 (such as the VLA FIRST Survey). Qualitatively, many of the results found for s{sub 3} and q are similar to those obtained in a related treatment of the spatial skewness and 3PCF, such as a leading-order correction to the standard result for s{sub 3} in the case of nonlinear bias (as defined for unsmoothed density fields), and the sensitivity of the configuration dependence of q to both cosmological and biasing models. We show that since angular correlation functions (CFs) are sensitive to clustering over a range of redshifts, the various evolutionary dependences included in our predictions imply that measurements of q in a deep survey might better discriminate between models with different histories, such as evolving versus nonevolving bias, that can have similar spatial CFs at low redshift. Our calculations employ a derived equation, valid for open, closed, and flat models, to obtain the angular bispectrum from the spatial bispectrum in the small-angle approximation. (c) (c) 2000. The American Astronomical Society.« less
Analysis of Cross-Sectional Univariate Measurements for Family Dyads Using Linear Mixed Modeling
Knafl, George J.; Dixon, Jane K.; O'Malley, Jean P.; Grey, Margaret; Deatrick, Janet A.; Gallo, Agatha M.; Knafl, Kathleen A.
2010-01-01
Outcome measurements from members of the same family are likely correlated. Such intrafamilial correlation (IFC) is an important dimension of the family as a unit but is not always accounted for in analyses of family data. This article demonstrates the use of linear mixed modeling to account for IFC in the important special case of univariate measurements for family dyads collected at a single point in time. Example analyses of data from partnered parents having a child with a chronic condition on their child's adaptation to the condition and on the family's general functioning and management of the condition are provided. Analyses of this kind are reasonably straightforward to generate with popular statistical tools. Thus, it is recommended that IFC be reported as standard practice reflecting the fact that a family dyad is more than just the aggregate of two individuals. Moreover, not accounting for IFC can affect the conclusions. PMID:19307316
NASA Technical Reports Server (NTRS)
Langley, P. G.
1981-01-01
A method of relating different classifications at each stage of a multistage, multiresource inventory using remotely sensed imagery is discussed. A class transformation matrix allowing the conversion of a set of proportions at one stage, to a set of proportions at the subsequent stage through use of a linear model, is described. The technique was tested by applying it to Kershaw County, South Carolina. Unsupervised LANDSAT spectral classifications were correlated with interpretations of land use aerial photography, the correlations employed to estimate land use classifications using the linear model, and the land use proportions used to stratify current annual increment (CAI) field plot data to obtain a total CAI for the county. The estimate differed by 1% from the published figure for land use. Potential sediment loss and a variety of land use classifications were also obtained.
Caricato, Marco
2018-04-07
We report the theory and the implementation of the linear response function of the coupled cluster (CC) with the single and double excitations method combined with the polarizable continuum model of solvation, where the correlation solvent response is approximated with the perturbation theory with energy and singles density (PTES) scheme. The singles name is derived from retaining only the contribution of the CC single excitation amplitudes to the correlation density. We compare the PTES working equations with those of the full-density (PTED) method. We then test the PTES scheme on the evaluation of excitation energies and transition dipoles of solvated molecules, as well as of the isotropic polarizability and specific rotation. Our results show a negligible difference between the PTED and PTES schemes, while the latter affords a significantly reduced computational cost. This scheme is general and can be applied to any solvation model that includes mutual solute-solvent polarization, including explicit models. Therefore, the PTES scheme is a competitive approach to compute response properties of solvated systems using CC methods.
NASA Astrophysics Data System (ADS)
Caricato, Marco
2018-04-01
We report the theory and the implementation of the linear response function of the coupled cluster (CC) with the single and double excitations method combined with the polarizable continuum model of solvation, where the correlation solvent response is approximated with the perturbation theory with energy and singles density (PTES) scheme. The singles name is derived from retaining only the contribution of the CC single excitation amplitudes to the correlation density. We compare the PTES working equations with those of the full-density (PTED) method. We then test the PTES scheme on the evaluation of excitation energies and transition dipoles of solvated molecules, as well as of the isotropic polarizability and specific rotation. Our results show a negligible difference between the PTED and PTES schemes, while the latter affords a significantly reduced computational cost. This scheme is general and can be applied to any solvation model that includes mutual solute-solvent polarization, including explicit models. Therefore, the PTES scheme is a competitive approach to compute response properties of solvated systems using CC methods.
Estimation of value at risk in currency exchange rate portfolio using asymmetric GJR-GARCH Copula
NASA Astrophysics Data System (ADS)
Nurrahmat, Mohamad Husein; Noviyanti, Lienda; Bachrudin, Achmad
2017-03-01
In this study, we discuss the problem in measuring the risk in a portfolio based on value at risk (VaR) using asymmetric GJR-GARCH Copula. The approach based on the consideration that the assumption of normality over time for the return can not be fulfilled, and there is non-linear correlation for dependent model structure among the variables that lead to the estimated VaR be inaccurate. Moreover, the leverage effect also causes the asymmetric effect of dynamic variance and shows the weakness of the GARCH models due to its symmetrical effect on conditional variance. Asymmetric GJR-GARCH models are used to filter the margins while the Copulas are used to link them together into a multivariate distribution. Then, we use copulas to construct flexible multivariate distributions with different marginal and dependence structure, which is led to portfolio joint distribution does not depend on the assumptions of normality and linear correlation. VaR obtained by the analysis with confidence level 95% is 0.005586. This VaR derived from the best Copula model, t-student Copula with marginal distribution of t distribution.
An in-situ Raman study on pristane at high pressure and ambient temperature
NASA Astrophysics Data System (ADS)
Wu, Jia; Ni, Zhiyong; Wang, Shixia; Zheng, Haifei
2018-01-01
The Csbnd H Raman spectroscopic band (2800-3000 cm-1) of pristane was measured in a diamond anvil cell at 1.1-1532 MPa and ambient temperature. Three models are used for the peak-fitting of this Csbnd H Raman band, and the linear correlations between pressure and corresponding peak positions are calculated as well. The results demonstrate that 1) the number of peaks that one chooses to fit the spectrum affects the results, which indicates that the application of the spectroscopic barometry with a function group of organic matters suffers significant limitations; and 2) the linear correlation between pressure and fitted peak positions from one-peak model is more superior than that from multiple-peak model, meanwhile the standard error of the latter is much higher than that of the former. It indicates that the Raman shift of Csbnd H band fitted with one-peak model, which could be treated as a spectroscopic barometry, is more realistic in mixture systems than the traditional strategy which uses the Raman characteristic shift of one function group.
NASA Astrophysics Data System (ADS)
Huang, J.; Kang, Q.; Yang, J. X.; Jin, P. W.
2017-08-01
The surface runoff and soil infiltration exert significant influence on soil erosion. The effects of slope gradient/length (SG/SL), individual rainfall amount/intensity (IRA/IRI), vegetation cover (VC) and antecedent soil moisture (ASM) on the runoff depth (RD) and soil infiltration (INF) were evaluated in a series of natural rainfall experiments in the South of China. RD is found to correlate positively with IRA, IRI, and ASM factors and negatively with SG and VC. RD decreased followed by its increase with SG and ASM, it increased with a further decrease with SL, exhibited a linear growth with IRA and IRI, and exponential drop with VC. Meanwhile, INF exhibits a positive correlation with SL, IRA and IRI and VC, and a negative one with SG and ASM. INF was going up and then down with SG, linearly rising with SL, IRA and IRI, increasing by a logit function with VC, and linearly falling with ASM. The VC level above 60% can effectively lower the surface runoff and significantly enhance soil infiltration. Two RD and INF prediction models, accounting for the above six factors, were constructed using the multiple nonlinear regression method. The verification of those models disclosed a high Nash-Sutcliffe coefficient and low root-mean-square error, demonstrating good predictability of both models.
Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin
2011-01-01
Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
HCMM hydrological analysis in Utah
NASA Technical Reports Server (NTRS)
Miller, A. W. (Principal Investigator)
1982-01-01
The feasibility of applying a linear model to HCMM data in hopes of obtaining an accurate linear correlation was investigated. The relationship among HCMM sensed data surface temperature and red reflectivity on Utah Lake and water quality factors including algae concentrations, algae type, and nutrient and turbidity concentrations was established and evaluated. Correlation (composite) images of day infrared and reflectance imagery were assessed to determine if remote sensing offers the capability of using masses of accurate and comprehensive data in calculating evaporation. The effects of algae on temperature and evaporation were studied and the possibility of using satellite thermal data to locate areas within Utah Lake where significant thermal sources exist and areas of near surface groundwater was examined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gandhi, P.; Dhillon, V. S.; Durant, M.
2010-07-15
In a fast multi-wavelength timing study of black hole X-ray binaries (BHBs), we have discovered correlated optical and X-ray variability in the low/hard state of two sources: GX 339-4 and SWIFT J1753.5-0127. After XTE J1118+480, these are the only BHBs currently known to show rapid (sub-second) aperiodic optical flickering. Our simultaneous VLT/Ultracam and RXTE data reveal intriguing patterns with characteristic peaks, dips and lags down to very short timescales. Simple linear reprocessing models can be ruled out as the origin of the rapid, aperiodic optical power in both sources. A magnetic energy release model with fast interactions between the disk,more » jet and corona can explain the complex correlation patterns. We also show that in both the optical and X-ray light curves, the absolute source variability r.m.s. amplitude linearly increases with flux, and that the flares have a log-normal distribution. The implication is that variability at both wavelengths is not due to local fluctuations alone, but rather arises as a result of coupling of perturbations over a wide range of radii and timescales. These 'optical and X-ray rms-flux relations' thus provide new constraints to connect the outer and inner parts of the accretion flow, and the jet.« less
Palenzuela, D O; Benítez, J; Rivero, J; Serrano, R; Ganzó, O
1997-10-13
In the present work a concept proposed in 1992 by Dopotka and Giesendorf was applied to the quantitative analysis of antibodies to the p24 protein of HIV-1 in infected asymptomatic individuals and AIDS patients. Two approaches were analyzed, a linear model OD = b0 + b1.log(titer) and a nonlinear log(titer) = alpha.OD beta, similar to the Dopotka-Giesendorf's model. The above two proposed models adequately fit the dependence of the optical density values at a single point dilution, and titers achieved by the end point dilution method (EPDM). Nevertheless, the nonlinear model better fits the experimental data, according to residuals analysis. Classical EPDM was compared with the new single point dilution method (SPDM) using both models. The best correlation between titers calculated using both models and titers achieved by EPDM was obtained with the nonlinear model. The correlation coefficients for the nonlinear and linear models were r = 0.85 and r = 0.77, respectively. A new correction factor was introduced into the nonlinear model and this reduced the day-to-day variation of titer values. In general, SPDM saves time, reagents and is more precise and sensitive to changes in antibody levels, and therefore has a higher resolution than EPDM.
Analysis Of Navy Hornet Squadron Mishap Costs With Regard To Previously Flown Flight Hours
2017-06-01
mishaps occur more frequently in a squadron when flight hours are reduced. This thesis correlates F/A-18 Hornet and Super Hornet squadron previously... correlated to the flight hours flown during the previous three and six months. A linear multivariate model was developed and used to analyze a dataset...hours are reduced. This thesis correlates F/A-18 Hornet and Super Hornet squadron previously flown flight hours with mishap costs. It uses a macro
Linear and nonlinear methods in modeling the aqueous solubility of organic compounds.
Catana, Cornel; Gao, Hua; Orrenius, Christian; Stouten, Pieter F W
2005-01-01
Solubility data for 930 diverse compounds have been analyzed using linear Partial Least Square (PLS) and nonlinear PLS methods, Continuum Regression (CR), and Neural Networks (NN). 1D and 2D descriptors from MOE package in combination with E-state or ISIS keys have been used. The best model was obtained using linear PLS for a combination between 22 MOE descriptors and 65 ISIS keys. It has a correlation coefficient (r2) of 0.935 and a root-mean-square error (RMSE) of 0.468 log molar solubility (log S(w)). The model validated on a test set of 177 compounds not included in the training set has r2 0.911 and RMSE 0.475 log S(w). The descriptors were ranked according to their importance, and at the top of the list have been found the 22 MOE descriptors. The CR model produced results as good as PLS, and because of the way in which cross-validation has been done it is expected to be a valuable tool in prediction besides PLS model. The statistics obtained using nonlinear methods did not surpass those got with linear ones. The good statistic obtained for linear PLS and CR recommends these models to be used in prediction when it is difficult or impossible to make experimental measurements, for virtual screening, combinatorial library design, and efficient leads optimization.
Hydration and vibrational dynamics of betaine (N,N,N-trimethylglycine)
NASA Astrophysics Data System (ADS)
Li, Tanping; Cui, Yaowen; Mathaga, John; Kumar, Revati; Kuroda, Daniel G.
2015-06-01
Zwitterions are naturally occurring molecules that have a positive and a negative charge group in its structure and are of great importance in many areas of science. Here, the vibrational and hydration dynamics of the zwitterionic system betaine (N,N,N-trimethylglycine) is reported. The linear infrared spectrum of aqueous betaine exhibits an asymmetric band in the 1550-1700 cm-1 region of the spectrum. This band is attributed to the carboxylate asymmetric stretch of betaine. The potential of mean force computed from ab initio molecular dynamic simulations confirms that the two observed transitions of the linear spectrum are related to two different betaine conformers present in solution. A model of the experimental data using non-linear response theory agrees very well with a vibrational model comprising of two vibrational transitions. In addition, our modeling shows that spectral parameters such as the slope of the zeroth contour plot and central line slope are both sensitive to the presence of overlapping transitions. The vibrational dynamics of the system reveals an ultrafast decay of the vibrational population relaxation as well as the correlation of frequency-frequency correlation function (FFCF). A decay of ˜0.5 ps is observed for the FFCF correlation time and is attributed to the frequency fluctuations caused by the motions of water molecules in the solvation shell. The comparison of the experimental observations with simulations of the FFCF from ab initio molecular dynamics and a density functional theory frequency map shows a very good agreement corroborating the correct characterization and assignment of the derived parameters.
Genetic overlap between diagnostic subtypes of ischemic stroke.
Holliday, Elizabeth G; Traylor, Matthew; Malik, Rainer; Bevan, Steve; Falcone, Guido; Hopewell, Jemma C; Cheng, Yu-Ching; Cotlarciuc, Ioana; Bis, Joshua C; Boerwinkle, Eric; Boncoraglio, Giorgio B; Clarke, Robert; Cole, John W; Fornage, Myriam; Furie, Karen L; Ikram, M Arfan; Jannes, Jim; Kittner, Steven J; Lincz, Lisa F; Maguire, Jane M; Meschia, James F; Mosley, Thomas H; Nalls, Mike A; Oldmeadow, Christopher; Parati, Eugenio A; Psaty, Bruce M; Rothwell, Peter M; Seshadri, Sudha; Scott, Rodney J; Sharma, Pankaj; Sudlow, Cathie; Wiggins, Kerri L; Worrall, Bradford B; Rosand, Jonathan; Mitchell, Braxton D; Dichgans, Martin; Markus, Hugh S; Levi, Christopher; Attia, John; Wray, Naomi R
2015-03-01
Despite moderate heritability, the phenotypic heterogeneity of ischemic stroke has hampered gene discovery, motivating analyses of diagnostic subtypes with reduced sample sizes. We assessed evidence for a shared genetic basis among the 3 major subtypes: large artery atherosclerosis (LAA), cardioembolism, and small vessel disease (SVD), to inform potential cross-subtype analyses. Analyses used genome-wide summary data for 12 389 ischemic stroke cases (including 2167 LAA, 2405 cardioembolism, and 1854 SVD) and 62 004 controls from the Metastroke consortium. For 4561 cases and 7094 controls, individual-level genotype data were also available. Genetic correlations between subtypes were estimated using linear mixed models and polygenic profile scores. Meta-analysis of a combined LAA-SVD phenotype (4021 cases and 51 976 controls) was performed to identify shared risk alleles. High genetic correlation was identified between LAA and SVD using linear mixed models (rg=0.96, SE=0.47, P=9×10(-4)) and profile scores (rg=0.72; 95% confidence interval, 0.52-0.93). Between LAA and cardioembolism and SVD and cardioembolism, correlation was moderate using linear mixed models but not significantly different from zero for profile scoring. Joint meta-analysis of LAA and SVD identified strong association (P=1×10(-7)) for single nucleotide polymorphisms near the opioid receptor μ1 (OPRM1) gene. Our results suggest that LAA and SVD, which have been hitherto treated as genetically distinct, may share a substantial genetic component. Combined analyses of LAA and SVD may increase power to identify small-effect alleles influencing shared pathophysiological processes. © 2015 American Heart Association, Inc.
Hydration and vibrational dynamics of betaine (N,N,N-trimethylglycine)
Li, Tanping; Cui, Yaowen; Mathaga, John; Kumar, Revati; Kuroda, Daniel G.
2015-01-01
Zwitterions are naturally occurring molecules that have a positive and a negative charge group in its structure and are of great importance in many areas of science. Here, the vibrational and hydration dynamics of the zwitterionic system betaine (N,N,N-trimethylglycine) is reported. The linear infrared spectrum of aqueous betaine exhibits an asymmetric band in the 1550-1700 cm−1 region of the spectrum. This band is attributed to the carboxylate asymmetric stretch of betaine. The potential of mean force computed from ab initio molecular dynamic simulations confirms that the two observed transitions of the linear spectrum are related to two different betaine conformers present in solution. A model of the experimental data using non-linear response theory agrees very well with a vibrational model comprising of two vibrational transitions. In addition, our modeling shows that spectral parameters such as the slope of the zeroth contour plot and central line slope are both sensitive to the presence of overlapping transitions. The vibrational dynamics of the system reveals an ultrafast decay of the vibrational population relaxation as well as the correlation of frequency-frequency correlation function (FFCF). A decay of ∼0.5 ps is observed for the FFCF correlation time and is attributed to the frequency fluctuations caused by the motions of water molecules in the solvation shell. The comparison of the experimental observations with simulations of the FFCF from ab initio molecular dynamics and a density functional theory frequency map shows a very good agreement corroborating the correct characterization and assignment of the derived parameters. PMID:26049458
Qing, Si-han; Chang, Yun-feng; Dong, Xiao-ai; Li, Yuan; Chen, Xiao-gang; Shu, Yong-kang; Deng, Zhen-hua
2013-10-01
To establish the mathematical models of stature estimation for Sichuan Han female with measurement of lumbar vertebrae by X-ray to provide essential data for forensic anthropology research. The samples, 206 Sichuan Han females, were divided into three groups including group A, B and C according to the ages. Group A (206 samples) consisted of all ages, group B (116 samples) were 20-45 years old and 90 samples over 45 years old were group C. All the samples were examined lumbar vertebrae through CR technology, including the parameters of five centrums (L1-L5) as anterior border, posterior border and central heights (x1-x15), total central height of lumbar spine (x16), and the real height of every sample. The linear regression analysis was produced using the parameters to establish the mathematical models of stature estimation. Sixty-two trained subjects were tested to verify the accuracy of the mathematical models. The established mathematical models by hypothesis test of linear regression equation model were statistically significant (P<0.05). The standard errors of the equation were 2.982-5.004 cm, while correlation coefficients were 0.370-0.779 and multiple correlation coefficients were 0.533-0.834. The return tests of the highest correlation coefficient and multiple correlation coefficient of each group showed that the highest accuracy of the multiple regression equation, y = 100.33 + 1.489 x3 - 0.548 x6 + 0.772 x9 + 0.058 x12 + 0.645 x15, in group A were 80.6% (+/- lSE) and 100% (+/- 2SE). The established mathematical models in this study could be applied for the stature estimation for Sichuan Han females.
Roy, Banibrata; Ripstein, Ira; Perry, Kyle; Cohen, Barry
2016-01-01
To determine whether the pre-medical Grade Point Average (GPA), Medical College Admission Test (MCAT), Internal examinations (Block) and National Board of Medical Examiners (NBME) scores are correlated with and predict the Medical Council of Canada Qualifying Examination Part I (MCCQE-1) scores. Data from 392 admitted students in the graduating classes of 2010-2013 at University of Manitoba (UofM), College of Medicine was considered. Pearson's correlation to assess the strength of the relationship, multiple linear regression to estimate MCCQE-1 score and stepwise linear regression to investigate the amount of variance were employed. Complete data from 367 (94%) students were studied. The MCCQE-1 had a moderate-to-large positive correlation with NBME scores and Block scores but a low correlation with GPA and MCAT scores. The multiple linear regression model gives a good estimate of the MCCQE-1 (R2 =0.604). Stepwise regression analysis demonstrated that 59.2% of the variation in the MCCQE-1 was accounted for by the NBME, but only 1.9% by the Block exams, and negligible variation came from the GPA and the MCAT. Amongst all the examinations used at UofM, the NBME is most closely correlated with MCCQE-1.
NASA Astrophysics Data System (ADS)
Guarnieri, R.; Padilha, L.; Guarnieri, F.; Echer, E.; Makita, K.; Pinheiro, D.; Schuch, A.; Boeira, L.; Schuch, N.
Ultraviolet radiation type B (UV-B 280-315nm) is well known by its damage to life on Earth, including the possibility of causing skin cancer in humans. However, the atmo- spheric ozone has absorption bands in this spectral radiation, reducing its incidence on Earth's surface. Therefore, the ozone amount is one of the parameters, besides clouds, aerosols, solar zenith angles, altitude, albedo, that determine the UV-B radia- tion intensity reaching the Earth's surface. The total ozone column, in Dobson Units, determined by TOMS spectrometer on board of a NASA satellite, and UV-B radiation measurements obtained by a UV-B radiometer model MS-210W (Eko Instruments) were correlated. The measurements were obtained at the Observatório Espacial do Sul - Instituto Nacional de Pesquisas Espaciais (OES/CRSPE/INPE-MCT) coordinates: Lat. 29.44oS, Long. 53.82oW. The correlations were made using UV-B measurements in fixed solar zenith angles and only days with clear sky were selected in a period from July 1999 to December 2001. Moreover, the mathematic behavior of correlation in dif- ferent angles was observed, and correlation coefficients were determined by linear and first order exponential fits. In both fits, high correlation coefficients values were ob- tained, and the difference between linear and exponential fit can be considered small.
NASA Technical Reports Server (NTRS)
Schuecker, Clara; Davila, Carlos G.; Pettermann, Heinz E.
2008-01-01
The present work is concerned with modeling the non-linear response of fiber reinforced polymer laminates. Recent experimental data suggests that the non-linearity is not only caused by matrix cracking but also by matrix plasticity due to shear stresses. To capture the effects of those two mechanisms, a model combining a plasticity formulation with continuum damage has been developed to simulate the non-linear response of laminates under plane stress states. The model is used to compare the predicted behavior of various laminate lay-ups to experimental data from the literature by looking at the degradation of axial modulus and Poisson s ratio of the laminates. The influence of residual curing stresses and in-situ effect on the predicted response is also investigated. It is shown that predictions of the combined damage/plasticity model, in general, correlate well with the experimental data. The test data shows that there are two different mechanisms that can have opposite effects on the degradation of the laminate Poisson s ratio which is captured correctly by the damage/plasticity model. Residual curing stresses are found to have a minor influence on the predicted response for the cases considered here. Some open questions remain regarding the prediction of damage onset.
Simplified planar model of a car steering system with rack and pinion and McPherson suspension
NASA Astrophysics Data System (ADS)
Knapczyk, J.; Kucybała, P.
2016-09-01
The paper presents the analysis and optimization of steering system with rack and pinion and McPherson suspension using spatial model and equivalent simplified planar model. The dimension of the steering linkage that give minimum steering error can be estimated using planar model. The steering error is defined as the difference between the actual angle made by the outer front wheel during steering manoeuvers and the calculated angle for the same wheel based on the Ackerman principle. For a given linear rack displacement, a specified steering arms angular displacements are determined while simultaneously ensuring best transmission angle characteristics (i) without and (ii) with imposing linear correlation between input and output. Numerical examples are used to illustrate the proposed method.
Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W.; Xia, Yinglin; Tu, Xin M.
2011-01-01
Summary The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. PMID:21671252
Sexual Response Models: Toward a More Flexible Pattern of Women's Sexuality.
Ferenidou, Fotini; Kirana, Paraskevi-Sofia; Fokas, Konstantinos; Hatzichristou, Dimitrios; Athanasiadis, Loukas
2016-09-01
Recent research suggests that none of the current theoretical models can sufficiently describe women's sexual response, because several factors and situations can influence this. To explore individual variations of a sexual model that describes women's sexual responses and to assess the association of endorsement of that model with sexual dysfunctions and reasons to engage in sexual activity. A sample of 157 randomly selected hospital employees completed self-administered questionnaires. Two models were developed: one merged the Master and Johnson model with the Kaplan model (linear) and the other was the Basson model (circular). Sexual function was evaluated by the Female Sexual Function Index and the Brief Sexual Symptom Checklist for Women. The Reasons for Having Sex Questionnaire was administered to investigate the reasons for which women have sex. Women reported that their current sexual experiences were at times consistent with the linear and circular models (66.9%), only the linear model (27%), only the circular model (5.4%), and neither model (0.7%). When the groups were reconfigured to the group that endorsed more than 5 of 10 sexual experiences, 64.3% of women endorsed the linear model, 20.4% chose the linear and circular models, 14.6% chose the circular model, and 0.7% selected neither. The Female Sexual Function Index, demographic factors, having sex for insecurity reasons, and sexual satisfaction correlated with the endorsement of a sexual response model. When these factors were entered in a stepwise logistic regression analysis, only the Female Sexual Function Index and having sex for insecurity reasons maintained a significant association with the sexual response model. The present study emphasizes the heterogeneity of female sexuality, with most of the sample reporting alternating between the linear and circular models. Sexual dysfunctions and having sex for insecurity reasons were associated with the Basson model. Copyright © 2016 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
Algebraic approach to electronic spectroscopy and dynamics.
Toutounji, Mohamad
2008-04-28
Lie algebra, Zassenhaus, and parameter differentiation techniques are utilized to break up the exponential of a bilinear Hamiltonian operator into a product of noncommuting exponential operators by the virtue of the theory of Wei and Norman [J. Math. Phys. 4, 575 (1963); Proc. Am. Math. Soc., 15, 327 (1964)]. There are about three different ways to find the Zassenhaus exponents, namely, binomial expansion, Suzuki formula, and q-exponential transformation. A fourth, and most reliable method, is provided. Since linearly displaced and distorted (curvature change upon excitation/emission) Hamiltonian and spin-boson Hamiltonian may be classified as bilinear Hamiltonians, the presented algebraic algorithm (exponential operator disentanglement exploiting six-dimensional Lie algebra case) should be useful in spin-boson problems. The linearly displaced and distorted Hamiltonian exponential is only treated here. While the spin-boson model is used here only as a demonstration of the idea, the herein approach is more general and powerful than the specific example treated. The optical linear dipole moment correlation function is algebraically derived using the above mentioned methods and coherent states. Coherent states are eigenvectors of the bosonic lowering operator a and not of the raising operator a(+). While exp(a(+)) translates coherent states, exp(a(+)a(+)) operation on coherent states has always been a challenge, as a(+) has no eigenvectors. Three approaches, and the results, of that operation are provided. Linear absorption spectra are derived, calculated, and discussed. The linear dipole moment correlation function for the pure quadratic coupling case is expressed in terms of Legendre polynomials to better show the even vibronic transitions in the absorption spectrum. Comparison of the present line shapes to those calculated by other methods is provided. Franck-Condon factors for both linear and quadratic couplings are exactly accounted for by the herein calculated linear absorption spectra. This new methodology should easily pave the way to calculating the four-point correlation function, F(tau(1),tau(2),tau(3),tau(4)), of which the optical nonlinear response function may be procured, as evaluating F(tau(1),tau(2),tau(3),tau(4)) is only evaluating the optical linear dipole moment correlation function iteratively over different time intervals, which should allow calculating various optical nonlinear temporal/spectral signals.
X-56A MUTT: Aeroservoelastic Modeling
NASA Technical Reports Server (NTRS)
Ouellette, Jeffrey A.
2015-01-01
For the NASA X-56a Program, Armstrong Flight Research Center has been developing a set of linear states space models that integrate the flight dynamics and structural dynamics. These high order models are needed for the control design, control evaluation, and test input design. The current focus has been on developing stiff wing models to validate the current modeling approach. The extension of the modeling approach to the flexible wings requires only a change in the structural model. Individual subsystems models (actuators, inertial properties, etc.) have been validated by component level ground tests. Closed loop simulation of maneuvers designed to validate the flight dynamics of these models correlates very well flight test data. The open loop structural dynamics are also shown to correlate well to the flight test data.
NASA Astrophysics Data System (ADS)
Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott
2017-09-01
We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models | a physical model solely based on physically measured display characteristics and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICT-CP), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.
Determination of suitable drying curve model for bread moisture loss during baking
NASA Astrophysics Data System (ADS)
Soleimani Pour-Damanab, A. R.; Jafary, A.; Rafiee, S.
2013-03-01
This study presents mathematical modelling of bread moisture loss or drying during baking in a conventional bread baking process. In order to estimate and select the appropriate moisture loss curve equation, 11 different models, semi-theoretical and empirical, were applied to the experimental data and compared according to their correlation coefficients, chi-squared test and root mean square error which were predicted by nonlinear regression analysis. Consequently, of all the drying models, a Page model was selected as the best one, according to the correlation coefficients, chi-squared test, and root mean square error values and its simplicity. Mean absolute estimation error of the proposed model by linear regression analysis for natural and forced convection modes was 2.43, 4.74%, respectively.
Valence-bond theory of linear Hubbard and Pariser-Parr-Pople models
NASA Astrophysics Data System (ADS)
Soos, Z. G.; Ramasesha, S.
1984-05-01
The ground and low-lying states of finite quantum-cell models with one state per site are obtained exactly through a real-space basis of valence-bond (VB) diagrams that explicitly conserve the total spin. Regular and alternating Hubbard and Pariser-Parr-Pople (PPP) chains and rings with Ne electrons on N(<=12) sites are extrapolated to infinite arrays. The ground-state energy and optical gap of regular U=4|t| Hubbard chains agree with exact results, suggesting comparable accuracy for alternating Hubbard and PPP models, but differ from mean-field results. Molecular PPP parameters describe well the excitations of finite polyenes, odd polyene ions, linear cyanine dyes, and slightly overestimate the absorption peaks in polyacetylene (CH)x. Molecular correlations contrast sharply with uncorrelated descriptions of topological solitons, which are modeled by regular polyene radicals and their ions for both wide and narrow alternation crossovers. Neutral solitons have no midgap absorption and negative spin densities, while the intensity of the in-gap excitation of charged solitons is not enhanced. The properties of correlated states in quantum-cell models with one valence state per site are discussed in the adiabatic limit for excited-state geometries and instabilities to dimerization.
Doyle, Jennifer L; Berry, Donagh P; Walsh, Siobhan W; Veerkamp, Roel F; Evans, Ross D; Carthy, Tara R
2018-05-04
Linear type traits describing the skeletal, muscular, and functional characteristics of an animal are routinely scored on live animals in both the dairy and beef cattle industries. Previous studies have demonstrated that genetic parameters for certain performance traits may differ between breeds; no study, however, has attempted to determine if differences exist in genetic parameters of linear type traits among breeds or sexes. Therefore, the objective of the present study was to determine if genetic covariance components for linear type traits differed among five contrasting cattle breeds, and to also investigate if these components differed by sex. A total of 18 linear type traits scored on 3,356 Angus (AA), 31,049 Charolais (CH), 3,004 Hereford (HE), 35,159 Limousin (LM), and 8,632 Simmental (SI) were used in the analysis. Data were analyzed using animal linear mixed models which included the fixed effects of sex of the animal (except in the investigation into the presence of sexual dimorphism), age at scoring, parity of the dam, and contemporary group of herd-date of scoring. Differences (P < 0.05) in heritability estimates, between at least two breeds, existed for 13 out of 18 linear type traits. Differences (P < 0.05) also existed between the pairwise within-breed genetic correlations among the linear type traits. Overall, the linear type traits in the continental breeds (i.e., CH, LM, SI) tended to have similar heritability estimates to each other as well as similar genetic correlations among the same pairwise traits, as did the traits in the British breeds (i.e., AA, HE). The correlation between a linear function of breeding values computed conditional on covariance parameters estimated from the CH breed with a linear function of breeding values computed conditional on covariance parameters estimated from the other breeds was estimated. Replacing the genetic covariance components estimated in the CH breed with those of the LM had least effect but the impact was considerable when the genetic covariance components of the AA were used. Genetic correlations between the same linear type traits in the two sexes were all close to unity (≥0.90) suggesting little advantage in considering these as separate traits for males and females. Results for the present study indicate the potential increase in accuracy of estimated breeding value prediction from considering, at least, the British breed traits separate to continental breed traits.
Kaimakamis, Evangelos; Tsara, Venetia; Bratsas, Charalambos; Sichletidis, Lazaros; Karvounis, Charalambos; Maglaveras, Nikolaos
2016-01-01
Obstructive Sleep Apnea (OSA) is a common sleep disorder requiring the time/money consuming polysomnography for diagnosis. Alternative methods for initial evaluation are sought. Our aim was the prediction of Apnea-Hypopnea Index (AHI) in patients potentially suffering from OSA based on nonlinear analysis of respiratory biosignals during sleep, a method that is related to the pathophysiology of the disorder. Patients referred to a Sleep Unit (135) underwent full polysomnography. Three nonlinear indices (Largest Lyapunov Exponent, Detrended Fluctuation Analysis and Approximate Entropy) extracted from two biosignals (airflow from a nasal cannula, thoracic movement) and one linear derived from Oxygen saturation provided input to a data mining application with contemporary classification algorithms for the creation of predictive models for AHI. A linear regression model presented a correlation coefficient of 0.77 in predicting AHI. With a cutoff value of AHI = 8, the sensitivity and specificity were 93% and 71.4% in discrimination between patients and normal subjects. The decision tree for the discrimination between patients and normal had sensitivity and specificity of 91% and 60%, respectively. Certain obtained nonlinear values correlated significantly with commonly accepted physiological parameters of people suffering from OSA. We developed a predictive model for the presence/severity of OSA using a simple linear equation and additional decision trees with nonlinear features extracted from 3 respiratory recordings. The accuracy of the methodology is high and the findings provide insight to the underlying pathophysiology of the syndrome. Reliable predictions of OSA are possible using linear and nonlinear indices from only 3 respiratory signals during sleep. The proposed models could lead to a better study of the pathophysiology of OSA and facilitate initial evaluation/follow up of suspected patients OSA utilizing a practical low cost methodology. ClinicalTrials.gov NCT01161381.
Flora, David B.; LaBrish, Cathy; Chalmers, R. Philip
2011-01-01
We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables, and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables. PMID:22403561
Mi, Jia; Li, Jie; Zhang, Qinglu; Wang, Xing; Liu, Hongyu; Cao, Yanlu; Liu, Xiaoyan; Sun, Xiao; Shang, Mengmeng; Liu, Qing
2016-01-01
Abstract The purpose of the study was to establish a mathematical model for correlating the combination of ultrasonography and noncontrast helical computerized tomography (NCHCT) with the total energy of Holmium laser lithotripsy. In this study, from March 2013 to February 2014, 180 patients with single urinary calculus were examined using ultrasonography and NCHCT before Holmium laser lithotripsy. The calculus location and size, acoustic shadowing (AS) level, twinkling artifact intensity (TAI), and CT value were all documented. The total energy of lithotripsy (TEL) and the calculus composition were also recorded postoperatively. Data were analyzed using Spearman's rank correlation coefficient, with the SPSS 17.0 software package. Multiple linear regression was also used for further statistical analysis. A significant difference in the TEL was observed between renal calculi and ureteral calculi (r = –0.565, P < 0.001), and there was a strong correlation between the calculus size and the TEL (r = 0.675, P < 0.001). The difference in the TEL between the calculi with and without AS was highly significant (r = 0.325, P < 0.001). The CT value of the calculi was significantly correlated with the TEL (r = 0.386, P < 0.001). A correlation between the TAI and TEL was also observed (r = 0.391, P < 0.001). Multiple linear regression analysis revealed that the location, size, and TAI of the calculi were related to the TEL, and the location and size were statistically significant predictors (adjusted r2 = 0.498, P < 0.001). A mathematical model correlating the combination of ultrasonography and NCHCT with TEL was established; this model may provide a foundation to guide the use of energy in Holmium laser lithotripsy. The TEL can be estimated by the location, size, and TAI of the calculus. PMID:27930563
NASA Astrophysics Data System (ADS)
Woldesellasse, H. T.; Marpu, P. R.; Ouarda, T.
2016-12-01
Wind is one of the crucial renewable energy sources which is expected to bring solutions to the challenges of clean energy and the global issue of climate change. A number of linear and nonlinear multivariate techniques has been used to predict the stochastic character of wind speed. A wind forecast with good accuracy has a positive impact on the reduction of electricity system cost and is essential for the effective grid management. Over the past years, few studies have been done on the assessment of teleconnections and its possible effects on the long-term wind speed variability in the UAE region. In this study Nonlinear Canonical Correlation Analysis (NLCCA) method is applied to study the relationship between global climate oscillation indices and meteorological variables, with a major emphasis on wind speed and wind direction, of Abu Dhabi, UAE. The wind dataset was obtained from six ground stations. The first mode of NLCCA is capable of capturing the nonlinear mode of the climate indices at different seasons, showing the symmetry between the warm states and the cool states. The strength of the nonlinear canonical correlation between the two sets of variables varies with the lead/lag time. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE) and Mean absolute error (MAE). The results indicated that NLCCA models provide more accurate information about the nonlinear intrinsic behaviour of the dataset of variables than linear CCA model in terms of the correlation and root mean square error. Key words: Nonlinear Canonical Correlation Analysis (NLCCA), Canonical Correlation Analysis, Neural Network, Climate Indices, wind speed, wind direction
Regression Models For Multivariate Count Data
Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei
2016-01-01
Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data. PMID:28348500
Regression Models For Multivariate Count Data.
Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei
2017-01-01
Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data.
Posa, Mihalj; Pilipović, Ana; Lalić, Mladena; Popović, Jovan
2011-02-15
Linear dependence between temperature (t) and retention coefficient (k, reversed phase HPLC) of bile acids is obtained. Parameters (a, intercept and b, slope) of the linear function k=f(t) highly correlate with bile acids' structures. Investigated bile acids form linear congeneric groups on a principal component (calculated from k=f(t)) score plot that are in accordance with conformations of the hydroxyl and oxo groups in a bile acid steroid skeleton. Partition coefficient (K(p)) of nitrazepam in bile acids' micelles is investigated. Nitrazepam molecules incorporated in micelles show modified bioavailability (depo effect, higher permeability, etc.). Using multiple linear regression method QSAR models of nitrazepams' partition coefficient, K(p) are derived on the temperatures of 25°C and 37°C. For deriving linear regression models on both temperatures experimentally obtained lipophilicity parameters are included (PC1 from data k=f(t)) and in silico descriptors of the shape of a molecule while on the higher temperature molecular polarisation is introduced. This indicates the fact that the incorporation mechanism of nitrazepam in BA micelles changes on the higher temperatures. QSAR models are derived using partial least squares method as well. Experimental parameters k=f(t) are shown to be significant predictive variables. Both QSAR models are validated using cross validation and internal validation method. PLS models have slightly higher predictive capability than MLR models. Copyright © 2010 Elsevier B.V. All rights reserved.
[Dental arch form reverting by four-point method].
Pan, Xiao-Gang; Qian, Yu-Fen; Weng, Si-En; Feng, Qi-Ping; Yu, Quan
2008-04-01
To explore a simple method of reverting individual dental arch form template for wire bending. Individual dental arch form was reverted by four-point method. By defining central point of bracket on bilateral lower second premolar and first molar, certain individual dental arch form could be generated. The arch form generating procedure was then be developed to computer software for printing arch form. Four-point method arch form was evaluated by comparing with direct model measurement on linear and angular parameters. The accuracy and reproducibility were assessed by paired t test and concordance correlation coefficient with Medcalc 9.3 software package. The arch form by four-point method was of good accuracy and reproducibility (linear concordance correlation coefficient was 0.9909 and angular concordance correlation coefficient was 0.8419). The dental arch form reverted by four-point method could reproduce the individual dental arch form.
Correlation of Respirator Fit Measured on Human Subjects and a Static Advanced Headform
Bergman, Michael S.; He, Xinjian; Joseph, Michael E.; Zhuang, Ziqing; Heimbuch, Brian K.; Shaffer, Ronald E.; Choe, Melanie; Wander, Joseph D.
2015-01-01
This study assessed the correlation of N95 filtering face-piece respirator (FFR) fit between a Static Advanced Headform (StAH) and 10 human test subjects. Quantitative fit evaluations were performed on test subjects who made three visits to the laboratory. On each visit, one fit evaluation was performed on eight different FFRs of various model/size variations. Additionally, subject breathing patterns were recorded. Each fit evaluation comprised three two-minute exercises: “Normal Breathing,” “Deep Breathing,” and again “Normal Breathing.” The overall test fit factors (FF) for human tests were recorded. The same respirator samples were later mounted on the StAH and the overall test manikin fit factors (MFF) were assessed utilizing the recorded human breathing patterns. Linear regression was performed on the mean log10-transformed FF and MFF values to assess the relationship between the values obtained from humans and the StAH. This is the first study to report a positive correlation of respirator fit between a headform and test subjects. The linear regression by respirator resulted in R2 = 0.95, indicating a strong linear correlation between FF and MFF. For all respirators the geometric mean (GM) FF values were consistently higher than those of the GM MFF. For 50% of respirators, GM FF and GM MFF values were significantly different between humans and the StAH. For data grouped by subject/respirator combinations, the linear regression resulted in R2 = 0.49. A weaker correlation (R2 = 0.11) was found using only data paired by subject/respirator combination where both the test subject and StAH had passed a real-time leak check before performing the fit evaluation. For six respirators, the difference in passing rates between the StAH and humans was < 20%, while two respirators showed a difference of 29% and 43%. For data by test subject, GM FF and GM MFF values were significantly different for 40% of the subjects. Overall, the advanced headform system has potential for assessing fit for some N95 FFR model/sizes. PMID:25265037
Nonconsensus opinion model on directed networks
NASA Astrophysics Data System (ADS)
Qu, Bo; Li, Qian; Havlin, Shlomo; Stanley, H. Eugene; Wang, Huijuan
2014-11-01
Dynamic social opinion models have been widely studied on undirected networks, and most of them are based on spin interaction models that produce a consensus. In reality, however, many networks such as Twitter and the World Wide Web are directed and are composed of both unidirectional and bidirectional links. Moreover, from choosing a coffee brand to deciding who to vote for in an election, two or more competing opinions often coexist. In response to this ubiquity of directed networks and the coexistence of two or more opinions in decision-making situations, we study a nonconsensus opinion model introduced by Shao et al. [Phys. Rev. Lett. 103, 018701 (2009), 10.1103/PhysRevLett.103.018701] on directed networks. We define directionality ξ as the percentage of unidirectional links in a network, and we use the linear correlation coefficient ρ between the in-degree and out-degree of a node to quantify the relation between the in-degree and out-degree. We introduce two degree-preserving rewiring approaches which allow us to construct directed networks that can have a broad range of possible combinations of directionality ξ and linear correlation coefficient ρ and to study how ξ and ρ impact opinion competitions. We find that, as the directionality ξ or the in-degree and out-degree correlation ρ increases, the majority opinion becomes more dominant and the minority opinion's ability to survive is lowered.
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Errico, Ronald; Gelaro, Ronaldo; Kim, Jong G.
2013-01-01
Inclusion of moist physics in the linearized version of a weather forecast model is beneficial in terms of variational data assimilation. Further, it improves the capability of important tools, such as adjoint-based observation impacts and sensitivity studies. A linearized version of the relaxed Arakawa-Schubert (RAS) convection scheme has been developed and tested in NASA's Goddard Earth Observing System data assimilation tools. A previous study of the RAS scheme showed it to exhibit reasonable linearity and stability. This motivates the development of a linearization of a near-exact version of the RAS scheme. Linearized large-scale condensation is included through simple conversion of supersaturation into precipitation. The linearization of moist physics is validated against the full nonlinear model for 6- and 24-h intervals, relevant to variational data assimilation and observation impacts, respectively. For a small number of profiles, sudden large growth in the perturbation trajectory is encountered. Efficient filtering of these profiles is achieved by diagnosis of steep gradients in a reduced version of the operator of the tangent linear model. With filtering turned on, the inclusion of linearized moist physics increases the correlation between the nonlinear perturbation trajectory and the linear approximation of the perturbation trajectory. A month-long observation impact experiment is performed and the effect of including moist physics on the impacts is discussed. Impacts from moist-sensitive instruments and channels are increased. The effect of including moist physics is examined for adjoint sensitivity studies. A case study examining an intensifying Northern Hemisphere Atlantic storm is presented. The results show a significant sensitivity with respect to moisture.
Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S
2017-08-01
Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performances and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate the validity of it in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
NASA Astrophysics Data System (ADS)
Khatonabadi, Maryam; Zhang, Di; Yang, Jeffrey; DeMarco, John J.; Cagnon, Chris C.; McNitt-Gray, Michael F.
2012-03-01
Recently published AAPM Task Group 204 developed conversion coefficients that use scanner reported CTDIvol to estimate dose to the center of patient undergoing fixed tube current body exam. However, most performed CT exams use TCM to reduce dose to patients. Therefore, the purpose of this study was to investigate the correlation between organ dose and a variety of patient size metrics in adult chest CT scans that use tube current modulation (TCM). Monte Carlo simulations were performed for 32 voxelized models with contoured lungs and glandular breasts tissue, consisting of females and males. These simulations made use of patient's actual TCM data to estimate organ dose. Using image data, different size metrics were calculated, these measurements were all performed on one slice, at the level of patient's nipple. Estimated doses were normalized by scanner-reported CTDIvol and plotted versus different metrics. CTDIvol values were plotted versus different metrics to look at scanner's output versus size. The metrics performed similarly in terms of correlating with organ dose. Looking at each gender separately, for male models normalized lung dose showed a better linear correlation (r2=0.91) with effective diameter, while female models showed higher correlation (r2=0.59) with the anterior-posterior measurement. There was essentially no correlation observed between size and CTDIvol-normalized breast dose. However, a linear relationship was observed between absolute breast dose and size. Dose to lungs and breasts were consistently higher in females with similar size as males which could be due to shape and composition differences between genders in the thoracic region.
Modeling Pan Evaporation for Kuwait by Multiple Linear Regression
Almedeij, Jaber
2012-01-01
Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements of substantial continuity coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. Multiple linear regression technique is used with a procedure of variable selection for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in a reasonable agreement with observation values. PMID:23226984
Learning quadratic receptive fields from neural responses to natural stimuli.
Rajan, Kanaka; Marre, Olivier; Tkačik, Gašper
2013-07-01
Models of neural responses to stimuli with complex spatiotemporal correlation structure often assume that neurons are selective for only a small number of linear projections of a potentially high-dimensional input. In this review, we explore recent modeling approaches where the neural response depends on the quadratic form of the input rather than on its linear projection, that is, the neuron is sensitive to the local covariance structure of the signal preceding the spike. To infer this quadratic dependence in the presence of arbitrary (e.g., naturalistic) stimulus distribution, we review several inference methods, focusing in particular on two information theory-based approaches (maximization of stimulus energy and of noise entropy) and two likelihood-based approaches (Bayesian spike-triggered covariance and extensions of generalized linear models). We analyze the formal relationship between the likelihood-based and information-based approaches to demonstrate how they lead to consistent inference. We demonstrate the practical feasibility of these procedures by using model neurons responding to a flickering variance stimulus.
Shahinfar, Saleh; Mehrabani-Yeganeh, Hassan; Lucas, Caro; Kalhor, Ahmad; Kazemian, Majid; Weigel, Kent A.
2012-01-01
Developing machine learning and soft computing techniques has provided many opportunities for researchers to establish new analytical methods in different areas of science. The objective of this study is to investigate the potential of two types of intelligent learning methods, artificial neural networks and neuro-fuzzy systems, in order to estimate breeding values (EBV) of Iranian dairy cattle. Initially, the breeding values of lactating Holstein cows for milk and fat yield were estimated using conventional best linear unbiased prediction (BLUP) with an animal model. Once that was established, a multilayer perceptron was used to build ANN to predict breeding values from the performance data of selection candidates. Subsequently, fuzzy logic was used to form an NFS, a hybrid intelligent system that was implemented via a local linear model tree algorithm. For milk yield the correlations between EBV and EBV predicted by the ANN and NFS were 0.92 and 0.93, respectively. Corresponding correlations for fat yield were 0.93 and 0.93, respectively. Correlations between multitrait predictions of EBVs for milk and fat yield when predicted simultaneously by ANN were 0.93 and 0.93, respectively, whereas corresponding correlations with reference EBV for multitrait NFS were 0.94 and 0.95, respectively, for milk and fat production. PMID:22991575
Carbonell, Felix; Bellec, Pierre; Shmuel, Amir
2011-01-01
The influence of the global average signal (GAS) on functional-magnetic resonance imaging (fMRI)-based resting-state functional connectivity is a matter of ongoing debate. The global average fluctuations increase the correlation between functional systems beyond the correlation that reflects their specific functional connectivity. Hence, removal of the GAS is a common practice for facilitating the observation of network-specific functional connectivity. This strategy relies on the implicit assumption of a linear-additive model according to which global fluctuations, irrespective of their origin, and network-specific fluctuations are super-positioned. However, removal of the GAS introduces spurious negative correlations between functional systems, bringing into question the validity of previous findings of negative correlations between fluctuations in the default-mode and the task-positive networks. Here we present an alternative method for estimating global fluctuations, immune to the complications associated with the GAS. Principal components analysis was applied to resting-state fMRI time-series. A global-signal effect estimator was defined as the principal component (PC) that correlated best with the GAS. The mean correlation coefficient between our proposed PC-based global effect estimator and the GAS was 0.97±0.05, demonstrating that our estimator successfully approximated the GAS. In 66 out of 68 runs, the PC that showed the highest correlation with the GAS was the first PC. Since PCs are orthogonal, our method provides an estimator of the global fluctuations, which is uncorrelated to the remaining, network-specific fluctuations. Moreover, unlike the regression of the GAS, the regression of the PC-based global effect estimator does not introduce spurious anti-correlations beyond the decrease in seed-based correlation values allowed by the assumed additive model. After regressing this PC-based estimator out of the original time-series, we observed robust anti-correlations between resting-state fluctuations in the default-mode and the task-positive networks. We conclude that resting-state global fluctuations and network-specific fluctuations are uncorrelated, supporting a Resting-State Linear-Additive Model. In addition, we conclude that the network-specific resting-state fluctuations of the default-mode and task-positive networks show artifact-free anti-correlations.
A human visual model-based approach of the visual attention and performance evaluation
NASA Astrophysics Data System (ADS)
Le Meur, Olivier; Barba, Dominique; Le Callet, Patrick; Thoreau, Dominique
2005-03-01
In this paper, a coherent computational model of visual selective attention for color pictures is described and its performances are precisely evaluated. The model based on some important behaviours of the human visual system is composed of four parts: visibility, perception, perceptual grouping and saliency map construction. This paper focuses mainly on its performances assessment by achieving extended subjective and objective comparisons with real fixation points captured by an eye-tracking system used by the observers in a task-free viewing mode. From the knowledge of the ground truth, qualitatively and quantitatively comparisons have been made in terms of the measurement of the linear correlation coefficient (CC) and of the Kulback Liebler divergence (KL). On a set of 10 natural color images, the results show that the linear correlation coefficient and the Kullback Leibler divergence are of about 0.71 and 0.46, respectively. CC and Kl measures with this model are respectively improved by about 4% and 7% compared to the best model proposed by L.Itti. Moreover, by comparing the ability of our model to predict eye movements produced by an average observer, we can conclude that our model succeeds quite well in predicting the spatial locations of the most important areas of the image content.
Dopamine-dependent non-linear correlation between subthalamic rhythms in Parkinson's disease.
Marceglia, S; Foffani, G; Bianchi, A M; Baselli, G; Tamma, F; Egidi, M; Priori, A
2006-03-15
The basic information architecture in the basal ganglia circuit is under debate. Whereas anatomical studies quantify extensive convergence/divergence patterns in the circuit, suggesting an information sharing scheme, neurophysiological studies report an absence of linear correlation between single neurones in normal animals, suggesting a segregated parallel processing scheme. In 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-treated monkeys and in parkinsonian patients single neurones become linearly correlated, thus leading to a loss of segregation between neurones. Here we propose a possible integrative solution to this debate, by extending the concept of functional segregation from the cellular level to the network level. To this end, we recorded local field potentials (LFPs) from electrodes implanted for deep brain stimulation (DBS) in the subthalamic nucleus (STN) of parkinsonian patients. By applying bispectral analysis, we found that in the absence of dopamine stimulation STN LFP rhythms became non-linearly correlated, thus leading to a loss of segregation between rhythms. Non-linear correlation was particularly consistent between the low-beta rhythm (13-20 Hz) and the high-beta rhythm (20-35 Hz). Levodopa administration significantly decreased these non-linear correlations, therefore increasing segregation between rhythms. These results suggest that the extensive convergence/divergence in the basal ganglia circuit is physiologically necessary to sustain LFP rhythms distributed in large ensembles of neurones, but is not sufficient to induce correlated firing between neurone pairs. Conversely, loss of dopamine generates pathological linear correlation between neurone pairs, alters the patterns within LFP rhythms, and induces non-linear correlation between LFP rhythms operating at different frequencies. The pathophysiology of information processing in the human basal ganglia therefore involves not only activities of individual rhythms, but also interactions between rhythms.
Dopamine-dependent non-linear correlation between subthalamic rhythms in Parkinson's disease
Marceglia, S; Foffani, G; Bianchi, A M; Baselli, G; Tamma, F; Egidi, M; Priori, A
2006-01-01
The basic information architecture in the basal ganglia circuit is under debate. Whereas anatomical studies quantify extensive convergence/divergence patterns in the circuit, suggesting an information sharing scheme, neurophysiological studies report an absence of linear correlation between single neurones in normal animals, suggesting a segregated parallel processing scheme. In 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-treated monkeys and in parkinsonian patients single neurones become linearly correlated, thus leading to a loss of segregation between neurones. Here we propose a possible integrative solution to this debate, by extending the concept of functional segregation from the cellular level to the network level. To this end, we recorded local field potentials (LFPs) from electrodes implanted for deep brain stimulation (DBS) in the subthalamic nucleus (STN) of parkinsonian patients. By applying bispectral analysis, we found that in the absence of dopamine stimulation STN LFP rhythms became non-linearly correlated, thus leading to a loss of segregation between rhythms. Non-linear correlation was particularly consistent between the low-beta rhythm (13–20 Hz) and the high-beta rhythm (20–35 Hz). Levodopa administration significantly decreased these non-linear correlations, therefore increasing segregation between rhythms. These results suggest that the extensive convergence/divergence in the basal ganglia circuit is physiologically necessary to sustain LFP rhythms distributed in large ensembles of neurones, but is not sufficient to induce correlated firing between neurone pairs. Conversely, loss of dopamine generates pathological linear correlation between neurone pairs, alters the patterns within LFP rhythms, and induces non-linear correlation between LFP rhythms operating at different frequencies. The pathophysiology of information processing in the human basal ganglia therefore involves not only activities of individual rhythms, but also interactions between rhythms. PMID:16410285
Comparison of Dst Forecast Models for Intense Geomagnetic Storms
NASA Technical Reports Server (NTRS)
Ji, Eun-Young; Moon, Y.-J.; Gopalswamy, N.; Lee, D.-H.
2012-01-01
We have compared six disturbance storm time (Dst) forecast models using 63 intense geomagnetic storms (Dst <=100 nT) that occurred from 1998 to 2006. For comparison, we estimated linear correlation coefficients and RMS errors between the observed Dst data and the predicted Dst during the geomagnetic storm period as well as the difference of the value of minimum Dst (Delta Dst(sub min)) and the difference in the absolute value of Dst minimum time (Delta t(sub Dst)) between the observed and the predicted. As a result, we found that the model by Temerin and Li gives the best prediction for all parameters when all 63 events are considered. The model gives the average values: the linear correlation coefficient of 0.94, the RMS error of 14.8 nT, the Delta Dst(sub min) of 7.7 nT, and the absolute value of Delta t(sub Dst) of 1.5 hour. For further comparison, we classified the storm events into two groups according to the magnitude of Dst. We found that the model of Temerin and Lee is better than the other models for the events having 100 <= Dst < 200 nT, and three recent models (the model of Wang et al., the model of Temerin and Li, and the model of Boynton et al.) are better than the other three models for the events having Dst <= 200 nT.
Robust Models for Optic Flow Coding in Natural Scenes Inspired by Insect Biology
Brinkworth, Russell S. A.; O'Carroll, David C.
2009-01-01
The extraction of accurate self-motion information from the visual world is a difficult problem that has been solved very efficiently by biological organisms utilizing non-linear processing. Previous bio-inspired models for motion detection based on a correlation mechanism have been dogged by issues that arise from their sensitivity to undesired properties of the image, such as contrast, which vary widely between images. Here we present a model with multiple levels of non-linear dynamic adaptive components based directly on the known or suspected responses of neurons within the visual motion pathway of the fly brain. By testing the model under realistic high-dynamic range conditions we show that the addition of these elements makes the motion detection model robust across a large variety of images, velocities and accelerations. Furthermore the performance of the entire system is more than the incremental improvements offered by the individual components, indicating beneficial non-linear interactions between processing stages. The algorithms underlying the model can be implemented in either digital or analog hardware, including neuromorphic analog VLSI, but defy an analytical solution due to their dynamic non-linear operation. The successful application of this algorithm has applications in the development of miniature autonomous systems in defense and civilian roles, including robotics, miniature unmanned aerial vehicles and collision avoidance sensors. PMID:19893631
How does non-linear dynamics affect the baryon acoustic oscillation?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugiyama, Naonori S.; Spergel, David N., E-mail: nao.s.sugiyama@gmail.com, E-mail: dns@astro.princeton.edu
2014-02-01
We study the non-linear behavior of the baryon acoustic oscillation in the power spectrum and the correlation function by decomposing the dark matter perturbations into the short- and long-wavelength modes. The evolution of the dark matter fluctuations can be described as a global coordinate transformation caused by the long-wavelength displacement vector acting on short-wavelength matter perturbation undergoing non-linear growth. Using this feature, we investigate the well known cancellation of the high-k solutions in the standard perturbation theory. While the standard perturbation theory naturally satisfies the cancellation of the high-k solutions, some of the recently proposed improved perturbation theories do notmore » guarantee the cancellation. We show that this cancellation clarifies the success of the standard perturbation theory at the 2-loop order in describing the amplitude of the non-linear power spectrum even at high-k regions. We propose an extension of the standard 2-loop level perturbation theory model of the non-linear power spectrum that more accurately models the non-linear evolution of the baryon acoustic oscillation than the standard perturbation theory. The model consists of simple and intuitive parts: the non-linear evolution of the smoothed power spectrum without the baryon acoustic oscillations and the non-linear evolution of the baryon acoustic oscillations due to the large-scale velocity of dark matter and due to the gravitational attraction between dark matter particles. Our extended model predicts the smoothing parameter of the baryon acoustic oscillation peak at z = 0.35 as ∼ 7.7Mpc/h and describes the small non-linear shift in the peak position due to the galaxy random motions.« less
Bernardo, R
1996-11-01
Best linear unbiased prediction (BLUP) has been found to be useful in maize (Zea mays L.) breeding. The advantage of including both testcross additive and dominance effects (Intralocus Model) in BLUP, rather than only testcross additive effects (Additive Model), has not been clearly demonstrated. The objective of this study was to compare the usefulness of Intralocus and Additive Models for BLUP of maize single-cross performance. Multilocation data from 1990 to 1995 were obtained from the hybrid testing program of Limagrain Genetics. Grain yield, moisture, stalk lodging, and root lodging of untested single crosses were predicted from (1) the performance of tested single crosses and (2) known genetic relationships among the parental inbreds. Correlations between predicted and observed performance were obtained with a delete-one cross-validation procedure. For the Intralocus Model, the correlations ranged from 0.50 to 0.66 for yield, 0.88 to 0.94 for moisture, 0.47 to 0.69 for stalk lodging, and 0.31 to 0.45 for root lodging. The BLUP procedure was consistently more effective with the Intralocus Model than with the Additive Model. When the Additive Model was used instead of the Intralocus Model, the reductions in the correlation were largest for root lodging (0.06-0.35), smallest for moisture (0.00-0.02), and intermediate for yield (0.02-0.06) and stalk lodging (0.02-0.08). The ratio of dominance variance (v D) to total genetic variance (v G) was highest for root lodging (0.47) and lowest for moisture (0.10). The Additive Model may be used if prior information indicates that VD for a given trait has little contribution to VG. Otherwise, the continued use of the Intralocus Model for BLUP of single-cross performance is recommended.
Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H
2015-11-30
We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
Separate-channel analysis of two-channel microarrays: recovering inter-spot information.
Smyth, Gordon K; Altman, Naomi S
2013-05-26
Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intraspot correlation. A new separate channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses. The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
Forsberg, Flemming; Ro, Raymond J.; Fox, Traci B; Liu, Ji-Bin; Chiou, See-Ying; Potoczek, Magdalena; Goldberg, Barry B
2010-01-01
The purpose of this study was to prospectively compare noninvasive, quantitative measures of vascularity obtained from 4 contrast enhanced ultrasound (US) techniques to 4 invasive immunohistochemical markers of tumor angiogenesis in a large group of murine xenografts. Glioma (C6) or breast cancer (NMU) cells were implanted in 144 rats. The contrast agent Optison (GE Healthcare, Princeton, NJ) was injected in a tail vein (dose: 0.4ml/kg). Power Doppler imaging (PDI), pulse-subtraction harmonic imaging (PSHI), flash-echo imaging (FEI), and Microflow imaging (MFI; a technique creating maximum intensity projection images over time) was performed with an Aplio scanner (Toshiba America Medical Systems, Tustin, CA) and a 7.5 MHz linear array. Fractional tumor neovascularity was calculated from digital clips of contrast US, while the relative area stained was calculated from specimens. Results were compared using a factorial, repeated measures ANOVA, linear regression and z-tests. The tortuous morphology of tumor neovessels was visualized better with MFI than with the other US modes. Cell line, implantation method and contrast US imaging technique were significant parameters in the ANOVA model (p<0.05). The strongest correlation determined by linear regression in the C6 model was between PSHI and percent area stained with CD31 (r=0.37, p<0.0001). In the NMU model the strongest correlation was between FEI and COX-2 (r=0.46, p<0.0001). There were no statistically significant differences between correlations obtained with the various US methods (p>0.05). In conclusion, the largest study of contrast US of murine xenografts to date has been conducted and quantitative contrast enhanced US measures of tumor neovascularity in glioma and breast cancer xenograft models appear to provide a noninvasive marker for angiogenesis; although the best method for monitoring angiogenesis was not conclusively established. PMID:21144542
Decorrelation of Neural-Network Activity by Inhibitory Feedback
Einevoll, Gaute T.; Diesmann, Markus
2012-01-01
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II). PMID:23133368
NASA Astrophysics Data System (ADS)
Most, S.; Nowak, W.; Bijeljic, B.
2014-12-01
Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We are investigating after what transport distances can we observe: A statistical dependence between increments, that can be modelled as an order-k Markov process, boils down to order 1. This would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start. A bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW). Complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
Gene set analysis using variance component tests.
Huang, Yen-Tsung; Lin, Xihong
2013-06-28
Gene set analyses have become increasingly important in genomic research, as many complex diseases are contributed jointly by alterations of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to tackle this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects by assuming a common distribution for regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrices and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). We develop a gene set analyses method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and global test in both simulation and a diabetes microarray data.
Liu, Yan; Cai, Wensheng; Shao, Xueguang
2016-12-05
Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. For most of calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, named as master and slave instrument, respectively. In this work, a method named as linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for the samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. The fact makes the coefficients of the linear models constructed by the spectra measured on different instruments are similar in profile. Therefore, by using the constrained optimization method, the coefficients of the master model can be transferred into that of the slave model with a few spectra measured on slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both the datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical uses. Copyright © 2016 Elsevier B.V. All rights reserved.
Correlation and simple linear regression.
Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G
2003-06-01
In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
Correlates of early pregnancy serum brain-derived neurotrophic factor in a Peruvian population.
Yang, Na; Levey, Elizabeth; Gelaye, Bizu; Zhong, Qiu-Yue; Rondon, Marta B; Sanchez, Sixto E; Williams, Michelle A
2017-12-01
Knowledge about factors that influence serum brain-derived neurotrophic factor (BDNF) concentrations during early pregnancy is lacking. The aim of the study is to examine the correlates of early pregnancy serum BDNF concentrations. A total of 982 women attending prenatal care clinics in Lima, Peru, were recruited in early pregnancy. Pearson's correlation coefficient was calculated to evaluate the relation between BDNF concentrations and continuous covariates. Analysis of variance and generalized linear models were used to compare the unadjusted and adjusted BDNF concentrations according to categorical variables. Multivariable linear regression models were applied to determine the factors that influence early pregnancy serum BDNF concentrations. In bivariate analysis, early pregnancy serum BDNF concentrations were positively associated with maternal age (r = 0.16, P < 0.001) and early pregnancy body mass index (BMI) (r = 0.17, P < 0.001), but inversely correlated with gestational age at sample collection (r = -0.21, P < 0.001) and C-reactive protein (CRP) concentrations (r = -0.07, P < 0.05). In the multivariable linear regression model, maternal age (β = 0.11, P = 0.001), early pregnancy BMI (β = 1.58, P < 0.001), gestational age at blood collection (β = -0.33, P < 0.001), and serum CRP concentrations (β = -0.57, P = 0.002) were significantly associated with early pregnancy serum BDNF concentrations. Participants with moderate antepartum depressive symptoms (Patient Health Questionnaire-9 (PHQ-9) score ≥ 10) had lower serum BDNF concentrations compared with participants with no/mild antepartum depressive symptoms (PHQ-9 score < 10). Maternal age, early pregnancy BMI, gestational age, and the presence of moderate antepartum depressive symptoms were statistically significantly associated with early pregnancy serum BDNF concentrations in low-income Peruvian women. Biological changes of CRP during pregnancy may affect serum BDNF concentrations.
NASA Astrophysics Data System (ADS)
Hahlbeck, N.; Scales, K. L.; Hazen, E. L.; Bograd, S. J.
2016-12-01
The reduction of bycatch, or incidental capture of non-target species in a fishery, is a key objective of ecosystem-based fisheries management (EBFM) and critical to the conservation of many threatened marine species. Prediction of bycatch events is therefore of great importance to EBFM efforts. Here, bycatch of the ocean sunfish (Mola mola) and bluefin tuna (Thunnus thynnus) in the California drift gillnet fishery is modeled using a suite of remotely sensed environmental variables as predictors. Data from 8321 gillnet sets was aggregated by month to reduce zero inflation and autocorrelation among sets, and a set of a priori generalized additive models (GAMs) was created for each species based on literature review and preliminary data exploration. Each of the models was fit using a binomial family with a logit link in R, and Aikake's Information Criterion with correction (AICc) was used in the first stage of model selection. K-fold cross validation was used in the second stage of model selection and performance assessment, using the least-squares linear model of predicted vs. observed values as the performance metric. The best-performing mola model indicated a strong, nearly linear negative correlation with sea surface temperature, as well as weaker nonlinear correlations with eddy kinetic energy, chlorophyll-a concentration and rugosity. These findings are consistent with current understanding of ocean sunfish habitat use; for example, previous studies suggest seasonal movement patterns and exploitation of dynamic, highly productive areas characteristic of upwelling regions. Preliminary results from the bluefin models also indicate seasonal fluctuation and correlation with environmental variables. These models can be used with near-real time satellite data as bycatch avoidance tools for both fishers and managers, allowing for the use of more dynamic ocean management strategies to improve sustainability of the fishery.
NASA Astrophysics Data System (ADS)
Hahlbeck, N.; Scales, K. L.; Hazen, E. L.; Bograd, S. J.
2016-02-01
The reduction of bycatch, or incidental capture of non-target species in a fishery, is a key objective of ecosystem-based fisheries management (EBFM) and critical to the conservation of many threatened marine species. Prediction of bycatch events is therefore of great importance to EBFM efforts. Here, bycatch of the ocean sunfish (Mola mola) and bluefin tuna (Thunnus thynnus) in the California drift gillnet fishery is modeled using a suite of remotely sensed environmental variables as predictors. Data from 8321 gillnet sets was aggregated by month to reduce zero inflation and autocorrelation among sets, and a set of a priori generalized additive models (GAMs) was created for each species based on literature review and preliminary data exploration. Each of the models was fit using a binomial family with a logit link in R, and Aikake's Information Criterion with correction (AICc) was used in the first stage of model selection. K-fold cross validation was used in the second stage of model selection and performance assessment, using the least-squares linear model of predicted vs. observed values as the performance metric. The best-performing mola model indicated a strong, nearly linear negative correlation with sea surface temperature, as well as weaker nonlinear correlations with eddy kinetic energy, chlorophyll-a concentration and rugosity. These findings are consistent with current understanding of ocean sunfish habitat use; for example, previous studies suggest seasonal movement patterns and exploitation of dynamic, highly productive areas characteristic of upwelling regions. Preliminary results from the bluefin models also indicate seasonal fluctuation and correlation with environmental variables. These models can be used with near-real time satellite data as bycatch avoidance tools for both fishers and managers, allowing for the use of more dynamic ocean management strategies to improve sustainability of the fishery.
Testing a single regression coefficient in high dimensional linear models
Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling
2017-01-01
In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668
Testing a single regression coefficient in high dimensional linear models.
Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling
2016-11-01
In linear regression models with high dimensional data, the classical z -test (or t -test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z -test to assess the significance of each covariate. Based on the p -value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively.
NASA Astrophysics Data System (ADS)
Rachmatia, H.; Kusuma, W. A.; Hasibuan, L. S.
2017-05-01
Selection in plant breeding could be more effective and more efficient if it is based on genomic data. Genomic selection (GS) is a new approach for plant-breeding selection that exploits genomic data through a mechanism called genomic prediction (GP). Most of GP models used linear methods that ignore effects of interaction among genes and effects of higher order nonlinearities. Deep belief network (DBN), one of the architectural in deep learning methods, is able to model data in high level of abstraction that involves nonlinearities effects of the data. This study implemented DBN for developing a GP model utilizing whole-genome Single Nucleotide Polymorphisms (SNPs) as data for training and testing. The case study was a set of traits in maize. The maize dataset was acquisitioned from CIMMYT’s (International Maize and Wheat Improvement Center) Global Maize program. Based on Pearson correlation, DBN is outperformed than other methods, kernel Hilbert space (RKHS) regression, Bayesian LASSO (BL), best linear unbiased predictor (BLUP), in case allegedly non-additive traits. DBN achieves correlation of 0.579 within -1 to 1 range.
Forecasting Effusive Dynamics and Decompression Rates by Magmastatic Model at Open-vent Volcanoes.
Ripepe, Maurizio; Pistolesi, Marco; Coppola, Diego; Delle Donne, Dario; Genco, Riccardo; Lacanna, Giorgio; Laiolo, Marco; Marchetti, Emanuele; Ulivieri, Giacomo; Valade, Sébastien
2017-06-20
Effusive eruptions at open-conduit volcanoes are interpreted as reactions to a disequilibrium induced by the increase in magma supply. By comparing four of the most recent effusive eruptions at Stromboli volcano (Italy), we show how the volumes of lava discharged during each eruption are linearly correlated to the topographic positions of the effusive vents. This correlation cannot be explained by an excess of pressure within a deep magma chamber and raises questions about the actual contributions of deep magma dynamics. We derive a general model based on the discharge of a shallow reservoir and the magmastatic crustal load above the vent, to explain the linear link. In addition, we show how the drastic transition from effusive to violent explosions can be related to different decompression rates. We suggest that a gravity-driven model can shed light on similar cases of lateral effusive eruptions in other volcanic systems and can provide evidence of the roles of slow decompression rates in triggering violent paroxysmal explosive eruptions, which occasionally punctuate the effusive phases at basaltic volcanoes.
A constitutive model for the warp-weft coupled non-linear behavior of knitted biomedical textiles.
Yeoman, Mark S; Reddy, Daya; Bowles, Hellmut C; Bezuidenhout, Deon; Zilla, Peter; Franz, Thomas
2010-11-01
Knitted textiles have been used in medical applications due to their high flexibility and low tendency to fray. Their mechanics have, however, received limited attention. A constitutive model for soft tissue using a strain energy function was extended, by including shear and increasing the number and order of coefficients, to represent the non-linear warp-weft coupled mechanics of coarse textile knits under uniaxial tension. The constitutive relationship was implemented in a commercial finite element package. The model and its implementation were verified and validated for uniaxial tension and simple shear using patch tests and physical test data of uniaxial tensile tests of four very different knitted fabric structures. A genetic algorithm with step-wise increase in resolution and linear reduction in range of the search space was developed for the optimization of the fabric model coefficients. The numerically predicted stress-strain curves exhibited non-linear stiffening characteristic for fabrics. For three fabrics, the predicted mechanics correlated well with physical data, at least in one principal direction (warp or weft), and moderately in the other direction. The model exhibited limitations in approximating the linear elastic behavior of the fourth fabric. With proposals to address this limitation and to incorporate time-dependent changes in the fabric mechanics associated with tissue ingrowth, the constitutive model offers a tool for the design of tissue regenerative knit textile implants. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W; Xia, Yinglin; Zhu, Liang; Tu, Xin M
2011-09-10
The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. Copyright © 2011 John Wiley & Sons, Ltd.
VizieR Online Data Catalog: HARPS timeseries data for HD41248 (Jenkins+, 2014)
NASA Astrophysics Data System (ADS)
Jenkins, J. S.; Tuomi, M.
2017-05-01
We modeled the HARPS radial velocities of HD 42148 by adopting the analysis techniques and the statistical model applied in Tuomi et al. (2014, arXiv:1405.2016). This model contains Keplerian signals, a linear trend, a moving average component with exponential smoothing, and linear correlations with activity indices, namely, BIS, FWHM, and chromospheric activity S index. We applied our statistical model outlined above to the full data set of radial velocities for HD 41248, combining the previously published data in Jenkins et al. (2013ApJ...771...41J) with the newly published data in Santos et al. (2014, J/A+A/566/A35), giving rise to a total time series of 223 HARPS (Mayor et al. 2003Msngr.114...20M) velocities. (1 data file).
Bayesian Travel Time Inversion adopting Gaussian Process Regression
NASA Astrophysics Data System (ADS)
Mauerberger, S.; Holschneider, M.
2017-12-01
A major application in seismology is the determination of seismic velocity models. Travel time measurements are putting an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view. Therefore, the concept of Gaussian process regression is adopted to estimate a velocity model. The non-linear travel time integral is approximated by a 1st order Taylor expansion. A heuristic covariance describes correlations amongst observations and a priori model. That approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational costs. No multi dimensional numeric integration nor excessive sampling is necessary. Instead of stacking the data, we suggest to progressively build the posterior distribution. Incorporating only a single evidence at a time accounts for the deficit of linearization. As a result, the most probable model is given by the posterior mean whereas uncertainties are described by the posterior covariance.As a proof of concept, a synthetic purely 1d model is addressed. Therefore a single source accompanied by multiple receivers is considered on top of a model comprising a discontinuity. We consider travel times of both phases - direct and reflected wave - corrupted by noise. Left and right of the interface are assumed independent where the squared exponential kernel serves as covariance.
Zhang, Hua; Kurgan, Lukasz
2014-12-01
Knowledge of protein flexibility is vital for deciphering the corresponding functional mechanisms. This knowledge would help, for instance, in improving computational drug design and refinement in homology-based modeling. We propose a new predictor of the residue flexibility, which is expressed by B-factors, from protein chains that use local (in the chain) predicted (or native) relative solvent accessibility (RSA) and custom-derived amino acid (AA) alphabets. Our predictor is implemented as a two-stage linear regression model that uses RSA-based space in a local sequence window in the first stage and a reduced AA pair-based space in the second stage as the inputs. This method is easy to comprehend explicit linear form in both stages. Particle swarm optimization was used to find an optimal reduced AA alphabet to simplify the input space and improve the prediction performance. The average correlation coefficients between the native and predicted B-factors measured on a large benchmark dataset are improved from 0.65 to 0.67 when using the native RSA values and from 0.55 to 0.57 when using the predicted RSA values. Blind tests that were performed on two independent datasets show consistent improvements in the average correlation coefficients by a modest value of 0.02 for both native and predicted RSA-based predictions.
Liang, Gaozhen; Dong, Chunwang; Hu, Bin; Zhu, Hongkai; Yuan, Haibo; Jiang, Yongwen; Hao, Guoshuang
2018-05-18
Withering is the first step in the processing of congou black tea. With respect to the deficiency of traditional water content detection methods, a machine vision based NDT (Non Destructive Testing) method was established to detect the moisture content of withered leaves. First, according to the time sequences using computer visual system collected visible light images of tea leaf surfaces, and color and texture characteristics are extracted through the spatial changes of colors. Then quantitative prediction models for moisture content detection of withered tea leaves was established through linear PLS (Partial Least Squares) and non-linear SVM (Support Vector Machine). The results showed correlation coefficients higher than 0.8 between the water contents and green component mean value (G), lightness component mean value (L * ) and uniformity (U), which means that the extracted characteristics have great potential to predict the water contents. The performance parameters as correlation coefficient of prediction set (Rp), root-mean-square error of prediction (RMSEP), and relative standard deviation (RPD) of the SVM prediction model are 0.9314, 0.0411 and 1.8004, respectively. The non-linear modeling method can better describe the quantitative analytical relations between the image and water content. With superior generalization and robustness, the method would provide a new train of thought and theoretical basis for the online water content monitoring technology of automated production of black tea.
Correlations among Brain Gray Matter Volumes, Age, Gender, and Hemisphere in Healthy Individuals
Taki, Yasuyuki; Thyreau, Benjamin; Kinomura, Shigeo; Sato, Kazunori; Goto, Ryoi; Kawashima, Ryuta; Fukuda, Hiroshi
2011-01-01
To determine the relationship between age and gray matter structure and how interactions between gender and hemisphere impact this relationship, we examined correlations between global or regional gray matter volume and age, including interactions of gender and hemisphere, using a general linear model with voxel-based and region-of-interest analyses. Brain magnetic resonance images were collected from 1460 healthy individuals aged 20–69 years; the images were linearly normalized and segmented and restored to native space for analysis of global gray matter volume. Linearly normalized images were then non-linearly normalized and smoothed for analysis of regional gray matter volume. Analysis of global gray matter volume revealed a significant negative correlation between gray matter ratio (gray matter volume divided by intracranial volume) and age in both genders, and a significant interaction effect of age × gender on the gray matter ratio. In analyzing regional gray matter volume, the gray matter volume of all regions showed significant main effects of age, and most regions, with the exception of several including the inferior parietal lobule, showed a significant age × gender interaction. Additionally, the inferior temporal gyrus showed a significant age × gender × hemisphere interaction. No regional volumes showed significant age × hemisphere interactions. Our study may contribute to clarifying the mechanism(s) of normal brain aging in each brain region. PMID:21818377
Alternative approaches to predicting methane emissions from dairy cows.
Mills, J A N; Kebreab, E; Yates, C M; Crompton, L A; Cammell, S B; Dhanoa, M S; Agnew, R E; France, J
2003-12-01
Previous attempts to apply statistical models, which correlate nutrient intake with methane production, have been of limited value where predictions are obtained for nutrient intakes and diet types outside those used in model construction. Dynamic mechanistic models have proved more suitable for extrapolation, but they remain computationally expensive and are not applied easily in practical situations. The first objective of this research focused on employing conventional techniques to generate statistical models of methane production appropriate to United Kingdom dairy systems. The second objective was to evaluate these models and a model published previously using both United Kingdom and North American data sets. Thirdly, nonlinear models were considered as alternatives to the conventional linear regressions. The United Kingdom calorimetry data used to construct the linear models also were used to develop the three nonlinear alternatives that were all of modified Mitscherlich (monomolecular) form. Of the linear models tested, an equation from the literature proved most reliable across the full range of evaluation data (root mean square prediction error = 21.3%). However, the Mitscherlich models demonstrated the greatest degree of adaptability across diet types and intake level. The most successful model for simulating the independent data was a modified Mitscherlich equation with the steepness parameter set to represent dietary starch-to-ADF ratio (root mean square prediction error = 20.6%). However, when such data were unavailable, simpler Mitscherlich forms relating dry matter or metabolizable energy intake to methane production remained better alternatives relative to their linear counterparts.
Desai, Rishi J; Solomon, Daniel H; Weinblatt, Michael E; Shadick, Nancy; Kim, Seoyoung C
2015-04-13
We conducted an external validation study to examine the correlation of a previously published claims-based index for rheumatoid arthritis severity (CIRAS) with disease activity score in 28 joints calculated by using C-reactive protein (DAS28-CRP) and the multi-dimensional health assessment questionnaire (MD-HAQ) physical function score. Patients enrolled in the Brigham and Women's Hospital Rheumatoid Arthritis Sequential Study (BRASS) and Medicare were identified and their data from these two sources were linked. For each patient, DAS28-CRP measurement and MD-HAQ physical function scores were extracted from BRASS, and CIRAS was calculated from Medicare claims for the period of 365 days prior to the DAS28-CRP measurement. Pearson correlation coefficient between CIRAS and DAS28-CRP as well as MD-HAQ physical function scores were calculated. Furthermore, we considered several additional pharmacy and medical claims-derived variables as predictors for DAS28-CRP in a multivariable linear regression model in order to assess improvement in the performance of the original CIRAS algorithm. In total, 315 patients with enrollment in both BRASS and Medicare were included in this study. The majority (81%) of the cohort was female, and the mean age was 70 years. The correlation between CIRAS and DAS28-CRP was low (Pearson correlation coefficient = 0.07, P = 0.24). The correlation between the calculated CIRAS and MD-HAQ physical function scores was also found to be low (Pearson correlation coefficient = 0.08, P = 0.17). The linear regression model containing additional claims-derived variables yielded model R(2) of 0.23, suggesting limited ability of this model to explain variation in DAS28-CRP. In a cohort of Medicare-enrolled patients with established RA, CIRAS showed low correlation with DAS28-CRP as well as MD-HAQ physical function scores. Claims-based algorithms for disease activity should be rigorously tested in distinct populations in order to establish their generalizability before widespread adoption.
Heintze, Siegward D; Ilie, Nicoleta; Hickel, Reinhard; Reis, Alessandra; Loguercio, Alessandro; Rousson, Valentin
2017-03-01
To evaluate a range of mechanical parameters of composite resins and compare the data to the frequency of fractures and wear in clinical studies. Based on a search of PubMed and SCOPUS, clinical studies on posterior composite restorations were investigated with regard to bias by two independent reviewers using Cochrane Collaboration's tool for assessing risk of bias in randomized trials. The target variables were chipping and/or fracture, loss of anatomical form (wear) and a combination of both (summary clinical index). These outcomes were modelled by time and material in a linear mixed effect model including random study and experiment effects. The laboratory data from one test institute were used: flexural strength, flexural modulus, compressive strength, and fracture toughness (all after 24-h storage in distilled water). For some materials flexural strength data after aging in water/saliva/ethanol were available. Besides calculating correlations between clinical and laboratory outcomes, we explored whether a model including a laboratory predictor dichotomized at a cut-off value better predicted a clinical outcome than a linear model. A total of 74 clinical experiments from 45 studies were included involving 31 materials for which laboratory data were also available. A weak positive correlation between fracture toughness and clinical fractures was found (Spearman rho=0.34, p=0.11) in addition to a moderate and statistically significant correlation between flexural strength and clinical wear (Spearman rho=0.46, p=0.01). When excluding those studies with "high" risk of bias (n=18), the correlations were generally weaker with no statistically significant correlation. For aging in ethanol, a very strong correlation was found between flexural strength decrease and clinical index, but this finding was based on only 7 materials (Spearman rho=0.96, p=0.0001). Prediction was not consistently improved with cutoff values. Correlations between clinical and laboratory outcomes were moderately positive with few significant results, fracture toughness being correlated with clinical fractures and flexural strength with clinical wear. Whether artificial aging enhances the prognostic value needs further investigations. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Heddam, Salim
2014-01-01
In this study, we present application of an artificial intelligence (AI) technique model called dynamic evolving neural-fuzzy inference system (DENFIS) based on an evolving clustering method (ECM), for modelling dissolved oxygen concentration in a river. To demonstrate the forecasting capability of DENFIS, a one year period from 1 January 2009 to 30 December 2009, of hourly experimental water quality data collected by the United States Geological Survey (USGS Station No: 420853121505500) station at Klamath River at Miller Island Boat Ramp, OR, USA, were used for model development. Two DENFIS-based models are presented and compared. The two DENFIS systems are: (1) offline-based system named DENFIS-OF, and (2) online-based system, named DENFIS-ON. The input variables used for the two models are water pH, temperature, specific conductance, and sensor depth. The performances of the models are evaluated using root mean square errors (RMSE), mean absolute error (MAE), Willmott index of agreement (d) and correlation coefficient (CC) statistics. The lowest root mean square error and highest correlation coefficient values were obtained with the DENFIS-ON method. The results obtained with DENFIS models are compared with linear (multiple linear regression, MLR) and nonlinear (multi-layer perceptron neural networks, MLPNN) methods. This study demonstrates that DENFIS-ON investigated herein outperforms all the proposed techniques for DO modelling.
A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong
2001-01-01
This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on the canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is made also using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. This new CCA model includes the following features: (1) the use of area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States precipitation field. The predictor is the sea surface temperature.
Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng
2018-04-20
Functional connectivity is among the most important tools to study brain. The correlation coefficient, between time series of different brain areas, is the most popular method to quantify functional connectivity. Correlation coefficient in practical use assumes the data to be temporally independent. However, the time series data of brain can manifest significant temporal auto-correlation. A widely applicable method is proposed for correcting temporal auto-correlation. We considered two types of time series models: (1) auto-regressive-moving-average model, (2) nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of correlation coefficient. These two types of models are most commonly used in neuroscience studies. We show the respective asymptotic distributions share a unified expression. We have verified the validity of our method, and shown our method exhibited sufficient statistical power for detecting true correlation on numerical experiments. Employing our method on real dataset yields more robust functional network and higher classification accuracy than conventional methods. Our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlation in numerical experiments, where existing methods measuring association (linear and nonlinear) fail. In this work, we proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity. Empirical results favor the use of our method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Dang, Tong; Zhang, Binzheng; Wiltberge, Michael; Wang, Wenbin; Varney, Roger; Dou, Xiankang; Wan, Weixing; Lei, Jiuhou
2018-01-01
In this study, the correlations between the fluxes of precipitating soft electrons in the cusp region and solar wind coupling functions are investigated utilizing the Lyon-Fedder-Mobarry global magnetosphere model simulations. We conduct two simulation runs during periods from 20 March 2008 to 16 April 2008 and from 15 to 24 December 2014, which are referred as "Equinox Case" and "Solstice Case," respectively. The simulation results of Equinox Case show that the plasma number density in the high-latitude cusp region scales well with the solar wind number density (ncusp/nsw=0.78), which agrees well with the statistical results from the Polar spacecraft measurements. For the Solstice Case, the plasma number density of high-latitude cusp in both hemispheres increases approximately linearly with upstream solar wind number density with prominent hemispheric asymmetry. Due to the dipole tilt effect, the average number density ratio ncusp/nsw in the Southern (summer) Hemisphere is nearly 3 times that in the Northern (winter) Hemisphere. In addition to the solar wind number density, 20 solar wind coupling functions are tested for the linear correlation with the fluxes of precipitating cusp soft electrons. The statistical results indicate that the solar wind dynamic pressure p exhibits the highest linear correlation with the cusp electron fluxes for both equinox and solstice conditions, with correlation coefficients greater than 0.75. The linear regression relations for equinox and solstice cases may provide an empirical calculation for the fluxes of cusp soft electron precipitation based on the upstream solar wind driving conditions.
Ahmadpanah, J; Ghavi Hossein-Zadeh, N; Shadparvar, A A; Pakdel, A
2017-02-01
1. The objectives of the current study were to investigate the effect of incidence rate (5%, 10%, 20%, 30% and 50%) of ascites syndrome on the expression of genetic characteristics for body weight at 5 weeks of age (BW5) and AS and to compare different methods of genetic parameter estimation for these traits. 2. Based on stochastic simulation, a population with discrete generations was created in which random mating was used for 10 generations. Two methods of restricted maximum likelihood and Bayesian approach via Gibbs sampling were used for the estimation of genetic parameters. A bivariate model including maternal effects was used. The root mean square error for direct heritabilities was also calculated. 3. The results showed that when incidence rates of ascites increased from 5% to 30%, the heritability of AS increased from 0.013 and 0.005 to 0.110 and 0.162 for linear and threshold models, respectively. 4. Maternal effects were significant for both BW5 and AS. Genetic correlations were decreased by increasing incidence rates of ascites in the population from 0.678 and 0.587 at 5% level of ascites to 0.393 and -0.260 at 50% occurrence for linear and threshold models, respectively. 5. The RMSE of direct heritability from true values for BW5 was greater based on a linear-threshold model compared with the linear model of analysis (0.0092 vs. 0.0015). The RMSE of direct heritability from true values for AS was greater based on a linear-linear model (1.21 vs. 1.14). 6. In order to rank birds for ascites incidence, it is recommended to use a threshold model because it resulted in higher heritability estimates compared with the linear model and that BW5 could be one of the main components of selection goals.
A unified view on weakly correlated recurrent networks
Grytskyy, Dmytro; Tetzlaff, Tom; Diesmann, Markus; Helias, Moritz
2013-01-01
The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances in the spiking activity raises the question how these models relate to each other. In particular it is hard to distinguish between generic properties of covariances and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire (LIF) model, and the Hawkes process. We show that linear approximation maps each of these models to either of two classes of linear rate models (LRM), including the Ornstein–Uhlenbeck process (OUP) as a special case. The distinction between both classes is the location of additive noise in the rate dynamics, which is located on the output side for spiking models and on the input side for the binary model. Both classes allow closed form solutions for the covariance. For output noise it separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the situation with synaptic conduction delays and simplify derivations for established results. Our approach is applicable to general network structures and suitable for the calculation of population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking into account fluctuations in the linearization procedure increases the accuracy of the effective theory and we explain the class dependent differences between covariances in the time and the frequency domain. Finally we show that the oscillatory instability emerging in networks of LIF models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra. PMID:24151463
Linear analysis of a force reflective teleoperator
NASA Technical Reports Server (NTRS)
Biggers, Klaus B.; Jacobsen, Stephen C.; Davis, Clark C.
1989-01-01
Complex force reflective teleoperation systems are often very difficult to analyze due to the large number of components and control loops involved. One mode of a force reflective teleoperator is described. An analysis of the performance of the system based on a linear analysis of the general full order model is presented. Reduced order models are derived and correlated with the full order models. Basic effects of force feedback and position feedback are examined and the effects of time delays between the master and slave are studied. The results show that with symmetrical position-position control of teleoperators, a basic trade off must be made between the intersystem stiffness of the teleoperator, and the impedance felt by the operator in free space.
Xuan Chi; Barry Goodwin
2012-01-01
Spatial and temporal relationships among agricultural prices have been an important topic of applied research for many years. Such research is used to investigate the performance of markets and to examine linkages up and down the marketing chain. This research has empirically evaluated price linkages by using correlation and regression models and, later, linear and...
Protoplanetary disc `isochrones' and the evolution of discs in the M˙-Md plane
NASA Astrophysics Data System (ADS)
Lodato, Giuseppe; Scardoni, Chiara E.; Manara, Carlo F.; Testi, Leonardo
2017-12-01
In this paper, we compare simple viscous diffusion models for the disc evolution with the results of recent surveys of the properties of young protoplanetary discs. We introduce the useful concept of 'disc isochrones' in the accretion rate-disc mass plane and explore a set of Monte Carlo realization of disc initial conditions. We find that such simple viscous models can provide a remarkable agreement with the available data in the Lupus star forming region, with the key requirement that the average viscous evolutionary time-scale of the discs is comparable to the cluster age. Our models produce naturally a correlation between mass accretion rate and disc mass that is shallower than linear, contrary to previous results and in agreement with observations. We also predict that a linear correlation, with a tighter scatter, should be found for more evolved disc populations. Finally, we find that such viscous models can reproduce the observations in the Lupus region only in the assumption that the efficiency of angular momentum transport is a growing function of radius, thus putting interesting constraints on the nature of the microscopic processes that lead to disc accretion.
Fast large scale structure perturbation theory using one-dimensional fast Fourier transforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmittfull, Marcel; Vlah, Zvonimir; McDonald, Patrick
The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., Fourier transform of the 2-point correlation function) with one-dimensional fast Fourier transforms. This is exact and a fewmore » orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order where existing codes become painfully slow. Our approach follows by writing higher order corrections to the 2-point correlation function as, e.g., the correlation between two second-order fields or the correlation between a linear and a third-order field. These are then decomposed into products of correlations of linear fields and derivatives of linear fields. In conclusion, the method can also be viewed as evaluating three-dimensional Fourier space convolutions using products in configuration space, which may also be useful in other contexts where similar integrals appear.« less
Fast large scale structure perturbation theory using one-dimensional fast Fourier transforms
Schmittfull, Marcel; Vlah, Zvonimir; McDonald, Patrick
2016-05-01
The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., Fourier transform of the 2-point correlation function) with one-dimensional fast Fourier transforms. This is exact and a fewmore » orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order where existing codes become painfully slow. Our approach follows by writing higher order corrections to the 2-point correlation function as, e.g., the correlation between two second-order fields or the correlation between a linear and a third-order field. These are then decomposed into products of correlations of linear fields and derivatives of linear fields. In conclusion, the method can also be viewed as evaluating three-dimensional Fourier space convolutions using products in configuration space, which may also be useful in other contexts where similar integrals appear.« less
No Evidence for Activity Correlations in the Radial Velocities of Kapteyn’s Star
NASA Astrophysics Data System (ADS)
Anglada-Escudé, G.; Tuomi, M.; Arriagada, P.; Zechmeister, M.; Jenkins, J. S.; Ofir, A.; Dreizler, S.; Gerlach, E.; Marvin, C. J.; Reiners, A.; Jeffers, S. V.; Butler, R. Paul; Vogt, S. S.; Amado, P. J.; Rodríguez-López, C.; Berdiñas, Z. M.; Morin, J.; Crane, J. D.; Shectman, S. A.; Díaz, M. R.; Sarmiento, L. F.; Jones, H. R. A.
2016-10-01
Stellar activity may induce Doppler variability at the level of a few m s-1 which can then be confused by the Doppler signal of an exoplanet orbiting the star. To first order, linear correlations between radial velocity measurements and activity indices have been proposed to account for any such correlation. The likely presence of two super-Earths orbiting Kapteyn’s star was reported in Anglada-Escudé et al., but this claim was recently challenged by Robertson et al., who argued for evidence of a rotation period (143 days) at three times the orbital period of one of the proposed planets (Kapteyn’s b, P = 48.6 days) and the existence of strong linear correlations between its Doppler signal and activity data. By re-analyzing the data using global statistics and model comparison, we show that such a claim is incorrect given that (1) the choice of a rotation period at 143 days is unjustified, and (2) the presence of linear correlations is not supported by the data. We conclude that the radial velocity signals of Kapteyn’s star remain more simply explained by the presence of two super-Earth candidates orbiting it. We note that analysis of time series of activity indices must be executed with the same care as Doppler time series. We also advocate for the use of global optimization procedures and objective arguments, instead of claims based on residual analyses which are prone to biases and incorrect interpretations.
Ma, Jing; Yu, Jiong; Hao, Guangshu; Wang, Dan; Sun, Yanni; Lu, Jianxin; Cao, Hongcui; Lin, Feiyan
2017-02-20
The prevalence of high hyperlipemia is increasing around the world. Our aims are to analyze the relationship of triglyceride (TG) and cholesterol (TC) with indexes of liver function and kidney function, and to develop a prediction model of TG, TC in overweight people. A total of 302 adult healthy subjects and 273 overweight subjects were enrolled in this study. The levels of fasting indexes of TG (fs-TG), TC (fs-TC), blood glucose, liver function, and kidney function were measured and analyzed by correlation analysis and multiple linear regression (MRL). The back propagation artificial neural network (BP-ANN) was applied to develop prediction models of fs-TG and fs-TC. The results showed there was significant difference in biochemical indexes between healthy people and overweight people. The correlation analysis showed fs-TG was related to weight, height, blood glucose, and indexes of liver and kidney function; while fs-TC was correlated with age, indexes of liver function (P < 0.01). The MRL analysis indicated regression equations of fs-TG and fs-TC both had statistic significant (P < 0.01) when included independent indexes. The BP-ANN model of fs-TG reached training goal at 59 epoch, while fs-TC model achieved high prediction accuracy after training 1000 epoch. In conclusions, there was high relationship of fs-TG and fs-TC with weight, height, age, blood glucose, indexes of liver function and kidney function. Based on related variables, the indexes of fs-TG and fs-TC can be predicted by BP-ANN models in overweight people.
Yu, Qingzhao; Li, Bin; Scribner, Richard Allen
2009-06-30
Previous studies have suggested a link between alcohol outlets and assaults. In this paper, we explore the effects of alcohol availability on assaults at the census tract level over time. In addition, we use a natural experiment to check whether a sudden loss of alcohol outlets is associated with deeper decreasing in assault violence. Several features of the data raise statistical challenges: (1) the association between covariates (for example, the alcohol outlet density of each census tract) and the assault rates may be complex and therefore cannot be described using a linear model without covariates transformation, (2) the covariates may be highly correlated with each other, (3) there are a number of observations that have missing inputs, and (4) there is spatial association in assault rates at the census tract level. We propose a hierarchical additive model, where the nonlinear correlations and the complex interaction effects are modeled using the multiple additive regression trees and the residual spatial association in the assault rates that cannot be explained in the model are smoothed using a conditional autoregressive (CAR) method. We develop a two-stage algorithm that connects the nonparametric trees with CAR to look for important covariates associated with the assault rates, while taking into account the spatial association of assault rates in adjacent census tracts. The proposed method is applied to the Los Angeles assault data (1990-1999). To assess the efficiency of the method, the results are compared with those obtained from a hierarchical linear model. Copyright (c) 2009 John Wiley & Sons, Ltd.
Normal reference values for bladder wall thickness on CT in a healthy population.
Fananapazir, Ghaneh; Kitich, Aleksandar; Lamba, Ramit; Stewart, Susan L; Corwin, Michael T
2018-02-01
To determine normal bladder wall thickness on CT in patients without bladder disease. Four hundred and nineteen patients presenting for trauma with normal CTs of the abdomen and pelvis were included in our retrospective study. Bladder wall thickness was assessed, and bladder volume was measured using both the ellipsoid formula and an automated technique. Patient age, gender, and body mass index were recorded. Linear regression models were created to account for bladder volume, age, gender, and body mass index, and the multiple correlation coefficient with bladder wall thickness was computed. Bladder volume and bladder wall thickness were log-transformed to achieve approximate normality and homogeneity of variance. Variables that did not contribute substantively to the model were excluded, and a parsimonious model was created and the multiple correlation coefficient was calculated. Expected bladder wall thickness was estimated for different bladder volumes, and 1.96 standard deviation above expected provided the upper limit of normal on the log scale. Age, gender, and bladder volume were associated with bladder wall thickness (p = 0.049, 0.024, and < 0.001, respectively). The linear regression model had an R 2 of 0.52. Age and gender were negligible in contribution to the model, and a parsimonious model using only volume was created for both the ellipsoid and automated volumes (R 2 = 0.52 and 0.51, respectively). Bladder wall thickness correlates with bladder wall volume. The study provides reference bladder wall thicknesses on CT utilizing both the ellipsoid formula and automated bladder volumes.
MEASUREMENT OF WIND SPEED FROM COOLING LAKE THERMAL IMAGERY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrett, A; Robert Kurzeja, R; Eliel Villa-Aleman, E
2009-01-20
The Savannah River National Laboratory (SRNL) collected thermal imagery and ground truth data at two commercial power plant cooling lakes to investigate the applicability of laboratory empirical correlations between surface heat flux and wind speed, and statistics derived from thermal imagery. SRNL demonstrated in a previous paper [1] that a linear relationship exists between the standard deviation of image temperature and surface heat flux. In this paper, SRNL will show that the skewness of the temperature distribution derived from cooling lake thermal images correlates with instantaneous wind speed measured at the same location. SRNL collected thermal imagery, surface meteorology andmore » water temperatures from helicopters and boats at the Comanche Peak and H. B. Robinson nuclear power plant cooling lakes. SRNL found that decreasing skewness correlated with increasing wind speed, as was the case for the laboratory experiments. Simple linear and orthogonal regression models both explained about 50% of the variance in the skewness - wind speed plots. A nonlinear (logistic) regression model produced a better fit to the data, apparently because the thermal convection and resulting skewness are related to wind speed in a highly nonlinear way in nearly calm and in windy conditions.« less
Ruan, Xiaofang; Zhang, Ruisheng; Yao, Xiaojun; Liu, Mancang; Fan, Botao
2007-03-01
Alkylphenols are a group of permanent pollutants in the environment and could adversely disturb the human endocrine system. It is therefore important to effectively separate and measure the alkylphenols. To guide the chromatographic analysis of these compounds in practice, the development of quantitative relationship between the molecular structure and the retention time of alkylphenols becomes necessary. In this study, topological, constitutional, geometrical, electrostatic and quantum-chemical descriptors of 44 alkylphenols were calculated using a software, CODESSA, and these descriptors were pre-selected using the heuristic method. As a result, three-descriptor linear model (LM) was developed to describe the relationship between the molecular structure and the retention time of alkylphenols. Meanwhile, the non-linear regression model was also developed based on support vector machine (SVM) using the same three descriptors. The correlation coefficient (R(2)) for the LM and SVM was 0.98 and 0. 92, and the corresponding root-mean-square error was 0. 99 and 2. 77, respectively. By comparing the stability and prediction ability of the two models, it was found that the linear model was a better method for describing the quantitative relationship between the retention time of alkylphenols and the molecular structure. The results obtained suggested that the linear model could be applied for the chromatographic analysis of alkylphenols with known molecular structural parameters.
Modeling the pressure-strain correlation of turbulence: An invariant dynamical systems approach
NASA Technical Reports Server (NTRS)
Speziale, Charles G.; Sarkar, Sutanu; Gatski, Thomas B.
1990-01-01
The modeling of the pressure-strain correlation of turbulence is examined from a basic theoretical standpoint with a view toward developing improved second-order closure models. Invariance considerations along with elementary dynamical systems theory are used in the analysis of the standard hierarchy of closure models. In these commonly used models, the pressure-strain correlation is assumed to be a linear function of the mean velocity gradients with coefficients that depend algebraically on the anisotropy tensor. It is proven that for plane homogeneous turbulent flows the equilibrium structure of this hierarchy of models is encapsulated by a relatively simple model which is only quadratically nonlinear in the anisotropy tensor. This new quadratic model - the SSG model - is shown to outperform the Launder, Reece, and Rodi model (as well as more recent models that have a considerably more complex nonlinear structure) in a variety of homogeneous turbulent flows. Some deficiencies still remain for the description of rotating turbulent shear flows that are intrinsic to this general hierarchy of models and, hence, cannot be overcome by the mere introduction of more complex nonlinearities. It is thus argued that the recent trend of adding substantially more complex nonlinear terms containing the anisotropy tensor may be of questionable value in the modeling of the pressure-strain correlation. Possible alternative approaches are discussed briefly.
Modelling the pressure-strain correlation of turbulence - An invariant dynamical systems approach
NASA Technical Reports Server (NTRS)
Speziale, Charles G.; Sarkar, Sutanu; Gatski, Thomas B.
1991-01-01
The modeling of the pressure-strain correlation of turbulence is examined from a basic theoretical standpoint with a view toward developing improved second-order closure models. Invariance considerations along with elementary dynamical systems theory are used in the analysis of the standard hierarchy of closure models. In these commonly used models, the pressure-strain correlation is assumed to be a linear function of the mean velocity gradients with coefficients that depend algebraically on the anisotropy tensor. It is proven that for plane homogeneous turbulent flows the equilibrium structure of this hierarchy of models is encapsulated by a relatively simple model which is only quadratically nonlinear in the anisotropy tensor. This new quadratic model - the SSG model - is shown to outperform the Launder, Reece, and Rodi model (as well as more recent models that have a considerably more complex nonlinear structure) in a variety of homogeneous turbulent flows. Some deficiencies still remain for the description of rotating turbulent shear flows that are intrinsic to this general hierarchy of models and, hence, cannot be overcome by the mere introduction of more complex nonlinearities. It is thus argued that the recent trend of adding substantially more complex nonlinear terms containing the anisotropy tensor may be of questionable value in the modeling of the pressure-strain correlation. Possible alternative approaches are discussed briefly.
Watanabe, Hiroyuki; Miyazaki, Hiroyasu
2006-01-01
Over- and/or under-correction of QT intervals for changes in heart rate may lead to misleading conclusions and/or masking the potential of a drug to prolong the QT interval. This study examines a nonparametric regression model (Loess Smoother) to adjust the QT interval for differences in heart rate, with an improved fitness over a wide range of heart rates. 240 sets of (QT, RR) observations collected from each of 8 conscious and non-treated beagle dogs were used as the materials for investigation. The fitness of the nonparametric regression model to the QT-RR relationship was compared with four models (individual linear regression, common linear regression, and Bazett's and Fridericia's correlation models) with reference to Akaike's Information Criterion (AIC). Residuals were visually assessed. The bias-corrected AIC of the nonparametric regression model was the best of the models examined in this study. Although the parametric models did not fit, the nonparametric regression model improved the fitting at both fast and slow heart rates. The nonparametric regression model is the more flexible method compared with the parametric method. The mathematical fit for linear regression models was unsatisfactory at both fast and slow heart rates, while the nonparametric regression model showed significant improvement at all heart rates in beagle dogs.
A perturbative approach to the redshift space correlation function: beyond the Standard Model
NASA Astrophysics Data System (ADS)
Bose, Benjamin; Koyama, Kazuya
2017-08-01
We extend our previous redshift space power spectrum code to the redshift space correlation function. Here we focus on the Gaussian Streaming Model (GSM). Again, the code accommodates a wide range of modified gravity and dark energy models. For the non-linear real space correlation function used in the GSM we use the Fourier transform of the RegPT 1-loop matter power spectrum. We compare predictions of the GSM for a Vainshtein screened and Chameleon screened model as well as GR. These predictions are compared to the Fourier transform of the Taruya, Nishimichi and Saito (TNS) redshift space power spectrum model which is fit to N-body data. We find very good agreement between the Fourier transform of the TNS model and the GSM predictions, with <= 6% deviations in the first two correlation function multipoles for all models for redshift space separations in 50Mpch <= s <= 180Mpc/h. Excellent agreement is found in the differences between the modified gravity and GR multipole predictions for both approaches to the redshift space correlation function, highlighting their matched ability in picking up deviations from GR. We elucidate the timeliness of such non-standard templates at the dawn of stage-IV surveys and discuss necessary preparations and extensions needed for upcoming high quality data.
Linearized spectrum correlation analysis for line emission measurements
NASA Astrophysics Data System (ADS)
Nishizawa, T.; Nornberg, M. D.; Den Hartog, D. J.; Sarff, J. S.
2017-08-01
A new spectral analysis method, Linearized Spectrum Correlation Analysis (LSCA), for charge exchange and passive ion Doppler spectroscopy is introduced to provide a means of measuring fast spectral line shape changes associated with ion-scale micro-instabilities. This analysis method is designed to resolve the fluctuations in the emission line shape from a stationary ion-scale wave. The method linearizes the fluctuations around a time-averaged line shape (e.g., Gaussian) and subdivides the spectral output channels into two sets to reduce contributions from uncorrelated fluctuations without averaging over the fast time dynamics. In principle, small fluctuations in the parameters used for a line shape model can be measured by evaluating the cross spectrum between different channel groupings to isolate a particular fluctuating quantity. High-frequency ion velocity measurements (100-200 kHz) were made by using this method. We also conducted simulations to compare LSCA with a moment analysis technique under a low photon count condition. Both experimental and synthetic measurements demonstrate the effectiveness of LSCA.
The Gaussian streaming model and convolution Lagrangian effective field theory
Vlah, Zvonimir; Castorina, Emanuele; White, Martin
2016-12-05
We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM tomore » a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.« less
Pre-slaughter rectal temperature as an indicator of pork meat quality.
Vermeulen, L; Van de Perre, V; Permentier, L; De Bie, S; Geers, R
2015-07-01
This study investigates whether rectal temperature of pigs, prior to slaughter, can give an indication of the risk of developing pork with PSE characteristics. A total of 1203 pigs were examined, measuring the rectal temperature just before stunning, of which 794 rectal temperatures were measured immediately after stunning. pH30LT (M. Longissimus thoracis) and temperature of the ham (Temp30Ham) were collected from about 530 carcasses, 30 min after sticking. The results present a significant positive linear correlation between rectal temperature just before and after slaughter, and Temp30Ham. Moreover, pH30LT is negatively correlated with rectal temperature and Temp30Ham. Finally, a linear mixed model for pH30LT was established with the rectal temperature of the pigs just before stunning and the lairage time. This model defines that measuring rectal temperature of pigs just before slaughter allows discovery of pork with PSE traits, taking into account pre-slaughter conditions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Analysis and generation of groundwater concentration time series
NASA Astrophysics Data System (ADS)
Crăciun, Maria; Vamoş, Călin; Suciu, Nicolae
2018-01-01
Concentration time series are provided by simulated concentrations of a nonreactive solute transported in groundwater, integrated over the transverse direction of a two-dimensional computational domain and recorded at the plume center of mass. The analysis of a statistical ensemble of time series reveals subtle features that are not captured by the first two moments which characterize the approximate Gaussian distribution of the two-dimensional concentration fields. The concentration time series exhibit a complex preasymptotic behavior driven by a nonstationary trend and correlated fluctuations with time-variable amplitude. Time series with almost the same statistics are generated by successively adding to a time-dependent trend a sum of linear regression terms, accounting for correlations between fluctuations around the trend and their increments in time, and terms of an amplitude modulated autoregressive noise of order one with time-varying parameter. The algorithm generalizes mixing models used in probability density function approaches. The well-known interaction by exchange with the mean mixing model is a special case consisting of a linear regression with constant coefficients.
Recall of past use of mobile phone handsets.
Parslow, R C; Hepworth, S J; McKinney, P A
2003-01-01
Previous studies investigating health effects of mobile phones have based their estimation of exposure on self-reported levels of phone use. This UK validation study assesses the accuracy of reported voice calls made from mobile handsets. Data collected by postal questionnaire from 93 volunteers was compared to records obtained prospectively over 6 months from four network operators. Agreement was measured for outgoing calls using the kappa statistic, log-linear modelling, Spearman correlation coefficient and graphical methods. Agreement for number of calls gained moderate classification (kappa = 0.39) with better agreement for duration (kappa = 0.50). Log-linear modelling produced similar results. The Spearman correlation coefficient was 0.48 for number of calls and 0.60 for duration. Graphical agreement methods demonstrated patterns of over-reporting call numbers (by a factor of 1.7) and duration (by a factor of 2.8). These results suggest that self-reported mobile phone use may not fully represent patterns of actual use. This has implications for calculating exposures from questionnaire data.
Age constraints on the evolution of the Quetico belt, Superior Province, Ontario
NASA Technical Reports Server (NTRS)
Percival, J. A.; Sullivan, R. W.
1986-01-01
Much attention has been focused on the nature of Archean tectonic processes and the extent to which they were different from modern rigid-plate tectonics. The Archean Superior Province has linear metavolcanic and metasediment-dominated subprovinces of similar scale to cenozoic island arc-trench systems of the western Pacific, suggesting an origin by accreting arcs. Models of the evolution of metavolcanic belts in parts of the Superior Province suggest an arc setting but the tectonic environment and evolution of the intervening metasedimentary belts are poorly understood. In addition to explaining the setting giving rise to a linear sedimentary basin, models must account for subsequent shortening and high-temperature, low-pressure metamorphism. Correlation of rock units and events in adjacent metavolcanic and metasedimentary belts is a first step toward understanding large-scale crustal interactions. To this end, zircon geochronology has been applied to metavolcanic belts of the western Superior Province; new age data for the Quetico metasedimentary belt is reported, permitting correlation with the adjacent Wabigoon and Wawa metavolcanic subprovinces.
The Gaussian streaming model and convolution Lagrangian effective field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vlah, Zvonimir; Castorina, Emanuele; White, Martin, E-mail: zvlah@stanford.edu, E-mail: ecastorina@berkeley.edu, E-mail: mwhite@berkeley.edu
We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM tomore » a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.« less
Potluri, Chandrasekhar; Anugolu, Madhavi; Schoen, Marco P; Subbaram Naidu, D; Urfer, Alex; Chiu, Steve
2013-11-01
Estimating skeletal muscle (finger) forces using surface Electromyography (sEMG) signals poses many challenges. In general, the sEMG measurements are based on single sensor data. In this paper, two novel hybrid fusion techniques for estimating the skeletal muscle force from the sEMG array sensors are proposed. The sEMG signals are pre-processed using five different filters: Butterworth, Chebychev Type II, Exponential, Half-Gaussian and Wavelet transforms. Dynamic models are extracted from the acquired data using Nonlinear Wiener Hammerstein (NLWH) models and Spectral Analysis Frequency Dependent Resolution (SPAFDR) models based system identification techniques. A detailed comparison is provided for the proposed filters and models using 18 healthy subjects. Wavelet transforms give higher mean correlation of 72.6 ± 1.7 (mean ± SD) and 70.4 ± 1.5 (mean ± SD) for NLWH and SPAFDR models, respectively, when compared to the other filters used in this work. Experimental verification of the fusion based hybrid models with wavelet transform shows a 96% mean correlation and 3.9% mean relative error with a standard deviation of ± 1.3 and ± 0.9 respectively between the overall hybrid fusion algorithm estimated and the actual force for 18 test subjects' k-fold cross validation data. © 2013 Elsevier Ltd. All rights reserved.
On the linearity of tracer bias around voids
NASA Astrophysics Data System (ADS)
Pollina, Giorgia; Hamaus, Nico; Dolag, Klaus; Weller, Jochen; Baldi, Marco; Moscardini, Lauro
2017-07-01
The large-scale structure of the Universe can be observed only via luminous tracers of the dark matter. However, the clustering statistics of tracers are biased and depend on various properties, such as their host-halo mass and assembly history. On very large scales, this tracer bias results in a constant offset in the clustering amplitude, known as linear bias. Towards smaller non-linear scales, this is no longer the case and tracer bias becomes a complicated function of scale and time. We focus on tracer bias centred on cosmic voids, I.e. depressions of the density field that spatially dominate the Universe. We consider three types of tracers: galaxies, galaxy clusters and active galactic nuclei, extracted from the hydrodynamical simulation Magneticum Pathfinder. In contrast to common clustering statistics that focus on auto-correlations of tracers, we find that void-tracer cross-correlations are successfully described by a linear bias relation. The tracer-density profile of voids can thus be related to their matter-density profile by a single number. We show that it coincides with the linear tracer bias extracted from the large-scale auto-correlation function and expectations from theory, if sufficiently large voids are considered. For smaller voids we observe a shift towards higher values. This has important consequences on cosmological parameter inference, as the problem of unknown tracer bias is alleviated up to a constant number. The smallest scales in existing data sets become accessible to simpler models, providing numerous modes of the density field that have been disregarded so far, but may help to further reduce statistical errors in constraining cosmology.
Prediction of the sorption capacities and affinities of organic chemicals by XAD-7.
Yang, Kun; Qi, Long; Wei, Wei; Wu, Wenhao; Lin, Daohui
2016-01-01
Macro-porous resins are widely used as adsorbents for the treatment of organic contaminants in wastewater and for the pre-concentration of organic solutes from water. However, the sorption mechanisms for organic contaminants on such adsorbents have not been systematically investigated so far. Therefore, in this study, the sorption capacities and affinities of 24 organic chemicals by XAD-7 were investigated and the experimentally obtained sorption isotherms were fitted to the Dubinin-Ashtakhov model. Linear positive correlations were observed between the sorption capacities and the solubilities (SW) of the chemicals in water or octanol and between the sorption affinities and the solvatochromic parameters of the chemicals, indicating that the sorption of various organic compounds by XAD-7 occurred by non-linear partitioning into XAD-7, rather than by adsorption on XAD-7 surfaces. Both specific interactions (i.e., hydrogen-bonding interactions) as well as nonspecific interactions were considered to be responsible for the non-linear partitioning. The correlation equations obtained in this study allow the prediction of non-linear partitioning using well-known chemical parameters, namely SW, octanol-water partition coefficients (KOW), and the hydrogen-bonding donor parameter (αm). The effect of pH on the sorption of ionizable organic compounds (IOCs) could also be predicted by combining the correlation equations with additional equations developed from the estimation of IOC dissociation rates. The prediction equations developed in this study and the proposed non-linear partition mechanism shed new light on the selective removal and pre-concentration of organic solutes from water and on the regeneration of exhausted XAD-7 using solvent extraction.
Collinearity and Causal Diagrams: A Lesson on the Importance of Model Specification.
Schisterman, Enrique F; Perkins, Neil J; Mumford, Sunni L; Ahrens, Katherine A; Mitchell, Emily M
2017-01-01
Correlated data are ubiquitous in epidemiologic research, particularly in nutritional and environmental epidemiology where mixtures of factors are often studied. Our objectives are to demonstrate how highly correlated data arise in epidemiologic research and provide guidance, using a directed acyclic graph approach, on how to proceed analytically when faced with highly correlated data. We identified three fundamental structural scenarios in which high correlation between a given variable and the exposure can arise: intermediates, confounders, and colliders. For each of these scenarios, we evaluated the consequences of increasing correlation between the given variable and the exposure on the bias and variance for the total effect of the exposure on the outcome using unadjusted and adjusted models. We derived closed-form solutions for continuous outcomes using linear regression and empirically present our findings for binary outcomes using logistic regression. For models properly specified, total effect estimates remained unbiased even when there was almost perfect correlation between the exposure and a given intermediate, confounder, or collider. In general, as the correlation increased, the variance of the parameter estimate for the exposure in the adjusted models increased, while in the unadjusted models, the variance increased to a lesser extent or decreased. Our findings highlight the importance of considering the causal framework under study when specifying regression models. Strategies that do not take into consideration the causal structure may lead to biased effect estimation for the original question of interest, even under high correlation.
Pairing phase diagram of three holes in the generalized Hubbard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navarro, O.; Espinosa, J.E.
Investigations of high-{Tc} superconductors suggest that the electronic correlation may play a significant role in the formation of pairs. Although the main interest is on the physic of two-dimensional highly correlated electron systems, the one-dimensional models related to high temperature superconductivity are very popular due to the conjecture that properties of the 1D and 2D variants of certain models have common aspects. Within the models for correlated electron systems, that attempt to capture the essential physics of high-temperature superconductors and parent compounds, the Hubbard model is one of the simplest. Here, the pairing problem of a three electrons system hasmore » been studied by using a real-space method and the generalized Hubbard Hamiltonian. This method includes the correlated hopping interactions as an extension of the previously proposed mapping method, and is based on mapping the correlated many body problem onto an equivalent site- and bond-impurity tight-binding one in a higher dimensional space, where the problem was solved in a non-perturbative way. In a linear chain, the authors analyzed the pairing phase diagram of three correlated holes for different values of the Hamiltonian parameters. For some value of the hopping parameters they obtain an analytical solution for all kind of interactions.« less
Variable screening via quantile partial correlation
Ma, Shujie; Tsai, Chih-Ling
2016-01-01
In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we proposed using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683
Non-local Second Order Closure Scheme for Boundary Layer Turbulence and Convection
NASA Astrophysics Data System (ADS)
Meyer, Bettina; Schneider, Tapio
2017-04-01
There has been scientific consensus that the uncertainty in the cloud feedback remains the largest source of uncertainty in the prediction of climate parameters like climate sensitivity. To narrow down this uncertainty, not only a better physical understanding of cloud and boundary layer processes is required, but specifically the representation of boundary layer processes in models has to be improved. General climate models use separate parameterisation schemes to model the different boundary layer processes like small-scale turbulence, shallow and deep convection. Small scale turbulence is usually modelled by local diffusive parameterisation schemes, which truncate the hierarchy of moment equations at first order and use second-order equations only to estimate closure parameters. In contrast, the representation of convection requires higher order statistical moments to capture their more complex structure, such as narrow updrafts in a quasi-steady environment. Truncations of moment equations at second order may lead to more accurate parameterizations. At the same time, they offer an opportunity to take spatially correlated structures (e.g., plumes) into account, which are known to be important for convective dynamics. In this project, we study the potential and limits of local and non-local second order closure schemes. A truncation of the momentum equations at second order represents the same dynamics as a quasi-linear version of the equations of motion. We study the three-dimensional quasi-linear dynamics in dry and moist convection by implementing it in a LES model (PyCLES) and compare it to a fully non-linear LES. In the quasi-linear LES, interactions among turbulent eddies are suppressed but nonlinear eddy—mean flow interactions are retained, as they are in the second order closure. In physical terms, suppressing eddy—eddy interactions amounts to suppressing, e.g., interactions among convective plumes, while retaining interactions between plumes and the environment (e.g., entrainment and detrainment). In a second part, we employ the possibility to include non-local statistical correlations in a second-order closure scheme. Such non-local correlations allow to directly incorporate the spatially coherent structures that occur in the form of convective updrafts penetrating the boundary layer. This allows us to extend the work that has been done using assumed-PDF schemes for parameterising boundary layer turbulence and shallow convection in a non-local sense.
NASA Astrophysics Data System (ADS)
Malpetti, Daniele; Roscilde, Tommaso
2017-02-01
The mean-field approximation is at the heart of our understanding of complex systems, despite its fundamental limitation of completely neglecting correlations between the elementary constituents. In a recent work [Phys. Rev. Lett. 117, 130401 (2016), 10.1103/PhysRevLett.117.130401], we have shown that in quantum many-body systems at finite temperature, two-point correlations can be formally separated into a thermal part and a quantum part and that quantum correlations are generically found to decay exponentially at finite temperature, with a characteristic, temperature-dependent quantum coherence length. The existence of these two different forms of correlation in quantum many-body systems suggests the possibility of formulating an approximation, which affects quantum correlations only, without preventing the correct description of classical fluctuations at all length scales. Focusing on lattice boson and quantum Ising models, we make use of the path-integral formulation of quantum statistical mechanics to introduce such an approximation, which we dub quantum mean-field (QMF) approach, and which can be readily generalized to a cluster form (cluster QMF or cQMF). The cQMF approximation reduces to cluster mean-field theory at T =0 , while at any finite temperature it produces a family of systematically improved, semi-classical approximations to the quantum statistical mechanics of the lattice theory at hand. Contrary to standard MF approximations, the correct nature of thermal critical phenomena is captured by any cluster size. In the two exemplary cases of the two-dimensional quantum Ising model and of two-dimensional quantum rotors, we study systematically the convergence of the cQMF approximation towards the exact result, and show that the convergence is typically linear or sublinear in the boundary-to-bulk ratio of the clusters as T →0 , while it becomes faster than linear as T grows. These results pave the way towards the development of semiclassical numerical approaches based on an approximate, yet systematically improved account of quantum correlations.
NASA Astrophysics Data System (ADS)
Yamada, Hiroki; Fukui, Takahiro
2004-02-01
We study Anderson localization of non-interacting random hopping fermions on bipartite lattices in two dimensions, focusing our attention to strong disorder features of the model. We concentrate ourselves on specific models with a linear dispersion in the vicinity of the band center, which can be described by a Dirac fermion in the continuum limit. Based on the recent renormalization group method developed by Carpentier and Le Doussal for the XY gauge glass model, we calculate the density of states, inverse participation ratios, and their spatial correlations. It turns out that their behavior is quite different from those expected within naive weak disorder approaches.
NASA Astrophysics Data System (ADS)
Tiwari, Vivek; Peters, William K.; Jonas, David M.
2017-10-01
Non-adiabatic vibrational-electronic resonance in the excited electronic states of natural photosynthetic antennas drastically alters the adiabatic framework, in which electronic energy transfer has been conventionally studied, and suggests the possibility of exploiting non-adiabatic dynamics for directed energy transfer. Here, a generalized dimer model incorporates asymmetries between pigments, coupling to the environment, and the doubly excited state relevant for nonlinear spectroscopy. For this generalized dimer model, the vibrational tuning vector that drives energy transfer is derived and connected to decoherence between singly excited states. A correlation vector is connected to decoherence between the ground state and the doubly excited state. Optical decoherence between the ground and singly excited states involves linear combinations of the correlation and tuning vectors. Excitonic coupling modifies the tuning vector. The correlation and tuning vectors are not always orthogonal, and both can be asymmetric under pigment exchange, which affects energy transfer. For equal pigment vibrational frequencies, the nonadiabatic tuning vector becomes an anti-correlated delocalized linear combination of intramolecular vibrations of the two pigments, and the nonadiabatic energy transfer dynamics become separable. With exchange symmetry, the correlation and tuning vectors become delocalized intramolecular vibrations that are symmetric and antisymmetric under pigment exchange. Diabatic criteria for vibrational-excitonic resonance demonstrate that anti-correlated vibrations increase the range and speed of vibronically resonant energy transfer (the Golden Rule rate is a factor of 2 faster). A partial trace analysis shows that vibronic decoherence for a vibrational-excitonic resonance between two excitons is slower than their purely excitonic decoherence.
Tiwari, Vivek; Peters, William K; Jonas, David M
2017-10-21
Non-adiabatic vibrational-electronic resonance in the excited electronic states of natural photosynthetic antennas drastically alters the adiabatic framework, in which electronic energy transfer has been conventionally studied, and suggests the possibility of exploiting non-adiabatic dynamics for directed energy transfer. Here, a generalized dimer model incorporates asymmetries between pigments, coupling to the environment, and the doubly excited state relevant for nonlinear spectroscopy. For this generalized dimer model, the vibrational tuning vector that drives energy transfer is derived and connected to decoherence between singly excited states. A correlation vector is connected to decoherence between the ground state and the doubly excited state. Optical decoherence between the ground and singly excited states involves linear combinations of the correlation and tuning vectors. Excitonic coupling modifies the tuning vector. The correlation and tuning vectors are not always orthogonal, and both can be asymmetric under pigment exchange, which affects energy transfer. For equal pigment vibrational frequencies, the nonadiabatic tuning vector becomes an anti-correlated delocalized linear combination of intramolecular vibrations of the two pigments, and the nonadiabatic energy transfer dynamics become separable. With exchange symmetry, the correlation and tuning vectors become delocalized intramolecular vibrations that are symmetric and antisymmetric under pigment exchange. Diabatic criteria for vibrational-excitonic resonance demonstrate that anti-correlated vibrations increase the range and speed of vibronically resonant energy transfer (the Golden Rule rate is a factor of 2 faster). A partial trace analysis shows that vibronic decoherence for a vibrational-excitonic resonance between two excitons is slower than their purely excitonic decoherence.
Redshift-space distortions with the halo occupation distribution - II. Analytic model
NASA Astrophysics Data System (ADS)
Tinker, Jeremy L.
2007-01-01
We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ωm,σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.
New Models for Velocity/Pressure-Gradient Correlations in Turbulent Boundary Layers
NASA Astrophysics Data System (ADS)
Poroseva, Svetlana; Murman, Scott
2014-11-01
To improve the performance of Reynolds-Averaged Navier-Stokes (RANS) turbulence models, one has to improve the accuracy of models for three physical processes: turbulent diffusion, interaction of turbulent pressure and velocity fluctuation fields, and dissipative processes. The accuracy of modeling the turbulent diffusion depends on the order of a statistical closure chosen as a basis for a RANS model. When the Gram-Charlier series expansions for the velocity correlations are used to close the set of RANS equations, no assumption on Gaussian turbulence is invoked and no unknown model coefficients are introduced into the modeled equations. In such a way, this closure procedure reduces the modeling uncertainty of fourth-order RANS (FORANS) closures. Experimental and direct numerical simulation data confirmed the validity of using the Gram-Charlier series expansions in various flows including boundary layers. We will address modeling the velocity/pressure-gradient correlations. New linear models will be introduced for the second- and higher-order correlations applicable to two-dimensional incompressible wall-bounded flows. Results of models' validation with DNS data in a channel flow and in a zero-pressure gradient boundary layer over a flat plate will be demonstrated. A part of the material is based upon work supported by NASA under award NNX12AJ61A.
Algebraic approach to electronic spectroscopy and dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toutounji, Mohamad
Lie algebra, Zassenhaus, and parameter differentiation techniques are utilized to break up the exponential of a bilinear Hamiltonian operator into a product of noncommuting exponential operators by the virtue of the theory of Wei and Norman [J. Math. Phys. 4, 575 (1963); Proc. Am. Math. Soc., 15, 327 (1964)]. There are about three different ways to find the Zassenhaus exponents, namely, binomial expansion, Suzuki formula, and q-exponential transformation. A fourth, and most reliable method, is provided. Since linearly displaced and distorted (curvature change upon excitation/emission) Hamiltonian and spin-boson Hamiltonian may be classified as bilinear Hamiltonians, the presented algebraic algorithm (exponentialmore » operator disentanglement exploiting six-dimensional Lie algebra case) should be useful in spin-boson problems. The linearly displaced and distorted Hamiltonian exponential is only treated here. While the spin-boson model is used here only as a demonstration of the idea, the herein approach is more general and powerful than the specific example treated. The optical linear dipole moment correlation function is algebraically derived using the above mentioned methods and coherent states. Coherent states are eigenvectors of the bosonic lowering operator a and not of the raising operator a{sup +}. While exp(a{sup +}) translates coherent states, exp(a{sup +}a{sup +}) operation on coherent states has always been a challenge, as a{sup +} has no eigenvectors. Three approaches, and the results, of that operation are provided. Linear absorption spectra are derived, calculated, and discussed. The linear dipole moment correlation function for the pure quadratic coupling case is expressed in terms of Legendre polynomials to better show the even vibronic transitions in the absorption spectrum. Comparison of the present line shapes to those calculated by other methods is provided. Franck-Condon factors for both linear and quadratic couplings are exactly accounted for by the herein calculated linear absorption spectra. This new methodology should easily pave the way to calculating the four-point correlation function, F({tau}{sub 1},{tau}{sub 2},{tau}{sub 3},{tau}{sub 4}), of which the optical nonlinear response function may be procured, as evaluating F({tau}{sub 1},{tau}{sub 2},{tau}{sub 3},{tau}{sub 4}) is only evaluating the optical linear dipole moment correlation function iteratively over different time intervals, which should allow calculating various optical nonlinear temporal/spectral signals.« less
An analysis of scatter decomposition
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1990-01-01
A formal analysis of a powerful mapping technique known as scatter decomposition is presented. Scatter decomposition divides an irregular computational domain into a large number of equal sized pieces, and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to formally explain why, and when scatter decomposition works. The first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally it is shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
Wang, Taofeng; Li, Guangwu; Zhu, Liping; ...
2016-01-08
The dependence of correlations of neutron multiplicity ν and γ-ray multiplicity M γ in spontaneous fission of 252Cf on fragment mass A* and total kinetic energy (TKE) have been investigated by employing the ratio of M γ/ν and the form of M γ(ν). We show for the first time that M γ and ν have a complex correlation for heavy fragment masses, while there is a positive dependence of Mγ for light fragment masses and for near-symmetric mass splits. The ratio M γ/ν exhibits strong shell effects for neutron magic number N=50 and near doubly magic number shell closure atmore » Z=50 and N=82. The γ-ray multiplicity Mγ has a maximum for TKE=165-170 MeV. Above 170 MeV M γ(TKE) is approximately linear, while it deviates significantly from a linear dependence at lower TKE. The correlation between the average neutron and γ-ray multiplicities can be partly reproduced by model calculations.« less
Order-constrained linear optimization.
Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P
2017-11-01
Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Bischoff, Jan-Moritz; Jeckelmann, Eric
2017-11-01
We improve the density-matrix renormalization group (DMRG) evaluation of the Kubo formula for the zero-temperature linear conductance of one-dimensional correlated systems. The dynamical DMRG is used to compute the linear response of a finite system to an applied ac source-drain voltage; then the low-frequency finite-system response is extrapolated to the thermodynamic limit to obtain the dc conductance of an infinite system. The method is demonstrated on the one-dimensional spinless fermion model at half filling. Our method is able to replicate several predictions of the Luttinger liquid theory such as the renormalization of the conductance in a homogeneous conductor, the universal effects of a single barrier, and the resonant tunneling through a double barrier.
NASA Astrophysics Data System (ADS)
Reinert, K. A.
The use of linear decision rules (LDR) and chance constrained programming (CCP) to optimize the performance of wind energy conversion clusters coupled to storage systems is described. Storage is modelled by LDR and output by CCP. The linear allocation rule and linear release rule prescribe the size and optimize a storage facility with a bypass. Chance constraints are introduced to explicitly treat reliability in terms of an appropriate value from an inverse cumulative distribution function. Details of deterministic programming structure and a sample problem involving a 500 kW and a 1.5 MW WECS are provided, considering an installed cost of $1/kW. Four demand patterns and three levels of reliability are analyzed for optimizing the generator choice and the storage configuration for base load and peak operating conditions. Deficiencies in ability to predict reliability and to account for serial correlations are noted in the model, which is concluded useful for narrowing WECS design options.
Reduced-Drift Virtual Gyro from an Array of Low-Cost Gyros.
Vaccaro, Richard J; Zaki, Ahmed S
2017-02-11
A Kalman filter approach for combining the outputs of an array of high-drift gyros to obtain a virtual lower-drift gyro has been known in the literature for more than a decade. The success of this approach depends on the correlations of the random drift components of the individual gyros. However, no method of estimating these correlations has appeared in the literature. This paper presents an algorithm for obtaining the statistical model for an array of gyros, including the cross-correlations of the individual random drift components. In order to obtain this model, a new statistic, called the "Allan covariance" between two gyros, is introduced. The gyro array model can be used to obtain the Kalman filter-based (KFB) virtual gyro. Instead, we consider a virtual gyro obtained by taking a linear combination of individual gyro outputs. The gyro array model is used to calculate the optimal coefficients, as well as to derive a formula for the drift of the resulting virtual gyro. The drift formula for the optimal linear combination (OLC) virtual gyro is identical to that previously derived for the KFB virtual gyro. Thus, a Kalman filter is not necessary to obtain a minimum drift virtual gyro. The theoretical results of this paper are demonstrated using simulated as well as experimental data. In experimental results with a 28-gyro array, the OLC virtual gyro has a drift spectral density 40 times smaller than that obtained by taking the average of the gyro signals.
NASA Astrophysics Data System (ADS)
Ma, J.; Xiao, X.; Zhang, Y.; Chen, B.; Zhao, B.
2017-12-01
Great significance exists in accurately estimating spatial-temporal patterns of gross primary production (GPP) because of its important role in global carbon cycle. Satellite-based light use efficiency (LUE) models are regarded as an efficient tool in simulating spatially time-sires GPP. However, the estimation of the accuracy of GPP simulations from LUE at both spatial and temporal scales is still a challenging work. In this study, we simulated GPP of vegetation in China during 2007-2014 using a LUE model (Vegetation Photosynthesis Model, VPM) based on MODIS (moderate-resolution imaging spectroradiometer) images of 8-day temporal and 500-m spatial resolutions and NCEP (National Center for Environmental Prediction) climate data. Global Ozone Monitoring Instrument 2 (GOME-2) solar-induced chlorophyll fluorescence (SIF) data were used to compare with VPM simulated GPP (GPPVPM) temporally and spatially using linear correlation analysis. Significant positive linear correlations exist between monthly GPPVPM and SIF data over both single year (2010) and multiple years (2007-2014) in China. Annual GPPVPM is significantly positive correlated with SIF (R2>0.43) spatially for all years during 2007-2014 and all seasons in 2010 (R2>0.37). GPP dynamic trends is high spatial-temporal heterogeneous in China during 2007-2014. The results of this study indicate that GPPVPM is temporally and spatially in line with SIF data, and space-borne SIF data have great potential in validating and parameterizing GPP estimation of LUE-based models.
NASA Astrophysics Data System (ADS)
Liu, Y.; Meng, X.; Guo, Z.; Zhang, C.; Nguyen, T. H.; Hu, D.; Ji, J.; Yang, X.
2017-12-01
Colloidal attachment on charge heterogeneous grains has significant environmental implications for transport of hazardous colloids, such as pathogens, in the aquifer, where iron, manganese, and aluminium oxide minerals are the major source of surface charge heterogeneity of the aquifer grains. A patchwise surface charge model is often used to describe the surface charge heterogeneity of the grains. In the patchwise model, the colloidal attachment efficiency is linearly correlated with the fraction of the favorable patches (θ=λ(θf - θu)+θu). However, our previous microfluidic study showed that the attachment efficiency of oocysts of Cryptosporidium parvum, a waterborne protozoan parasite, was not linear correlated with the fraction of the favorable patches (λ). In this study, we developed a pore scale model to simulate colloidal transport and attachment on charge heterogeneous grains. The flow field was simulated using the LBM method and colloidal transport and attachment were simulated using the Lagrange particle tracking method. The pore scale model was calibrated with experimental results of colloidal and oocyst transport in microfluidic devices and was then used to simulate oocyst transport in charge heterogeneous porous media under a variety of environmental relative conditions, i.e. the fraction of favorable patchwise, ionic strength, and pH. The results of the pore scale simulations were used to evaluate the effect of surface charge heterogeneity on upscaling of oocyst transport from pore to continuum scale and to develop an applicable correlation between colloidal attachment efficiency and the fraction of the favorable patches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aad, G.
2015-09-14
Correlations between the elliptic or triangular flow coefficients v m (m=2 or 3) and other flow harmonics v n (n=2 to 5) are measured using √s NN=2.76 TeV Pb+Pb collision data collected in 2010 by the ATLAS experiment at the LHC, corresponding to an integrated luminosity of 7 μb -1. The v m-v n correlations are measured in midrapidity as a function of centrality, and, for events within the same centrality interval, as a function of event ellipticity or triangularity defined in a forward rapidity region. For events within the same centrality interval, v 3 is found to be anticorrelatedmore » with v 2 and this anticorrelation is consistent with similar anticorrelations between the corresponding eccentricities, ε 2 and ε 3. However, it is observed that v 4 increases strongly with v 2, and v 5 increases strongly with both v 2 and v 3. The trend and strength of the vm-vn correlations for n=4 and 5 are found to disagree with ε m-ε n correlations predicted by initial-geometry models. Instead, these correlations are found to be consistent with the combined effects of a linear contribution to vn and a nonlinear term that is a function of v 2 2 or of v 2v 3, as predicted by hydrodynamic models. A simple two-component fit is used to separate these two contributions. The extracted linear and nonlinear contributions to v 4 and v 5 are found to be consistent with previously measured event-plane correlations.« less
Correlation among extinction efficiency and other parameters in an aggregate dust model
NASA Astrophysics Data System (ADS)
Dhar, Tanuj Kumar; Sekhar Das, Himadri
2017-10-01
We study the extinction properties of highly porous Ballistic Cluster-Cluster Aggregate dust aggregates in a wide range of complex refractive indices (1.4≤ n≤ 2.0, 0.001≤ k≤ 1.0) and wavelengths (0.11 {{μ }}{{m}}≤ {{λ }}≤ 3.4 {{μ }} m). An attempt has been made for the first time to investigate the correlation among extinction efficiency ({Q}{ext}), composition of dust aggregates (n,k), wavelength of radiation (λ) and size parameter of the monomers (x). If k is fixed at any value between 0.001 and 1.0, {Q}{ext} increases with increase of n from 1.4 to 2.0. {Q}{ext} and n are correlated via linear regression when the cluster size is small, whereas the correlation is quadratic at moderate and higher sizes of the cluster. This feature is observed at all wavelengths (ultraviolet to optical to infrared). We also find that the variation of {Q}{ext} with n is very small when λ is high. When n is fixed at any value between 1.4 and 2.0, it is observed that {Q}{ext} and k are correlated via a polynomial regression equation (of degree 1, 2, 3 or 4), where the degree of the equation depends on the cluster size, n and λ. The correlation is linear for small size and quadratic/cubic/quartic for moderate and higher sizes. We have also found that {Q}{ext} and x are correlated via a polynomial regression (of degree 3, 4 or 5) for all values of n. The degree of regression is found to be n and k-dependent. The set of relations obtained from our work can be used to model interstellar extinction for dust aggregates in a wide range of wavelengths and complex refractive indices.
Turbulence modeling and experiments
NASA Technical Reports Server (NTRS)
Shabbir, Aamir
1992-01-01
The best way of verifying turbulence is to do a direct comparison between the various terms and their models. The success of this approach depends upon the availability of the data for the exact correlations (both experimental and DNS). The other approach involves numerically solving the differential equations and then comparing the results with the data. The results of such a computation will depend upon the accuracy of all the modeled terms and constants. Because of this it is sometimes difficult to find the cause of a poor performance by a model. However, such a calculation is still meaningful in other ways as it shows how a complete Reynolds stress model performs. Thirteen homogeneous flows are numerically computed using the second order closure models. We concentrate only on those models which use a linear (or quasi-linear) model for the rapid term. This, therefore, includes the Launder, Reece and Rodi (LRR) model; the isotropization of production (IP) model; and the Speziale, Sarkar, and Gatski (SSG) model. Which of the three models performs better is examined along with what are their weaknesses, if any. The other work reported deal with the experimental balances of the second moment equations for a buoyant plume. Despite the tremendous amount of activity toward the second order closure modeling of turbulence, very little experimental information is available about the budgets of the second moment equations. Part of the problem stems from our inability to measure the pressure correlations. However, if everything else appearing in these equations is known from the experiment, pressure correlations can be obtained as the closing terms. This is the closest we can come to in obtaining these terms from experiment, and despite the measurement errors which might be present in such balances, the resulting information will be extremely useful for the turbulence modelers. The purpose of this part of the work was to provide such balances of the Reynolds stress and heat flux equations for the buoyant plume.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, D; Usmani, N; Sloboda, R
Purpose: To characterize the movement of implanted brachytherapy seeds due to transrectal ultrasound probe-induced prostate deformation and to estimate the effects on prostate dosimetry. Methods: Implanted probe-in and probe-removed seed distributions were reconstructed for 10 patients using C-arm fluoroscopy imaging. The prostate was delineated on ultrasound and registered to the fluoroscopy seeds using a visible subset of seeds and residual needle tracks. A linear tensor and shearing model correlated the seed movement with position. The seed movement model was used to infer the underlying prostate deformation and to simulate the prostate contour without probe compression. Changes in prostate and surrogatemore » urethra dosimetry were calculated. Results: Seed movement patterns reflecting elastic decompression, lateral shearing, and rectal bending were observed. Elastic decompression was characterized by anterior-posterior expansion and superior-inferior and lateral contractions. For lateral shearing, anterior movement up to 6 mm was observed for extraprostatic seeds in the lateral peripheral region. The average intra-prostatic seed movement was 1.3 mm, and the residual after linear modeling was 0.6 mm. Prostate D90 increased by 4 Gy on average (8 Gy max) and was correlated with elastic decompression. For selected patients, lateral shearing resulted in differential change in D90 of 7 Gy between anterior and posterior quadrants, and increase in whole prostate D90 of 4 Gy. Urethra D10 increased by 4 Gy. Conclusion: Seed movement upon probe removal was characterized. The proposed model captured the linear correlation between seed movement and position. Whole prostate dose coverage increased slightly, due to the small but systematic seed movement associated with elastic decompression. Lateral shearing movement increased dose coverage in the anterior-lateral region, at the expense of the posterior-lateral region. The effect on whole prostate D90 was smaller due to the subset of peripheral seeds involved, but lateral shearing movement can have greater consequences for local dose coverage.« less
NASA Astrophysics Data System (ADS)
Ragavan, Anpalaki J.; Adams, Dean V.
2009-06-01
Equilibrium constants for modeling surface precipitation of trivalent metal cations ( M) onto hydrous ferric oxide and calcite were estimated from linear correlations of standard state Gibbs free energies of formation, ( ΔGf,MvX(ss)0) of the surface precipitates. The surface precipitation reactions were derived from Farley et. al. [K.J. Farley, D.A. Dzombak, F.M.M. Morel, J. Colloid Interface Sci. 106 (1985) 226] surface precipitation model, which are based on surface complexation model coupled with solid solution representation for surface precipitation on the solid surface. The ΔGf,MvX(ss)0 values were correlated through the following linear free energy relations ΔGf,M(OH)3(ss)0-791.70r=0.1587ΔGn,M0-1273.07 and ΔGf,M2(CO3)3(ss)0-197.241r=0.278ΔGn,M0-1431.27 where 'ss' stands for the end-member solid component of surface precipitate, ΔGf,MvX(ss)0 is in kJ/mol, r is the Shannon-Prewitt radius of M in a given coordination state (nm), and ΔGn,M0 is the non-solvation contribution to the Gibbs free energy of formation of the aqueous M ion. Results indicate that the above surface precipitation correlations are useful tools where experimental data are not available.
Correlation between Gas Bubble Formation and Hydrogen Evolution Reaction Kinetics at Nanoelectrodes.
Chen, Qianjin; Luo, Long
2018-04-17
We report the correlation between H 2 gas bubble formation potential and hydrogen evolution reaction (HER) activity for Au and Pt nanodisk electrodes (NEs). Microkinetic models were formulated to obtain the HER kinetic information for individual Au and Pt NEs. We found that the rate-determining steps for the HER at Au and Pt NEs were the Volmer step and the Heyrovsky step, respectively. More interestingly, the standard rate constant ( k 0 ) of the rate-determining step was found to vary over 2 orders of magnitude for the same type of NEs. The observed variations indicate the HER activity heterogeneity at the nanoscale. Furthermore, we discovered a linear relationship between bubble formation potential ( E bubble ) and log( k 0 ) with a slope of 125 mV/decade for both Au and Pt NEs. As log ( k 0 ) increases, E bubble shifts linearly to more positive potentials, meaning NEs with higher HER activities form H 2 bubbles at less negative potentials. Our theoretical model suggests that such linear relationship is caused by the similar critical bubble formation condition for Au and Pt NEs with varied sizes. Our results have potential implications for using gas bubble formation to evaluate the HER activity distribution of nanoparticles in an ensemble.
Extending local canonical correlation analysis to handle general linear contrasts for FMRI data.
Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar
2012-01-01
Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
Extending Local Canonical Correlation Analysis to Handle General Linear Contrasts for fMRI Data
Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar
2012-01-01
Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic. PMID:22461786
Louys, Julien; Meloro, Carlo; Elton, Sarah; Ditchfield, Peter; Bishop, Laura C
2015-01-01
We test the performance of two models that use mammalian communities to reconstruct multivariate palaeoenvironments. While both models exploit the correlation between mammal communities (defined in terms of functional groups) and arboreal heterogeneity, the first uses a multiple multivariate regression of community structure and arboreal heterogeneity, while the second uses a linear regression of the principal components of each ecospace. The success of these methods means the palaeoenvironment of a particular locality can be reconstructed in terms of the proportions of heavy, moderate, light, and absent tree canopy cover. The linear regression is less biased, and more precisely and accurately reconstructs heavy tree canopy cover than the multiple multivariate model. However, the multiple multivariate model performs better than the linear regression for all other canopy cover categories. Both models consistently perform better than randomly generated reconstructions. We apply both models to the palaeocommunity of the Upper Laetolil Beds, Tanzania. Our reconstructions indicate that there was very little heavy tree cover at this site (likely less than 10%), with the palaeo-landscape instead comprising a mixture of light and absent tree cover. These reconstructions help resolve the previous conflicting palaeoecological reconstructions made for this site. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of many applications. The production of continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorenz fitting and model-free methods. However, less of them apply these methods in the field of LIBS Technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smooth property characteristic. Experiment on the background correction simulation indicated that, the spline interpolation method acquired the largest signal-to-background ratio (SBR) over polynomial fitting, Lorenz fitting and model-free method after background correction. These background correction methods all acquire larger SBR values than that acquired before background correction (The SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273 respectively). After adding random noise with different kinds of signal-to-noise ratio to the spectrum, spline interpolation method acquires large SBR value, whereas polynomial fitting and model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results of Cu than those acquired before background correction (The linear correlation coefficient value before background correction is 0.9776. Moreover, the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940 respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the results of the quantitative analysis of Cu compared with polynomial fitting, Lorentz fitting and model-free methods, The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
Linear time series modeling of GPS-derived TEC observations over the Indo-Thailand region
NASA Astrophysics Data System (ADS)
Suraj, Puram Sai; Kumar Dabbakuti, J. R. K.; Chowdhary, V. Rajesh; Tripathi, Nitin K.; Ratnam, D. Venkata
2017-12-01
This paper proposes a linear time series model to represent the climatology of the ionosphere and to investigate the characteristics of hourly averaged total electron content (TEC). The GPS-TEC observation data at the Bengaluru international global navigation satellite system (GNSS) service (IGS) station (geographic 13.02°N , 77.57°E ; geomagnetic latitude 4.4°N ) have been utilized for processing the TEC data during an extended period (2009-2016) in the 24{th} solar cycle. Solar flux F10.7p index, geomagnetic Ap index, and periodic oscillation factors have been considered to construct a linear TEC model. It is evident from the results that solar activity effect on TEC is high. It reaches the maximum value (˜ 40 TECU) during the high solar activity (HSA) year (2014) and minimum value (˜ 15 TECU) during the low solar activity (LSA) year (2009). The larger magnitudes of semiannual variations are observed during the HSA periods. The geomagnetic effect on TEC is relatively low, with the highest being ˜ 4 TECU (March 2015). The magnitude of periodic variations can be seen more significantly during HSA periods (2013-2015) and less during LSA periods (2009-2011). The correlation coefficient of 0.89 between the observations and model-based estimations has been found. The RMSE between the observed TEC and model TEC values is 4.0 TECU (linear model) and 4.21 TECU (IRI2016 Model). Further, the linear TEC model has been validated at different latitudes over the northern low-latitude region. The solar component (F10.7p index) value decreases with an increase in latitude. The magnitudes of the periodic component become less significant with the increase in latitude. The influence of geomagnetic component becomes less significant at Lucknow GNSS station (26.76°N, 80.88°E) when compared to other GNSS stations. The hourly averaged TEC values have been considered and ionospheric features are well recovered with linear TEC model.
Estimation of stature from sternum - Exploring the quadratic models.
Saraf, Ashish; Kanchan, Tanuj; Krishan, Kewal; Ateriya, Navneet; Setia, Puneet
2018-04-14
Identification of the dead is significant in examination of unknown, decomposed and mutilated human remains. Establishing the biological profile is the central issue in such a scenario, and stature estimation remains one of the important criteria in this regard. The present study was undertaken to estimate stature from different parts of the sternum. A sample of 100 sterna was obtained from individuals during the medicolegal autopsies. Length of the deceased and various measurements of the sternum were measured. Student's t-test was performed to find the sex differences in stature and sternal measurements included in the study. Correlation between stature and sternal measurements were analysed using Karl Pearson's correlation, and linear and quadratic regression models were derived. All the measurements were found to be significantly larger in males than females. Stature correlated best with the combined length of sternum, among males (R = 0.894), females (R = 0.859), and for the total sample (R = 0.891). The study showed that the models derived for stature estimation from combined length of sternum are likely to give the most accurate estimates of stature in forensic case work when compared to manubrium and mesosternum. Accuracy of stature estimation further increased with quadratic models derived for the mesosternum among males and combined length of sternum among males and females when compared to linear regression models. Future studies in different geographical locations and a larger sample size are proposed to confirm the study observations. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Multivariate meta-analysis using individual participant data
Riley, R. D.; Price, M. J.; Jackson, D.; Wardle, M.; Gueyffier, F.; Wang, J.; Staessen, J. A.; White, I. R.
2016-01-01
When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment–covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. PMID:26099484
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dick, D; Zhao, W; Wu, X
2016-06-15
Purpose: To investigate the feasibility of tracking abdominal tumors without the use of gold fiducial markers Methods: In this simulation study, an abdominal 4DCT dataset, acquired previously and containing 8 phases of the breathing cycle, was used as the testing data. Two sets of DRR images (45 and 135 degrees) were generated for each phase. Three anatomical points along the lung-diaphragm interface on each of the Digital Reconstructed Radiograph(DRR) images were identified by cross-correlation. The gallbladder, which simulates the tumor, was contoured for each phase of the breathing cycle and the corresponding centroid values serve as the measured center ofmore » the tumor. A linear model was created to correlate the diaphragm’s disparity of the three identified anatomical points with the center of the tumor. To verify the established linear model, we sequentially removed one phase of the data (i.e., 3 anatomical points and the corresponding tumor center) and created new linear models with the remaining 7 phases. Then we substituted the eliminated phase data (disparities of the 3 anatomical points) into the corresponding model to compare model-generated tumor center and the measured tumor center. Results: The maximum difference between the modeled and the measured centroid values across the 8 phases were 0.72, 0.29 and 0.30 pixels in the x, y and z directions respectively, which yielded a maximum mean-squared-error value of 0.75 pixels. The outcomes of the verification process, by eliminating each phase, produced mean-squared-errors ranging from 0.41 to 1.28 pixels. Conclusion: Gold fiducial markers, requiring surgical procedures to be implanted, are conventionally used in radiation therapy. The present work shows the feasibility of a fiducial-less tracking method for localizing abdominal tumors. Through developed diaphragm disparity analysis, the established linear model was verified with clinically accepted errors. The tracking method in real time under different radiation therapy platforms will be further investigated.« less
Copula Entropy coupled with Wavelet Neural Network Model for Hydrological Prediction
NASA Astrophysics Data System (ADS)
Wang, Yin; Yue, JiGuang; Liu, ShuGuang; Wang, Li
2018-02-01
Artificial Neural network(ANN) has been widely used in hydrological forecasting. in this paper an attempt has been made to find an alternative method for hydrological prediction by combining Copula Entropy(CE) with Wavelet Neural Network(WNN), CE theory permits to calculate mutual information(MI) to select Input variables which avoids the limitations of the traditional linear correlation(LCC) analysis. Wavelet analysis can provide the exact locality of any changes in the dynamical patterns of the sequence Coupled with ANN Strong non-linear fitting ability. WNN model was able to provide a good fit with the hydrological data. finally, the hybrid model(CE+WNN) have been applied to daily water level of Taihu Lake Basin, and compared with CE ANN, LCC WNN and LCC ANN. Results showed that the hybrid model produced better results in estimating the hydrograph properties than the latter models.
Xiong, Jianyin; Yang, Tao; Tan, Jianwei; Li, Lan; Ge, Yunshan
2015-01-01
The steady state VOC concentration in automobile cabin is taken as a good indicator to characterize the material emission behaviors and evaluate the vehicular air quality. Most studies in this field focus on experimental investigation while theoretical analysis is lacking. In this paper we firstly develop a simplified physical model to describe the VOC emission from automobile materials, and then derive a theoretical correlation between the steady state cabin VOC concentration (C a) and temperature (T), which indicates that the logarithm of C a/T 0.75 is in a linear relationship with 1/T. Experiments of chemical emissions in three car cabins at different temperatures (24°C, 29°C, 35°C) were conducted. Eight VOCs specified in the Chinese National Standard GB/T 27630–2011 were taken for analysis. The good agreement between the correlation and experimental results from our tests, as well as the data taken from literature demonstrates the effectiveness of the derived correlation. Further study indicates that the slope and intercept of the correlation follows linear association. With the derived correlation, the steady state cabin VOC concentration different from the test conditions can be conveniently obtained. This study should be helpful for analyzing temperature-dependent emission phenomena in automobiles and predicting associated health risks. PMID:26452146
Effect of Malmquist bias on correlation studies with IRAS data base
NASA Technical Reports Server (NTRS)
Verter, Frances
1993-01-01
The relationships between galaxy properties in the sample of Trinchieri et al. (1989) are reexamined with corrections for Malmquist bias. The linear correlations are tested and linear regressions are fit for log-log plots of L(FIR), L(H-alpha), and L(B) as well as ratios of these quantities. The linear correlations for Malmquist bias are corrected using the method of Verter (1988), in which each galaxy observation is weighted by the inverse of its sampling volume. The linear regressions are corrected for Malmquist bias by a new method invented here in which each galaxy observation is weighted by its sampling volume. The results of correlation and regressions among the sample are significantly changed in the anticipated sense that the corrected correlation confidences are lower and the corrected slopes of the linear regressions are lower. The elimination of Malmquist bias eliminates the nonlinear rise in luminosity that has caused some authors to hypothesize additional components of FIR emission.
The role of spurious correlation in the development of a komatiite alteration model
NASA Astrophysics Data System (ADS)
Butler, John C.
1986-03-01
Current procedures for assessing the degree of alteration in komatiites stress the construction of variation diagrams in which ratios of molecular proportions of the oxides are the axes of reference. For example, it has been argued that unaltered komatiites related to each other by olivine fractionation will display a linear variation with a slope of 0.5 in the space defined by [SiO2/TiO2] and [(MgO+FeO)/TiO2]. Extensive metasomatism is expected to destroy such a consistent pattern. Previous workers have tended to make use of ratios that have a common denominator. It has been known for a long time that ratios formed from uncorrelated variables will be correlated (a so-called spurious correlation) if both ratios have a common denominator. The magnitude of this spurious correlation is a function of the coefficients of variation of the measured amounts of the variables. If the denominator component has a coefficient of variation that is larger than those of the numerator components, the spurious correlation will be close to unity; that is, there will be nearly a straight-line relationship. As a demonstration, a fictitious data set has been simulated so that the means and variances of SiO2, TiO2, and (MgO + FeO) match those of an observed data set but the components themselves are uncorrelated. A plot of (SiO2/TiO2) versus [(MgO + FeO)/TiO2] of these simulated data produces a distribution of points that appears every bit as convincing an illustration of the lack of significant metasomatism as does the plot of the observed data. The assessment of the strength of linear association is a test of the observed correlation against an expected value (the null value) of zero. When a spurious correlation arises as a result of the formulation of ratios with a common denominator, zero is clearly an inappropriate choice as the null. It can be argued that the spurious correlation is, in fact, a more suitable null value. An analysis of komatiites from Gorgona Island and the Barberton suite reveals that the strong linear association could have been produced by forming ratios from uncorrelated starting chemical components. Ratios without parts in common are to be preferred in the construction of petrogenetic models.
Entropy Conservation of Linear Dilaton Black Holes in Quantum Corrected Hawking Radiation
NASA Astrophysics Data System (ADS)
Sakalli, I.; Halilsoy, M.; Pasaoglu, H.
2011-10-01
It has been shown recently that information is lost in the Hawking radiation of the linear dilaton black holes in various theories when applying the tunneling formalism of Parikh and Wilczek without considering quantum gravity effects. In this paper, we recalculate the emission probability by taking into account the log-area correction to the Bekenstein-Hawking entropy and the statistical correlation between quanta emitted. The crucial role of the quantum gravity effects on the information leakage and black hole remnant is highlighted. The entropy conservation of the linear dilaton black holes is discussed in detail. We also model the remnant as an extreme linear dilaton black hole with a pointlike horizon in order to show that such a remnant cannot radiate and its temperature becomes zero. In summary, we show that the information can also leak out of the linear dilaton black holes together with preserving unitarity in quantum mechanics.
Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices
NASA Astrophysics Data System (ADS)
Passemier, Damien; McKay, Matthew R.; Chen, Yang
2015-07-01
Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce an correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.
Caucasian facial L* shifts may communicate anti-ageing efficacy.
Zedayko, T; Azriel, M; Kollias, N
2011-10-01
An ageing study was conducted to capture skin colour parameters in the CIELab system from Caucasians of both genders and all available adult ages. This study produced a linear correlation between L* and age for a Caucasian population between 20 and 59 years of age as follows: (L* value) = -0.13 × (Age in years) + 63.01. Previous studies have addressed age-related changes in skin colour. This work presents a novel consumer correlated quantitative linear model of skin brightness by which to communicate age-related changes. Two product assessment studies are also presented here, demonstrating the ability of anti-ageing products to deliver on objective and subjective improvements in skin brightness. It was determined to be possible to use the fundamental Caucasian L*-age correlation to describe product benefits in a novel quantitative and consumer-relevant fashion, through the depiction of a 'years back' calculation. © 2011 Johnson & Johnson Consumer Products Company. ICS © 2011 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Epiaortic fat pad area: A novel index for the dimensions of the ascending aorta.
Toufan, Mehrnoush; Pourafkari, Leili; Boudagh, Shabnam; Nader, Nader D
2016-06-01
We sought to investigate the possible association between the area of the epiaortic fat pad (EAFP) and dimensions of the ascending aorta. A total of 193 individuals underwent transthoracic echocardiography (TTE) prospectively. The area of the EAFP was traced anterior to the aortic root and correlated with the diameter of the aorta. The mean area of the EAFP was 5.16 ± 2.28 cm(2) Absolute and indexed dimensions of the ascending aorta had a significant correlation with the area of the EAFP (p <0.001 for all). In a multivariate linear regression model, age >65 (p <0.001), body mass index >30 kg/m(2) (p = 0.02) and a history of hyperlipidemia (p = 0.003) were identified as independent predictors of the area for EAFP. In conclusion, both the absolute and indexed diameters of the ascending aorta at the different segments that directly come into contact with the EAFP linearly correlate with the area of the EAFP measured by TTE. © The Author(s) 2016.
Clawson, Ashley H; McQuaid, Elizabeth L; Dunsiger, Shira; Bartlett, Kiera; Borrelli, Belinda
2018-04-01
This study examines the longitudinal relationships between child smoking and secondhand smoke exposure (SHSe). Participants were 222 parent-child dyads. The parents smoked, had a child with (48%) or without asthma, and were enrolled in a smoking/health intervention. Parent-reported child SHSe was measured at baseline and 4, 6, and 12-month follow-ups; self-reported child smoking was assessed at these points and at 2-months. A parallel process growth model was used. Baseline child SHSe and smoking were correlated (r = 0.30). Changes in child SHSe and child smoking moved in tandem as evidenced by a correlation between the linear slopes of child smoking and SHSe (r = 0.32), and a correlation between the linear slope of child smoking and the quadratic slope of child SHSe (r = - 0.44). Results may inform interventions with the potential to reduce child SHSe and smoking among children at increased risk due to their exposure to parental smoking.
The effect of topography of upper-mantle discontinuities on SS precursors
NASA Astrophysics Data System (ADS)
Koroni, Maria; Trampert, Jeannot
2016-01-01
Using the spectral-element method, we explored the effect of topography of upper-mantle discontinuities on the traveltimes of SS precursors recorded on transverse component seismograms. The latter are routinely used to infer the topography of mantle transition zone discontinuities. The step from precursory traveltimes to topographic changes is mainly done using linearised ray theory, or sometimes using finite-frequency kernels. We simulated exact seismograms in 1-D and 3-D elastic models of the mantle. In a second simulation, we added topography to the discontinuities. We compared the waveforms obtained with and without topography by cross correlation of the SS precursors. Since we did not add noise, the precursors are visible in individual seismograms without the need of stacking. The resulting time anomalies were then converted into topographic variations and compared to the original topographic models. Based on the correlation between initial and inferred models, and provided that ray coverage is good, we found that linearised ray theory gives a relatively good idea on the location of the uplifts and depressions of the discontinuities. It seriously underestimates the amplitude of the topographic variations by a factor ranging between 2 and 7. Real data depend on the 3-D elastic structure and the topography. All studies to date correct for the 3-D elastic effects assuming that the traveltimes can be linearly decomposed into a structure and a discontinuity part. We found a strong non-linearity in this decomposition which cannot be modelled without a fully non-linear inversion for elastic structure and discontinuities simultaneously.
Kernel canonical-correlation Granger causality for multiple time series
NASA Astrophysics Data System (ADS)
Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu
2011-04-01
Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.
NASA Technical Reports Server (NTRS)
Hairr, John W.; Dorris, William J.; Ingram, J. Edward; Shah, Bharat M.
1993-01-01
Interactive Stiffened Panel Analysis (ISPAN) modules, written in FORTRAN, were developed to provide an easy to use tool for creating finite element models of composite material stiffened panels. The modules allow the user to interactively construct, solve and post-process finite element models of four general types of structural panel configurations using only the panel dimensions and properties as input data. Linear, buckling and post-buckling solution capability is provided. This interactive input allows rapid model generation and solution by non finite element users. The results of a parametric study of a blade stiffened panel are presented to demonstrate the usefulness of the ISPAN modules. Also, a non-linear analysis of a test panel was conducted and the results compared to measured data and previous correlation analysis.
Carbonell, Felix; Bellec, Pierre
2011-01-01
Abstract The influence of the global average signal (GAS) on functional-magnetic resonance imaging (fMRI)–based resting-state functional connectivity is a matter of ongoing debate. The global average fluctuations increase the correlation between functional systems beyond the correlation that reflects their specific functional connectivity. Hence, removal of the GAS is a common practice for facilitating the observation of network-specific functional connectivity. This strategy relies on the implicit assumption of a linear-additive model according to which global fluctuations, irrespective of their origin, and network-specific fluctuations are super-positioned. However, removal of the GAS introduces spurious negative correlations between functional systems, bringing into question the validity of previous findings of negative correlations between fluctuations in the default-mode and the task-positive networks. Here we present an alternative method for estimating global fluctuations, immune to the complications associated with the GAS. Principal components analysis was applied to resting-state fMRI time-series. A global-signal effect estimator was defined as the principal component (PC) that correlated best with the GAS. The mean correlation coefficient between our proposed PC-based global effect estimator and the GAS was 0.97±0.05, demonstrating that our estimator successfully approximated the GAS. In 66 out of 68 runs, the PC that showed the highest correlation with the GAS was the first PC. Since PCs are orthogonal, our method provides an estimator of the global fluctuations, which is uncorrelated to the remaining, network-specific fluctuations. Moreover, unlike the regression of the GAS, the regression of the PC-based global effect estimator does not introduce spurious anti-correlations beyond the decrease in seed-based correlation values allowed by the assumed additive model. After regressing this PC-based estimator out of the original time-series, we observed robust anti-correlations between resting-state fluctuations in the default-mode and the task-positive networks. We conclude that resting-state global fluctuations and network-specific fluctuations are uncorrelated, supporting a Resting-State Linear-Additive Model. In addition, we conclude that the network-specific resting-state fluctuations of the default-mode and task-positive networks show artifact-free anti-correlations. PMID:22444074
Kendall, G M; Wakeford, R; Athanson, M; Vincent, T J; Carter, E J; McColl, N P; Little, M P
2016-03-01
Gamma radiation from natural sources (including directly ionising cosmic rays) is an important component of background radiation. In the present paper, indoor measurements of naturally occurring gamma rays that were undertaken as part of the UK Childhood Cancer Study are summarised, and it is shown that these are broadly compatible with an earlier UK National Survey. The distribution of indoor gamma-ray dose rates in Great Britain is approximately normal with mean 96 nGy/h and standard deviation 23 nGy/h. Directly ionising cosmic rays contribute about one-third of the total. The expanded dataset allows a more detailed description than previously of indoor gamma-ray exposures and in particular their geographical variation. Various strategies for predicting indoor natural background gamma-ray dose rates were explored. In the first of these, a geostatistical model was fitted, which assumes an underlying geologically determined spatial variation, superimposed on which is a Gaussian stochastic process with Matérn correlation structure that models the observed tendency of dose rates in neighbouring houses to correlate. In the second approach, a number of dose-rate interpolation measures were first derived, based on averages over geologically or administratively defined areas or using distance-weighted averages of measurements at nearest-neighbour points. Linear regression was then used to derive an optimal linear combination of these interpolation measures. The predictive performances of the two models were compared via cross-validation, using a randomly selected 70 % of the data to fit the models and the remaining 30 % to test them. The mean square error (MSE) of the linear-regression model was lower than that of the Gaussian-Matérn model (MSE 378 and 411, respectively). The predictive performance of the two candidate models was also evaluated via simulation; the OLS model performs significantly better than the Gaussian-Matérn model.
Ghosh, Animesh; Bhaumik, Uttam Kumar; Bose, Anirbandeep; Mandal, Uttam; Gowda, Veeran; Chatterjee, Bappaditya; Chakrabarty, Uday Sankar; Pal, Tapan Kumar
2008-10-01
Defining a quantitative and reliable relationship between in vitro drug release and in vivo absorption is highly desired for rational development, optimization, and evaluation of controlled-release dosage forms and manufacturing process. During the development of once daily extended-release (ER) tablet of glipizide, a predictive in vitro drug release method was designed and statistically evaluated using three formulations with varying release rates. In order to establish internally and externally validated level A in vitro-in vivo correlation (IVIVC), a total of three different ER formulations of glipizide were used to evaluate a linear IVIVC model based on the in vitro test method. For internal validation, a single-dose four-way cross over study (n=6) was performed using fast-, moderate-, and slow-releasing ER formulations and an immediate-release (IR) of glipizide as reference. In vitro release rate data were obtained for each formulation using the United States Pharmacopeia (USP) apparatus II, paddle stirrer at 50 and 100 rev. min(-1) in 0.1 M hydrochloric acid (HCl) and pH 6.8 phosphate buffer. The f(2) metric (similarity factor) was used to analyze the dissolution data. The formulations were compared using area under the plasma concentration-time curve, AUC(0-infinity), time to reach peak plasma concentration, T(max), and peak plasma concentration, C(max), while correlation was determined between in vitro release and in vivo absorption. A linear correlation model was developed using percent absorbed data versus percent dissolved from the three formulations. Predicted glipizide concentrations were obtained by convolution of the in vivo absorption rates. Prediction errors were estimated for C(max) and AUC(0-infinity) to determine the validity of the correlation. Apparatus II, pH 6.8 at 100 rev. min(-1) was found to be the most discriminating dissolution method. Linear regression analysis of the mean percentage of dose absorbed versus the mean percentage of in vitro release resulted in a significant correlation (r(2)>or=0.9) for the three formulations.
Minati, Ludovico; Chiesa, Pietro; Tabarelli, Davide; D'Incerti, Ludovico
2015-01-01
In this paper, the topographical relationship between functional connectivity (intended as inter-regional synchronization), spectral and non-linear dynamical properties across cortical areas of the healthy human brain is considered. Based upon functional MRI acquisitions of spontaneous activity during wakeful idleness, node degree maps are determined by thresholding the temporal correlation coefficient among all voxel pairs. In addition, for individual voxel time-series, the relative amplitude of low-frequency fluctuations and the correlation dimension (D2), determined with respect to Fourier amplitude and value distribution matched surrogate data, are measured. Across cortical areas, high node degree is associated with a shift towards lower frequency activity and, compared to surrogate data, clearer saturation to a lower correlation dimension, suggesting presence of non-linear structure. An attempt to recapitulate this relationship in a network of single-transistor oscillators is made, based on a diffusive ring (n = 90) with added long-distance links defining four extended hub regions. Similarly to the brain data, it is found that oscillators in the hub regions generate signals with larger low-frequency cycle amplitude fluctuations and clearer saturation to a lower correlation dimension compared to surrogates. The effect emerges more markedly close to criticality. The homology observed between the two systems despite profound differences in scale, coupling mechanism and dynamics appears noteworthy. These experimental results motivate further investigation into the heterogeneity of cortical non-linear dynamics in relation to connectivity and underline the ability for small networks of single-transistor oscillators to recreate collective phenomena arising in much more complex biological systems, potentially representing a future platform for modelling disease-related changes. PMID:25833429
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minati, Ludovico, E-mail: lminati@ieee.org, E-mail: ludovico.minati@unitn.it, E-mail: lminati@istituto-besta.it; Center for Mind/Brain Sciences, University of Trento, Trento; Chiesa, Pietro
In this paper, the topographical relationship between functional connectivity (intended as inter-regional synchronization), spectral and non-linear dynamical properties across cortical areas of the healthy human brain is considered. Based upon functional MRI acquisitions of spontaneous activity during wakeful idleness, node degree maps are determined by thresholding the temporal correlation coefficient among all voxel pairs. In addition, for individual voxel time-series, the relative amplitude of low-frequency fluctuations and the correlation dimension (D{sub 2}), determined with respect to Fourier amplitude and value distribution matched surrogate data, are measured. Across cortical areas, high node degree is associated with a shift towards lower frequencymore » activity and, compared to surrogate data, clearer saturation to a lower correlation dimension, suggesting presence of non-linear structure. An attempt to recapitulate this relationship in a network of single-transistor oscillators is made, based on a diffusive ring (n = 90) with added long-distance links defining four extended hub regions. Similarly to the brain data, it is found that oscillators in the hub regions generate signals with larger low-frequency cycle amplitude fluctuations and clearer saturation to a lower correlation dimension compared to surrogates. The effect emerges more markedly close to criticality. The homology observed between the two systems despite profound differences in scale, coupling mechanism and dynamics appears noteworthy. These experimental results motivate further investigation into the heterogeneity of cortical non-linear dynamics in relation to connectivity and underline the ability for small networks of single-transistor oscillators to recreate collective phenomena arising in much more complex biological systems, potentially representing a future platform for modelling disease-related changes.« less
NASA Astrophysics Data System (ADS)
Boehm, Holger F.; Link, Thomas M.; Monetti, Roberto A.; Mueller, Dirk; Rummeny, Ernst J.; Raeth, Christoph W.
2005-04-01
Osteoporosis is a metabolic bone disease leading to de-mineralization and increased risk of fracture. The two major factors that determine the biomechanical competence of bone are the degree of mineralization and the micro-architectural integrity. Today, modern imaging modalities (high resolution MRI, micro-CT) are capable of depicting structural details of trabecular bone tissue. From the image data, structural properties obtained by quantitative measures are analysed with respect to the presence of osteoporotic fractures of the spine (in-vivo) or correlated with biomechanical strength as derived from destructive testing (in-vitro). Fairly well established are linear structural measures in 2D that are originally adopted from standard histo-morphometry. Recently, non-linear techniques in 2D and 3D based on the scaling index method (SIM), the standard Hough transform (SHT), and the Minkowski Functionals (MF) have been introduced, which show excellent performance in predicting bone strength and fracture risk. However, little is known about the performance of the various parameters with respect to monitoring structural changes due to progression of osteoporosis or as a result of medical treatment. In this contribution, we generate models of trabecular bone with pre-defined structural properties which are exposed to simulated osteoclastic activity. We apply linear and non-linear texture measures to the models and analyse their performance with respect to detecting architectural changes. This study demonstrates, that the texture measures are capable of monitoring structural changes of complex model data. The diagnostic potential varies for the different parameters and is found to depend on the topological composition of the model and initial "bone density". In our models, non-linear texture measures tend to react more sensitively to small structural changes than linear measures. Best performance is observed for the 3rd and 4th Minkowski Functionals and for the scaling index method.
The symmetry energy {\\boldsymbol{\\gamma }} parameter of relativistic mean-field models
NASA Astrophysics Data System (ADS)
Dutra, Mariana; Lourenço, Odilon; Hen, Or; Piasetzky, Eliezer; Menezes, Débora P.
2018-05-01
The relativistic mean-field models tested in previous works against nuclear matter experimental values, critical parameters and macroscopic stellar properties are revisited and used in the evaluation of the symmetry energy γ parameter obtained in three different ways. We have checked that, independent of the choice made to calculate the γ values, a trend of linear correlation is observed between γ and the symmetry energy ({{\\mathscr{S}}}0) and a more clear linear relationship is established between γ and the slope of the symmetry energy (L 0). These results directly contribute to the arising of other linear correlations between γ and the neutron star radii of {R}1.0 and {R}1.4, in agreement with recent findings. Finally, we have found that short-range correlations induce two specific parametrizations, namely, IU-FSU and DD-MEδ, simultaneously compatible with the neutron star mass constraint of 1.93≤slant {M}{{\\max }}/{M}ȯ ≤slant 2.05 and with the overlap band for the {L}0× {{\\mathscr{S}}}0 region, to present γ in the range of γ =0.25+/- 0.05. This work is a part of the project INCT-FNA Proc. No. 464898/2014-5 and was partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Brazil under grants 300602/2009-0 and 306786/2014-1. E. P. acknowledges support from the Israel Science Foundation. O. H. acknowledges the U.S. Department of Energy Office of Science, Office of Nuclear Physics program under award number DE-FG02-94ER40818
Marrero-Ponce, Yovani; Medina-Marrero, Ricardo; Castillo-Garit, Juan A; Romero-Zaldivar, Vicente; Torrens, Francisco; Castro, Eduardo A
2005-04-15
A novel approach to bio-macromolecular design from a linear algebra point of view is introduced. A protein's total (whole protein) and local (one or more amino acid) linear indices are a new set of bio-macromolecular descriptors of relevance to protein QSAR/QSPR studies. These amino-acid level biochemical descriptors are based on the calculation of linear maps on Rn[f k(xmi):Rn-->Rn] in canonical basis. These bio-macromolecular indices are calculated from the kth power of the macromolecular pseudograph alpha-carbon atom adjacency matrix. Total linear indices are linear functional on Rn. That is, the kth total linear indices are linear maps from Rn to the scalar R[f k(xm):Rn-->R]. Thus, the kth total linear indices are calculated by summing the amino-acid linear indices of all amino acids in the protein molecule. A study of the protein stability effects for a complete set of alanine substitutions in the Arc repressor illustrates this approach. A quantitative model that discriminates near wild-type stability alanine mutants from the reduced-stability ones in a training series was obtained. This model permitted the correct classification of 97.56% (40/41) and 91.67% (11/12) of proteins in the training and test set, respectively. It shows a high Matthews correlation coefficient (MCC=0.952) for the training set and an MCC=0.837 for the external prediction set. Additionally, canonical regression analysis corroborated the statistical quality of the classification model (Rcanc=0.824). This analysis was also used to compute biological stability canonical scores for each Arc alanine mutant. On the other hand, the linear piecewise regression model compared favorably with respect to the linear regression one on predicting the melting temperature (tm) of the Arc alanine mutants. The linear model explains almost 81% of the variance of the experimental tm (R=0.90 and s=4.29) and the LOO press statistics evidenced its predictive ability (q2=0.72 and scv=4.79). Moreover, the TOMOCOMD-CAMPS method produced a linear piecewise regression (R=0.97) between protein backbone descriptors and tm values for alanine mutants of the Arc repressor. A break-point value of 51.87 degrees C characterized two mutant clusters and coincided perfectly with the experimental scale. For this reason, we can use the linear discriminant analysis and piecewise models in combination to classify and predict the stability of the mutant Arc homodimers. These models also permitted the interpretation of the driving forces of such folding process, indicating that topologic/topographic protein backbone interactions control the stability profile of wild-type Arc and its alanine mutants.
Modeling the use of microwave energy in sensing of moisture content in vidalia onions
USDA-ARS?s Scientific Manuscript database
Microwave moisture sensing provides a means to nondestructively determine the amount of water in materials. This is accomplished through the correlation of dielectric constant and loss factor with moisture content in the material. In this study, linear relationships between a density-independent fun...
Chen, Gang; Taylor, Paul A.; Shin, Yong-Wook; Reynolds, Richard C.; Cox, Robert W.
2016-01-01
It has been argued that naturalistic conditions in FMRI studies provide a useful paradigm for investigating perception and cognition through a synchronization measure, inter-subject correlation (ISC). However, one analytical stumbling block has been the fact that the ISC values associated with each single subject are not independent, and our previous paper (Chen et al., 2016) used simulations and analyses of real data to show that the methodologies adopted in the literature do not have the proper control for false positives. In the same paper, we proposed nonparametric subject-wise bootstrapping and permutation testing techniques for one and two groups, respectively, which account for the correlation structure, and these greatly outperformed the prior methods in controlling the false positive rate (FPR); that is, subject-wise bootstrapping (SWB) worked relatively well for both cases with one and two groups, and subject-wise permutation (SWP) testing was virtually ideal for group comparisons. Here we seek to explicate and adopt a parametric approach through linear mixed-effects (LME) modeling for studying the ISC values, building on the previous correlation framework, with the benefit that the LME platform offers wider adaptability, more powerful interpretations, and quality control checking capability than nonparametric methods. We describe both theoretical and practical issues involved in the modeling and the manner in which LME with crossed random effects (CRE) modeling is applied. A data-doubling step further allows us to conveniently track the subject index, and achieve easy implementations. We pit the LME approach against the best nonparametric methods, and find that the LME framework achieves proper control for false positives. The new LME methodologies are shown to be both efficient and robust, and they will be added as an additional option and settings in an existing open source program, 3dLME, in AFNI (http://afni.nimh.nih.gov). PMID:27751943
Describing a Strongly Correlated Model System with Density Functional Theory.
Kong, Jing; Proynov, Emil; Yu, Jianguo; Pachter, Ruth
2017-07-06
The linear chain of hydrogen atoms, a basic prototype for the transition from a metal to Mott insulator, is studied with a recent density functional theory model functional for nondynamic and strong correlation. The computed cohesive energy curve for the transition agrees well with accurate literature results. The variation of the electronic structure in this transition is characterized with a density functional descriptor that yields the atomic population of effectively localized electrons. These new methods are also applied to the study of the Peierls dimerization of the stretched even-spaced Mott insulator to a chain of H 2 molecules, a different insulator. The transitions among the two insulating states and the metallic state of the hydrogen chain system are depicted in a semiquantitative phase diagram. Overall, we demonstrate the capability of studying strongly correlated materials with a mean-field model at the fundamental level, in contrast to the general pessimistic view on such a feasibility.
NASA Astrophysics Data System (ADS)
Chen, Hua-cai; Chen, Xing-dan; Lu, Yong-jun; Cao, Zhi-qiang
2006-01-01
Near infrared (NIR) reflectance spectroscopy was used to develop a fast determination method for total ginsenosides in Ginseng (Panax Ginseng) powder. The spectra were analyzed with multiplicative signal correction (MSC) correlation method. The best correlative spectra region with the total ginsenosides content was 1660 nm~1880 nm and 2230nm~2380 nm. The NIR calibration models of ginsenosides were built with multiple linear regression (MLR), principle component regression (PCR) and partial least squares (PLS) regression respectively. The results showed that the calibration model built with PLS combined with MSC and the optimal spectrum region was the best one. The correlation coefficient and the root mean square error of correction validation (RMSEC) of the best calibration model were 0.98 and 0.15% respectively. The optimal spectrum region for calibration was 1204nm~2014nm. The result suggested that using NIR to rapidly determinate the total ginsenosides content in ginseng powder were feasible.
Smooth individual level covariates adjustment in disease mapping.
Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise
2018-05-01
Spatial models for disease mapping should ideally account for covariates measured both at individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregresssive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods to track such nonlinearity between individual level covariate and outcome in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension to the popular conditional autoregresssive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects where both individual and group level covariate effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Mahmoud, M; Yin, T; Brügemann, K; König, S
2017-03-01
A total of 31,396 females born from 2010 to 2013 in 43 large-scale Holstein-Friesian herds were phenotyped for calf and cow disease traits using a veterinarian diagnosis key. Calf diseases were general disease status (cGDS), calf diarrhea (cDIA), and calf respiratory disease (cRD) recorded from birth to 2 mo of age. Incidences were 0.48 for cGDS, 0.28 for cRD, and 0.21 for cDIA. Cow disease trait recording focused on the early period directly after calving in first parity, including the interval from 10 d before calving to 200 d in lactation. For cows, at least one entry for the respective disease implied a score = 1 (sick); otherwise, score = 0 (healthy). Corresponding cow diseases were first-lactation general disease status (flGDS), first-lactation diarrhea (flDIA), and first-lactation respiratory disease (flRD). Additional cow disease categories included mastitis (flMAST), claw disorders (flCLAW), female fertility disorders (flFF), and metabolic disorders (flMET). A further cow trait category considered first-lactation test-day production traits from official test-days 1 and 2 after calving. The genotype data set included 41,256 single nucleotide polymorphisms (SNP) from 9,388 females with phenotypes. Linear and generalized linear mixed models with a logit link-function were applied to Gaussian and categorical cow traits, respectively, considering the calf disease as a fixed effect. Most of the calf diseases were not significantly associated with the occurrence of any cow disease. By trend, increasing risks for the occurrence of cow diseases were observed for healthy calves, indicating mechanisms of disease resistance with aging. Also by trend, occurrence of calf diseases was associated with decreasing milk, protein, and fat yields. Univariate linear and threshold animal models were used to estimate heritabilities and breeding values (EBV) for all calf and cow traits. Heritabilities for cGDS and cRD were 0.06 and 0.07 for cDIA. Genetic correlations among all traits were estimated using linear-linear animal models in a series of bivariate runs. The genetic correlation between cDIA and cRD was 0.29. Apart from the genetic correlation between flRD with cGDS (-0.38), EBV correlations and genetic correlations between calf diseases with all cow traits were close to zero. Genome-wide association studies were applied to estimate SNP effects for cRD and cDIA, and for the corresponding traits observed in cows (flRD and flDIA). Different significant SNP markers contributed to cDIA and flDIA, or to cRD and flRD. The average correlation coefficient between cRD and flRD considering SNP effects from all chromosomes was 0.01, and between cDIA and flDIA was -0.04. In conclusion, calf diseases are not appropriate early predictors for cow traits during the early lactation stage in parity 1. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
MacNab, Ying C
2016-08-01
This paper concerns with multivariate conditional autoregressive models defined by linear combination of independent or correlated underlying spatial processes. Known as linear models of coregionalization, the method offers a systematic and unified approach for formulating multivariate extensions to a broad range of univariate conditional autoregressive models. The resulting multivariate spatial models represent classes of coregionalized multivariate conditional autoregressive models that enable flexible modelling of multivariate spatial interactions, yielding coregionalization models with symmetric or asymmetric cross-covariances of different spatial variation and smoothness. In the context of multivariate disease mapping, for example, they facilitate borrowing strength both over space and cross variables, allowing for more flexible multivariate spatial smoothing. Specifically, we present a broadened coregionalization framework to include order-dependent, order-free, and order-robust multivariate models; a new class of order-free coregionalized multivariate conditional autoregressives is introduced. We tackle computational challenges and present solutions that are integral for Bayesian analysis of these models. We also discuss two ways of computing deviance information criterion for comparison among competing hierarchical models with or without unidentifiable prior parameters. The models and related methodology are developed in the broad context of modelling multivariate data on spatial lattice and illustrated in the context of multivariate disease mapping. The coregionalization framework and related methods also present a general approach for building spatially structured cross-covariance functions for multivariate geostatistics. © The Author(s) 2016.
A novel, microscope based, non invasive Laser Doppler flowmeter for choroidal blood flow assessment
Strohmaier, C; Werkmeister, RM; Bogner, B; Runge, C; Schroedl, F; Brandtner, H; Radner, W; Schmetterer, L; Kiel, JW; Grabnerand, G; Reitsamer, HA
2015-01-01
Impaired ocular blood flow is involved in the pathogenesis of numerous ocular diseases like glaucoma or AMD. The purpose of the present study was to introduce and validate a novel, microscope based, non invasive laser Doppler flowmeter (NILDF) for measurement of blood flow in the choroid. The custom made NI-LDF was compared with a commercial fiber optic based laser Doppler flowmeter (Perimed PF4000). Linearity and stability of the NI-LDF were assessed in a silastic tubing model (i.d. 0.3 mm) at different flow rates (range 0.4 – 3 ml/h). In a rabbit model continuous choroidal blood flow measurements were performed with both instruments simultaneously. During blood flow measurements ocular perfusion pressure was changed by manipulations of intraocular pressure via intravitreal saline infusions. The NILDF measurement correlated linearly to intraluminal flow rates in the perfused tubing model (r = 0.99, p<0.05) and remained stable during a 1 hour measurement at a constant flow rate. Rabbit choroidal blood flow measured by the PF4000 and the NI-LDF linearly correlated with each other over the entire measurement range (r = 0.99, y = x* 1,01 – 12,35 P.U., p < 0,001). In conclusion, the NI-LDF provides valid, semi quantitative measurements of capillary blood flow in comparison to an established LDF instrument and is suitable for measurements at the posterior pole of the eye. PMID:21443871
NASA Astrophysics Data System (ADS)
Velten, Hermano; Fazolo, Raquel Emy; von Marttens, Rodrigo; Gomes, Syrios
2018-05-01
As recently pointed out in [Phys. Rev. D 96, 083502 (2017), 10.1103/PhysRevD.96.083502] the evolution of the linear matter perturbations in nonadiabatic dynamical dark energy models is almost indistinguishable (quasidegenerated) to the standard Λ CDM scenario. In this work we extend this analysis to CMB observables in particular the integrated Sachs-Wolfe effect and its cross-correlation with large scale structure. We find that this feature persists for such CMB related observable reinforcing that new probes and analysis are necessary to reveal the nonadiabatic features in the dark energy sector.
MWASTools: an R/bioconductor package for metabolome-wide association studies.
Rodriguez-Martinez, Andrea; Posma, Joram M; Ayala, Rafael; Neves, Ana L; Anwar, Maryam; Petretto, Enrico; Emanueli, Costanza; Gauguier, Dominique; Nicholson, Jeremy K; Dumas, Marc-Emmanuel
2018-03-01
MWASTools is an R package designed to provide an integrated pipeline to analyse metabonomic data in large-scale epidemiological studies. Key functionalities of our package include: quality control analysis; metabolome-wide association analysis using various models (partial correlations, generalized linear models); visualization of statistical outcomes; metabolite assignment using statistical total correlation spectroscopy (STOCSY); and biological interpretation of metabolome-wide association studies results. The MWASTools R package is implemented in R (version > =3.4) and is available from Bioconductor: https://bioconductor.org/packages/MWASTools/. m.dumas@imperial.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
Role of protein fluctuation correlations in electron transfer in photosynthetic complexes.
Nesterov, Alexander I; Berman, Gennady P
2015-04-01
We consider the dependence of the electron transfer in photosynthetic complexes on correlation properties of random fluctuations of the protein environment. The electron subsystem is modeled by a finite network of connected electron (exciton) sites. The fluctuations of the protein environment are modeled by random telegraph processes, which act either collectively (correlated) or independently (uncorrelated) on the electron sites. We derived an exact closed system of first-order linear differential equations with constant coefficients, for the average density matrix elements and for their first moments. Under some conditions, we obtained analytic expressions for the electron transfer rates and found the range of parameters for their applicability by comparing with the exact numerical simulations. We also compared the correlated and uncorrelated regimes and demonstrated numerically that the uncorrelated fluctuations of the protein environment can, under some conditions, either increase or decrease the electron transfer rates.
Theory of Financial Risk and Derivative Pricing
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe; Potters, Marc
2009-01-01
Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.
Theory of Financial Risk and Derivative Pricing - 2nd Edition
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe; Potters, Marc
2003-12-01
Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.
Iino, Fukuya; Takasuga, Takumi; Touati, Abderrahmane; Gullett, Brian K
2003-01-01
The toxic equivalency (TEQ) values of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) are predicted with a model based on the homologue concentrations measured from a laboratory-scale reactor (124 data points), a package boiler (61 data points), and operating municipal waste incinerators (114 data points). Regardless of the three scales and types of equipment, the different temperature profiles, sampling emissions and/or solids (fly ash), and the various chemical and physical properties of the fuels, all the PCDF plots showed highly linear correlations (R(2)>0.99). The fitting lines of the reactor and the boiler data were almost linear with slope of unity, whereas the slope of the municipal waste incinerator data was 0.86, which is caused by higher predicted values for samples with high measured TEQ. The strong correlation also implies that each of the 10 toxic PCDF congeners has a constant concentration relative to its respective total homologue concentration despite a wide range of facility types and combustion conditions. The PCDD plots showed significant scatter and poor linearity, which implies that the relative concentration of PCDD TEQ congeners is more sensitive to variations in reaction conditions than that of the PCDF congeners.
Weaver, Brian Thomas; Fitzsimons, Kathleen; Braman, Jerrod; Haut, Roger
2016-09-01
The goal of the current study was to expand on previous work to validate the use of pressure insole technology in conjunction with linear regression models to predict the free torque at the shoe-surface interface that is generated while wearing different athletic shoes. Three distinctly different shoe designs were utilised. The stiffness of each shoe was determined with a material's testing machine. Six participants wore each shoe that was fitted with an insole pressure measurement device and performed rotation trials on an embedded force plate. A pressure sensor mask was constructed from those sensors having a high linear correlation with free torque values. Linear regression models were developed to predict free torques from these pressure sensor data. The models were able to accurately predict their own free torque well (RMS error 3.72 ± 0.74 Nm), but not that of the other shoes (RMS error 10.43 ± 3.79 Nm). Models performing self-prediction were also able to measure differences in shoe stiffness. The results of the current study showed the need for participant-shoe specific linear regression models to insure high prediction accuracy of free torques from pressure sensor data during isolated internal and external rotations of the body with respect to a planted foot.
Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer
2016-01-01
The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned for 0.3-mm voxels in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out by two observers two times on the 3D surface models: the Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters(Translucent and Solid-1), and in the InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made by another observer two times using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability for the markers, physical measurements, and 3D surface models were found (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models by Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variable R1-R1 (1.37 mm) and MF-AC (2.53 %) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of the 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.
Linear-time general decoding algorithm for the surface code
NASA Astrophysics Data System (ADS)
Darmawan, Andrew S.; Poulin, David
2018-05-01
A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.
Wang, Yun; Huang, Fangzhou
2018-01-01
The selection of feature genes with high recognition ability from the gene expression profiles has gained great significance in biology. However, most of the existing methods have a high time complexity and poor classification performance. Motivated by this, an effective feature selection method, called supervised locally linear embedding and Spearman's rank correlation coefficient (SLLE-SC2), is proposed which is based on the concept of locally linear embedding and correlation coefficient algorithms. Supervised locally linear embedding takes into account class label information and improves the classification performance. Furthermore, Spearman's rank correlation coefficient is used to remove the coexpression genes. The experiment results obtained on four public tumor microarray datasets illustrate that our method is valid and feasible. PMID:29666661
Xu, Jiucheng; Mu, Huiyu; Wang, Yun; Huang, Fangzhou
2018-01-01
The selection of feature genes with high recognition ability from the gene expression profiles has gained great significance in biology. However, most of the existing methods have a high time complexity and poor classification performance. Motivated by this, an effective feature selection method, called supervised locally linear embedding and Spearman's rank correlation coefficient (SLLE-SC 2 ), is proposed which is based on the concept of locally linear embedding and correlation coefficient algorithms. Supervised locally linear embedding takes into account class label information and improves the classification performance. Furthermore, Spearman's rank correlation coefficient is used to remove the coexpression genes. The experiment results obtained on four public tumor microarray datasets illustrate that our method is valid and feasible.
Investigating the unification of LOFAR-detected powerful AGN in the Boötes field
NASA Astrophysics Data System (ADS)
Morabito, Leah K.; Williams, W. L.; Duncan, Kenneth J.; Röttgering, H. J. A.; Miley, George; Saxena, Aayush; Barthel, Peter; Best, P. N.; Bruggen, M.; Brunetti, G.; Chyży, K. T.; Engels, D.; Hardcastle, M. J.; Harwood, J. J.; Jarvis, Matt J.; Mahony, E. K.; Prandoni, I.; Shimwell, T. W.; Shulevski, A.; Tasse, C.
2017-08-01
Low radio frequency surveys are important for testing unified models of radio-loud quasars and radio galaxies. Intrinsically similar sources that are randomly oriented on the sky will have different projected linear sizes. Measuring the projected linear sizes of these sources provides an indication of their orientation. Steep-spectrum isotropic radio emission allows for orientation-free sample selection at low radio frequencies. We use a new radio survey of the Boötes field at 150 MHz made with the Low-Frequency Array (LOFAR) to select a sample of radio sources. We identify 60 radio sources with powers P > 1025.5 W Hz-1 at 150 MHz using cross-matched multiwavelength information from the AGN and Galaxy Evolution Survey, which provides spectroscopic redshifts and photometric identification of 16 quasars and 44 radio galaxies. When considering the radio spectral slope only, we find that radio sources with steep spectra have projected linear sizes that are on average 4.4 ± 1.4 larger than those with flat spectra. The projected linear sizes of radio galaxies are on average 3.1 ± 1.0 larger than those of quasars (2.0 ± 0.3 after correcting for redshift evolution). Combining these results with three previous surveys, we find that the projected linear sizes of radio galaxies and quasars depend on redshift but not on power. The projected linear size ratio does not correlate with either parameter. The LOFAR data are consistent within the uncertainties with theoretical predictions of the correlation between the quasar fraction and linear size ratio, based on an orientation-based unification scheme.
Associations between immunological function and memory recall in healthy adults.
Wang, Grace Y; Taylor, Tamasin; Sumich, Alexander; Merien, Fabrice; Borotkanics, Robert; Wrapson, Wendy; Krägeloh, Chris; Siegert, Richard J
2017-12-01
Studies in clinical and aging populations support associations between immunological function, cognition and mood, although these are not always in line with animal models. Moreover, very little is known about the relationship between immunological measures and cognition in healthy young adults. The present study tested associations between the state of immune system and memory recall in a group of relatively healthy adults. Immediate and delayed memory recall was assessed in 30 participants using the computerised cognitive battery. CD4, CD8 and CD69 subpopulations of lymphocytes, Interleukin-6 (IL-6) and cortisol were assessed with blood assays. Correlation analysis showed significant negative relationships between CD4 and the short and long delay memory measures. IL-6 showed a significant positive correlation with long-delay recall. Generalized linear models found associations between differences in all recall challenges and CD4. A multivariate generalized linear model including CD4 and IL-6 exhibited a stronger association. Results highlight the interactions between CD4 and IL-6 in relation to memory function. Further study is necessary to determine the underlying mechanisms of the associations between the state of immune system and cognitive performance. Copyright © 2017 Elsevier Inc. All rights reserved.
Chamber study of PCB emissions from caulking materials and light ballasts.
Liu, Xiaoyu; Guo, Zhishi; Krebs, Kenneth A; Stinson, Rayford A; Nardin, Joshua A; Pope, Robert H; Roache, Nancy F
2015-10-01
The emissions of polychlorinated biphenyl (PCB) congeners from thirteen caulk samples were tested in a micro-chamber system. Twelve samples were from PCB-contaminated buildings and one was prepared in the laboratory. Nineteen light ballasts collected from buildings that represent 13 different models from five manufacturers were tested in 53-L environmental chambers. The rates of PCB congener emissions from caulking materials and light ballasts were determined. Several factors that may have affected the emission rates were evaluated. The experimentally determined emission factors showed that, for a given PCB congener, there is a linear correlation between the emission factor and the concentration of the PCB congener in the source. Furthermore, the test results showed that an excellent log-linear correlation exists between the normalized emission factor and the vapor pressure (coefficient of determination, r(2)⩾0.8846). The PCB congener emissions from ballasts at or near room temperature were relatively low with or without electrical load. However, the PCB congener emission rates increased significantly as the temperature increased. The results of this research provide new data and models for ranking the primary sources of PCBs and supports the development and refinement of exposure assessment models for PCBs. Published by Elsevier Ltd.
Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.
2015-01-01
Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565
NASA Astrophysics Data System (ADS)
Valentić, Nataša V.; Vitnik, Željko; Kozhushkov, Sergei I.; de Meijere, Armin; Ušćumlić, Gordana S.; Juranić, Ivan O.
2005-06-01
Linear free energy relationships (LFER) were applied to the 1H and 13C NMR chemical shifts ( δN, N= 1H and 13C, respectively) in the unsaturated backbone of cross-conjugated trienes 3-methylene-2-substituted-1,4-pentadienes. The NMR data were correlated using five different LFER models, based on the mono, the dual and the triple substituent parameter (MSP, DSP and TSP, respectively) treatment. The simple and extended Hammett equations, and the three postulated unconventional LFER models obtained by adaptation of the later, were used. The geometry data, which are needed in Karplus-type and McConnell-type analysis, were obtained using semi-empirical MNDO-PM3 calculations. In correlating the data the TSP approach was more successful than the MSP and DSP approaches. The fact that the calculated molecular geometries allow accurate prediction of the NMR data confirms the validity of unconventional LFER models used. These results suggest the s- cis conformation of the cross-conjugated triene as the preferred one. Postulated unconventional DSP and TSP equations enable the assessment of electronic substituent effects in the presence of other interfering influences.
Interval Timing Accuracy and Scalar Timing in C57BL/6 Mice
Buhusi, Catalin V.; Aziz, Dyana; Winslow, David; Carter, Rickey E.; Swearingen, Joshua E.; Buhusi, Mona C.
2010-01-01
In many species, interval timing behavior is accurate—appropriate estimated durations—and scalar—errors vary linearly with estimated durations. While accuracy has been previously examined, scalar timing has not been yet clearly demonstrated in house mice (Mus musculus), raising concerns about mouse models of human disease. We estimated timing accuracy and precision in C57BL/6 mice, the most used background strain for genetic models of human disease, in a peak-interval procedure with multiple intervals. Both when timing two intervals (Experiment 1) or three intervals (Experiment 2), C57BL/6 mice demonstrated varying degrees of timing accuracy. Importantly, both at individual and group level, their precision varied linearly with the subjective estimated duration. Further evidence for scalar timing was obtained using an intraclass correlation statistic. This is the first report of consistent, reliable scalar timing in a sizable sample of house mice, thus validating the PI procedure as a valuable technique, the intraclass correlation statistic as a powerful test of the scalar property, and the C57BL/6 strain as a suitable background for behavioral investigations of genetically engineered mice modeling disorders of interval timing. PMID:19824777
Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response is observed since reverse correlation is used in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the here investigated linear model and optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
A perturbative approach to the redshift space correlation function: beyond the Standard Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk
We extend our previous redshift space power spectrum code to the redshift space correlation function. Here we focus on the Gaussian Streaming Model (GSM). Again, the code accommodates a wide range of modified gravity and dark energy models. For the non-linear real space correlation function used in the GSM we use the Fourier transform of the RegPT 1-loop matter power spectrum. We compare predictions of the GSM for a Vainshtein screened and Chameleon screened model as well as GR. These predictions are compared to the Fourier transform of the Taruya, Nishimichi and Saito (TNS) redshift space power spectrum model whichmore » is fit to N-body data. We find very good agreement between the Fourier transform of the TNS model and the GSM predictions, with ≤ 6% deviations in the first two correlation function multipoles for all models for redshift space separations in 50Mpc h ≤ s ≤ 180Mpc/ h . Excellent agreement is found in the differences between the modified gravity and GR multipole predictions for both approaches to the redshift space correlation function, highlighting their matched ability in picking up deviations from GR. We elucidate the timeliness of such non-standard templates at the dawn of stage-IV surveys and discuss necessary preparations and extensions needed for upcoming high quality data.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dechant, Lawrence J.
Wave packet analysis provides a connection between linear small disturbance theory and subsequent nonlinear turbulent spot flow behavior. The traditional association between linear stability analysis and nonlinear wave form is developed via the method of stationary phase whereby asymptotic (simplified) mean flow solutions are used to estimate dispersion behavior and stationary phase approximation are used to invert the associated Fourier transform. The resulting process typically requires nonlinear algebraic equations inversions that can be best performed numerically, which partially mitigates the value of the approximation as compared to a more complete, e.g. DNS or linear/nonlinear adjoint methods. To obtain a simpler,more » closed-form analytical result, the complete packet solution is modeled via approximate amplitude (linear convected kinematic wave initial value problem) and local sinusoidal (wave equation) expressions. Significantly, the initial value for the kinematic wave transport expression follows from a separable variable coefficient approximation to the linearized pressure fluctuation Poisson expression. The resulting amplitude solution, while approximate in nature, nonetheless, appears to mimic many of the global features, e.g. transitional flow intermittency and pressure fluctuation magnitude behavior. A low wave number wave packet models also recover meaningful auto-correlation and low frequency spectral behaviors.« less
Influence of dynamic inflow on the helicopter vertical response
NASA Technical Reports Server (NTRS)
Chen, Robert T. N.; Hindson, William S.
1986-01-01
A study was conducted to investigate the effects of dynamic inflow on rotor-blade flapping and vertical motion of the helicopter in hover. Linearized versions of two dynamic inflow models, one developed by Carpenter and Fridovich and the other by Pitt and Peters, were incorporated in simplified rotor-body models and were compared for variations in thrust coefficient and the blade Lock number. In addition, a comparison was made between the results of the linear analysis, and the transient and frequency responses measured in flight on the CH-47B variable-stability helicopter. Results indicate that the correlations are good, considering the simplified model used. The linear analysis also shows that dynamic inflow plays a key role in destabilizing the flapping mode. The destabilized flapping mode, along with the inflow mode that the dynamic inflow introduces, results in a large initial overshoot in the vertical acceleration response to an abrupt input in the collective pitch. This overshoot becomes more pronounced as either the thrust coefficient or the blade Lock number is reduced. Compared with Carpenter's inflow model, Pitt's model tends to produce more oscillatory responses because of the less stable flapping mode predicted by it.
NASA Astrophysics Data System (ADS)
Bokhan, Denis; Trubnikov, Dmitrii N.; Perera, Ajith; Bartlett, Rodney J.
2018-04-01
An explicitly-correlated method of calculation of excited states with spin-orbit couplings, has been formulated and implemented. Developed approach utilizes left and right eigenvectors of equation-of-motion coupled-cluster model, which is based on the linearly approximated explicitly correlated coupled-cluster singles and doubles [CCSD(F12)] method. The spin-orbit interactions are introduced by using the spin-orbit mean field (SOMF) approximation of the Breit-Pauli Hamiltonian. Numerical tests for several atoms and molecules show good agreement between explicitly-correlated results and the corresponding values, calculated in complete basis set limit (CBS); the highly-accurate excitation energies can be obtained already at triple- ζ level.
Ward identities and combinatorics of rainbow tensor models
NASA Astrophysics Data System (ADS)
Itoyama, H.; Mironov, A.; Morozov, A.
2017-06-01
We discuss the notion of renormalization group (RG) completion of non-Gaussian Lagrangians and its treatment within the framework of Bogoliubov-Zimmermann theory in application to the matrix and tensor models. With the example of the simplest non-trivial RGB tensor theory (Aristotelian rainbow), we introduce a few methods, which allow one to connect calculations in the tensor models to those in the matrix models. As a byproduct, we obtain some new factorization formulas and sum rules for the Gaussian correlators in the Hermitian and complex matrix theories, square and rectangular. These sum rules describe correlators as solutions to finite linear systems, which are much simpler than the bilinear Hirota equations and the infinite Virasoro recursion. Search for such relations can be a way to solving the tensor models, where an explicit integrability is still obscure.
NASA Technical Reports Server (NTRS)
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
Connes' embedding problem and winning strategies for quantum XOR games
NASA Astrophysics Data System (ADS)
Harris, Samuel J.
2017-12-01
We consider quantum XOR games, defined in the work of Regev and Vidick [ACM Trans. Comput. Theory 7, 43 (2015)], from the perspective of unitary correlations defined in the work of Harris and Paulsen [Integr. Equations Oper. Theory 89, 125 (2017)]. We show that the winning bias of a quantum XOR game in the tensor product model (respectively, the commuting model) is equal to the norm of its associated linear functional on the unitary correlation set from the appropriate model. We show that Connes' embedding problem has a positive answer if and only if every quantum XOR game has entanglement bias equal to the commuting bias. In particular, the embedding problem is equivalent to determining whether every quantum XOR game G with a winning strategy in the commuting model also has a winning strategy in the approximate finite-dimensional model.
Pre-Flight Radiometric Model of Linear Imager on LAPAN-IPB Satellite
NASA Astrophysics Data System (ADS)
Hadi Syafrudin, A.; Salaswati, Sartika; Hasbi, Wahyudi
2018-05-01
LAPAN-IPB Satellite is Microsatellite class with mission of remote sensing experiment. This satellite carrying Multispectral Line Imager for captured of radiometric reflectance value from earth to space. Radiometric quality of image is important factor to classification object on remote sensing process. Before satellite launch in orbit or pre-flight, Line Imager have been tested by Monochromator and integrating sphere to get spectral and every pixel radiometric response characteristic. Pre-flight test data with variety setting of line imager instrument used to see correlation radiance input and digital number of images output. Output input correlation is described by the radiance conversion model with imager setting and radiometric characteristics. Modelling process from hardware level until normalize radiance formula are presented and discussed in this paper.
Sparse Modeling of Human Actions from Motion Imagery
2011-09-02
is here developed. Spatio-temporal features that char- acterize local changes in the image are rst extracted. This is followed by the learning of a...video comes from the optimal sparse linear com- bination of the learned basis vectors (action primitives) representing the actions. A low...computational cost deep-layer model learning the inter- class correlations of the data is added for increasing discriminative power. In spite of its simplicity
Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.
Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong
2018-03-01
The traffic safety research has developed spatiotemporal models to explore the variations in the spatial pattern of crash risk over time. Many studies observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research for the comparison of different temporal treatments and their interaction with spatial component. This study developed four spatiotemporal models with varying complexity due to the different temporal treatments such as (I) linear time trend; (II) quadratic time trend; (III) Autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction which allows greater flexibility compared to the traditional linear space-time interaction. The mixture component allows the accommodation of global space-time interaction as well as the departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of mixture models based on the diverse criteria pertaining to goodness-of-fit, cross-validation and evaluation based on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification which was evidently more complex due to the addition of information borrowed from neighboring years, but this addition of parameters allowed significant advantage at posterior deviance which subsequently benefited overall fit to crash data. The Base models were also developed to study the comparison between the proposed mixture and traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models due to the advantages of much lower deviance. For cross-validation comparison of predictive accuracy, linear time trend model was adjudged the best as it recorded the highest value of log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data for model development. Under each criterion, observed crash counts were compared with three types of data containing Bayesian estimated, normal predicted, and model replicated ones. The linear model again performed the best in most scenarios except one case of using model replicated data and two cases involving prediction without including random effects. These phenomena indicated the mediocre performance of linear trend when random effects were excluded for evaluation. This might be due to the flexible mixture space-time interaction which can efficiently absorb the residual variability escaping from the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models as the mixture ones generated more precise estimated crash counts across all four models, suggesting that the advantages associated with mixture component at model fit were transferable to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of random effect models which validates the importance of incorporating the correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Canopy reflectance modelling of semiarid vegetation
NASA Technical Reports Server (NTRS)
Franklin, Janet
1994-01-01
Three different types of remote sensing algorithms for estimating vegetation amount and other land surface biophysical parameters were tested for semiarid environments. These included statistical linear models, the Li-Strahler geometric-optical canopy model, and linear spectral mixture analysis. The two study areas were the National Science Foundation's Jornada Long Term Ecological Research site near Las Cruces, NM, in the northern Chihuahuan desert, and the HAPEX-Sahel site near Niamey, Niger, in West Africa, comprising semiarid rangeland and subtropical crop land. The statistical approach (simple and multiple regression) resulted in high correlations between SPOT satellite spectral reflectance and shrub and grass cover, although these correlations varied with the spatial scale of aggregation of the measurements. The Li-Strahler model produced estimated of shrub size and density for both study sites with large standard errors. In the Jornada, the estimates were accurate enough to be useful for characterizing structural differences among three shrub strata. In Niger, the range of shrub cover and size in short-fallow shrublands is so low that the necessity of spatially distributed estimation of shrub size and density is questionable. Spectral mixture analysis of multiscale, multitemporal, multispectral radiometer data and imagery for Niger showed a positive relationship between fractions of spectral endmembers and surface parameters of interest including soil cover, vegetation cover, and leaf area index.
Apostolopoulos, K N; Deligianni, D D
2008-02-01
An experimental model which can simulate physical changes that occur during aging was developed in order to evaluate the effects of change of mineral content and microstructure on ultrasonic properties of bovine cancellous bone. Timed immersion in hydrochloric acid was used to selectively alter the mineral content. Scanning electron microscopy and histological staining of the acid-treated trabeculae demonstrated a heterogeneous structure consisting of a mineralized core and a demineralized layer. The presence of organic matrix contributed very little to normalized broadband ultrasound attenuation (nBUA) and speed of sound. All three ultrasonic parameters, speed of sound, nBUA and backscatter coefficient, were sensitive to changes in apparent density of bovine cancellous bone. A two-component model utilizing a combination of two autocorrelation functions (a densely populated model and a spherical distribution) was used to approximate the backscatter coefficient. The predicted attenuation due to scattering constituted a significant part of the measured total attenuation (due to both scattering and absorption mechanisms) for bovine cancellous bone. Linear regression, performed between trabecular thickness values and estimated from the model correlation lengths, showed significant linear correlation, with R(2)=0.81 before and R(2)=0.80 after demineralization. The accuracy of estimation was found to increase with trabecular thickness.
A Novel Two-Step Hierarchial Quantitative Structure-Activity ...
Background: Accurate prediction of in vivo toxicity from in vitro testing is a challenging problem. Large public–private consortia have been formed with the goal of improving chemical safety assessment by the means of high-throughput screening. Methods and results: A database containing experimental cytotoxicity values for in vitro half-maximal inhibitory concentration (IC50) and in vivo rodent median lethal dose (LD50) for more than 300 chemicals was compiled by Zentralstelle zur Erfassung und Bewertung von Ersatz- und Ergaenzungsmethoden zum Tierversuch (ZEBET ; National Center for Documentation and Evaluation of Alternative Methods to Animal Experiments) . The application of conventional quantitative structure–activity relationship (QSAR) modeling approaches to predict mouse or rat acute LD50 values from chemical descriptors of ZEBET compounds yielded no statistically significant models. The analysis of these data showed no significant correlation between IC50 and LD50. However, a linear IC50 versus LD50 correlation could be established for a fraction of compounds. To capitalize on this observation, we developed a novel two-step modeling approach as follows. First, all chemicals are partitioned into two groups based on the relationship between IC50 and LD50 values: One group comprises compounds with linear IC50 versus LD50 relationships, and another group comprises the remaining compounds. Second, we built conventional binary classification QSAR models t
Duan, Qianqian; Yang, Genke; Xu, Guanglin; Pan, Changchun
2014-01-01
This paper is devoted to develop an approximation method for scheduling refinery crude oil operations by taking into consideration the demand uncertainty. In the stochastic model the demand uncertainty is modeled as random variables which follow a joint multivariate distribution with a specific correlation structure. Compared to deterministic models in existing works, the stochastic model can be more practical for optimizing crude oil operations. Using joint chance constraints, the demand uncertainty is treated by specifying proximity level on the satisfaction of product demands. However, the joint chance constraints usually hold strong nonlinearity and consequently, it is still hard to handle it directly. In this paper, an approximation method combines a relax-and-tight technique to approximately transform the joint chance constraints to a serial of parameterized linear constraints so that the complicated problem can be attacked iteratively. The basic idea behind this approach is to approximate, as much as possible, nonlinear constraints by a lot of easily handled linear constraints which will lead to a well balance between the problem complexity and tractability. Case studies are conducted to demonstrate the proposed methods. Results show that the operation cost can be reduced effectively compared with the case without considering the demand correlation. PMID:24757433
Duan, Qianqian; Yang, Genke; Xu, Guanglin; Pan, Changchun
2014-01-01
This paper is devoted to develop an approximation method for scheduling refinery crude oil operations by taking into consideration the demand uncertainty. In the stochastic model the demand uncertainty is modeled as random variables which follow a joint multivariate distribution with a specific correlation structure. Compared to deterministic models in existing works, the stochastic model can be more practical for optimizing crude oil operations. Using joint chance constraints, the demand uncertainty is treated by specifying proximity level on the satisfaction of product demands. However, the joint chance constraints usually hold strong nonlinearity and consequently, it is still hard to handle it directly. In this paper, an approximation method combines a relax-and-tight technique to approximately transform the joint chance constraints to a serial of parameterized linear constraints so that the complicated problem can be attacked iteratively. The basic idea behind this approach is to approximate, as much as possible, nonlinear constraints by a lot of easily handled linear constraints which will lead to a well balance between the problem complexity and tractability. Case studies are conducted to demonstrate the proposed methods. Results show that the operation cost can be reduced effectively compared with the case without considering the demand correlation.
Noninvasive and fast measurement of blood glucose in vivo by near infrared (NIR) spectroscopy
NASA Astrophysics Data System (ADS)
Jintao, Xue; Liming, Ye; Yufei, Liu; Chunyan, Li; Han, Chen
2017-05-01
This research was to develop a method for noninvasive and fast blood glucose assay in vivo. Near-infrared (NIR) spectroscopy, a more promising technique compared to other methods, was investigated in rats with diabetes and normal rats. Calibration models are generated by two different multivariate strategies: partial least squares (PLS) as linear regression method and artificial neural networks (ANN) as non-linear regression method. The PLS model was optimized individually by considering spectral range, spectral pretreatment methods and number of model factors, while the ANN model was studied individually by selecting spectral pretreatment methods, parameters of network topology, number of hidden neurons, and times of epoch. The results of the validation showed the two models were robust, accurate and repeatable. Compared to the ANN model, the performance of the PLS model was much better, with lower root mean square error of validation (RMSEP) of 0.419 and higher correlation coefficients (R) of 96.22%.
School system evaluation by value added analysis under endogeneity.
Manzi, Jorge; San Martín, Ernesto; Van Bellegem, Sébastien
2014-01-01
Value added is a common tool in educational research on effectiveness. It is often modeled as a (prediction of a) random effect in a specific hierarchical linear model. This paper shows that this modeling strategy is not valid when endogeneity is present. Endogeneity stems, for instance, from a correlation between the random effect in the hierarchical model and some of its covariates. This paper shows that this phenomenon is far from exceptional and can even be a generic problem when the covariates contain the prior score attainments, a typical situation in value added modeling. Starting from a general, model-free definition of value added, the paper derives an explicit expression of the value added in an endogeneous hierarchical linear Gaussian model. Inference on value added is proposed using an instrumental variable approach. The impact of endogeneity on the value added and the estimated value added is calculated accurately. This is also illustrated on a large data set of individual scores of about 200,000 students in Chile.
NASA Astrophysics Data System (ADS)
Tautz-Weinert, J.; Watson, S. J.
2016-09-01
Effective condition monitoring techniques for wind turbines are needed to improve maintenance processes and reduce operational costs. Normal behaviour modelling of temperatures with information from other sensors can help to detect wear processes in drive trains. In a case study, modelling of bearing and generator temperatures is investigated with operational data from the SCADA systems of more than 100 turbines. The focus is here on automated training and testing on a farm level to enable an on-line system, which will detect failures without human interpretation. Modelling based on linear combinations, artificial neural networks, adaptive neuro-fuzzy inference systems, support vector machines and Gaussian process regression is compared. The selection of suitable modelling inputs is discussed with cross-correlation analyses and a sensitivity study, which reveals that the investigated modelling techniques react in different ways to an increased number of inputs. The case study highlights advantages of modelling with linear combinations and artificial neural networks in a feedforward configuration.
Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng
2013-05-01
Multivariate meta-analysis is useful in combining evidence from independent studies which involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed effects models which assume risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis where the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model have several attractive features compared to the conventional multivariate generalized linear mixed effects models, including simplicity of likelihood function, no need to specify a link function, and has a closed-form expression of distribution functions for study-specific risk differences. We investigate the finite sample performance of this model by simulation studies and illustrate its use with an application to multivariate meta-analysis of adverse events of tricyclic antidepressants treatment in clinical trials.
Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.
Duarte, Belmiro P M; Wong, Weng Kee
2015-08-01
This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.
Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach
Duarte, Belmiro P. M.; Wong, Weng Kee
2014-01-01
Summary This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted. PMID:26512159
2014-01-01
Background Protein sites evolve at different rates due to functional and biophysical constraints. It is usually considered that the main structural determinant of a site’s rate of evolution is its Relative Solvent Accessibility (RSA). However, a recent comparative study has shown that the main structural determinant is the site’s Local Packing Density (LPD). LPD is related with dynamical flexibility, which has also been shown to correlate with sequence variability. Our purpose is to investigate the mechanism that connects a site’s LPD with its rate of evolution. Results We consider two models: an empirical Flexibility Model and a mechanistic Stress Model. The Flexibility Model postulates a linear increase of site-specific rate of evolution with dynamical flexibility. The Stress Model, introduced here, models mutations as random perturbations of the protein’s potential energy landscape, for which we use simple Elastic Network Models (ENMs). To account for natural selection we assume a single active conformation and use basic statistical physics to derive a linear relationship between site-specific evolutionary rates and the local stress of the mutant’s active conformation. We compare both models on a large and diverse dataset of enzymes. In a protein-by-protein study we found that the Stress Model outperforms the Flexibility Model for most proteins. Pooling all proteins together we show that the Stress Model is strongly supported by the total weight of evidence. Moreover, it accounts for the observed nonlinear dependence of sequence variability on flexibility. Finally, when mutational stress is controlled for, there is very little remaining correlation between sequence variability and dynamical flexibility. Conclusions We developed a mechanistic Stress Model of evolution according to which the rate of evolution of a site is predicted to depend linearly on the local mutational stress of the active conformation. Such local stress is proportional to LPD, so that this model explains the relationship between LPD and evolutionary rate. Moreover, the model also accounts for the nonlinear dependence between evolutionary rate and dynamical flexibility. PMID:24716445
Experimental linear-optics simulation of ground-state of an Ising spin chain.
Xue, Peng; Zhan, Xian; Bian, Zhihao
2017-05-19
We experimentally demonstrate a photonic quantum simulator: by using a two-spin Ising chain (an isolated dimer) as an example, we encode the wavefunction of the ground state with a pair of entangled photons. The effect of magnetic fields, leading to a critical modification of the correlation between two spins, can be simulated by just local operations. With the ratio of simulated magnetic fields and coupling strength increasing, the ground state of the system changes from a product state to an entangled state and back to another product state. The simulated ground states can be distinguished and the transformations between them can be observed by measuring correlations between photons. This simulation of the Ising model with linear quantum optics opens the door to the future studies which connect quantum information and condensed matter physics.
Correlation between aqueous flare and residual visual field area in retinitis pigmentosa.
Nishiguchi, Koji M; Yokoyama, Yu; Kunikata, Hiroshi; Abe, Toshiaki; Nakazawa, Toru
2018-06-01
To investigate the relationship between aqueous flare, visual function and macular structures in retinitis pigmentosa (RP). Clinical data from 123 patients with RP (227 eyes), 35 patients with macular dystrophy (68 eyes) and 148 controls (148 eyes) were analysed. The differences in aqueous flare between clinical entities and the correlation between aqueous flare (measured with a laser flare cell meter) versus visual acuity, visual field area (Goldmann perimetry) and macular thickness (optical coherence tomography) in patients with RP were determined. Influence of selected clinical data on flare was assessed using linear mixed-effects model. Aqueous flare was higher in patients with RP than patients with macular dystrophy or controls (p=7.49×E-13). Aqueous flare was correlated with visual field area (R=-0.379, p=3.72×E-9), but not with visual acuity (R=0.083, p=0.215). Macular thickness (R=0.234, p=3.74×E-4), but not foveal thickness (R=0.122, p=0.067), was positively correlated with flare. Flare was not affected by the presence of macular complications. All these associations were maintained when the right and the left eyes were assessed separately. Analysis by linear mixed-effects model revealed that age (p=8.58×E-5), visual field area (p=8.01×E-7) and average macular thickness (p=0.037) were correlated with flare. Aqueous flare and visual field area were correlated in patients with RP. Aqueous flare may reflect the degree of overall retinal degeneration more closely than the local foveal impairment. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Possible association between obesity and periodontitis in patients with Down syndrome.
Culebras-Atienza, E; Silvestre, F-J; Silvestre-Rangil, J
2018-05-01
The present study was carried out to evaluate the possible association between obesity and periodontitis in patients with DS, and to explore which measure of obesity is most closely correlated to periodontitis. A prospective observational study was made to determine whether obesity is related to periodontal disease in patients with DS. The anthropometric variables were body height and weight, which were used to calculate BMI and stratify the patients into three categories: < 25(normal weight), 25-29.9 (overweight) and ≥ 30.0 kg/m2 (obese). Waist circumference and hip circumference in turn was recorded as the greatest circumference at the level of the buttocks, while the waist/hip ratio (WHR) was calculated. Periodontal evaluation was made of all teeth recording the plaque index (PI), pocket depth (PD), clinical attachment level (CAL) and the gingival index. We generated a multivariate linear regression model to examine the relationship between PD and the frequency of tooth brushing, gender, BMI, WHI, WHR, age and PI. Significant positive correlations were observed among the anthropometric parameters BMI, WHR, WHI and among the periodontal parameters PI, PD, CAL and GI. The only positive correlation between the anthropometric and periodontal parameters corresponded to WHR. Upon closer examination, the distribution of WHR was seen to differ according to gender. Among the women, the correlation between WHR and the periodontal variables decreased to nonsignificant levels. In contrast, among the males the correlation remained significant and even increased. In a multivariate linear regression model, the coefficients relating PD to PI, WHR and age were positive and significant in all cases. Our results suggest that there may indeed be an association between obesity and periodontitis in male patients with DS. Also, we found a clear correlation with WHR, which was considered to be the ideal adiposity indicator in this context.
Wolf, Lisa
2013-02-01
To explore the relationship between multiple variables within a model of critical thinking and moral reasoning. A quantitative descriptive correlational design using a purposive sample of 200 emergency nurses. Measured variables were accuracy in clinical decision-making, moral reasoning, perceived care environment, and demographics. Analysis was by bivariate correlation using Pearson's product-moment correlation coefficients, chi square and multiple linear regression analysis. The elements as identified in the integrated ethically-driven environmental model of clinical decision-making (IEDEM-CD) corrected depict moral reasoning and environment of care as factors significantly affecting accuracy in decision-making. The integrated, ethically driven environmental model of clinical decision making is a framework useful for predicting clinical decision making accuracy for emergency nurses in practice, with further implications in education, research and policy. A diagnostic and therapeutic framework for identifying and remediating individual and environmental challenges to accurate clinical decision making. © 2012, The Author. International Journal of Nursing Knowledge © 2012, NANDA International.
Multivariate Longitudinal Analysis with Bivariate Correlation Test.
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model's parameters estimators. These estimators can be used in the framework of the multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated.
NASA Astrophysics Data System (ADS)
Papalexiou, Simon Michael
2018-05-01
Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
Precise Analysis of Microstructural Effects on Mechanical Properties of Cast ADC12 Aluminum Alloy
NASA Astrophysics Data System (ADS)
Okayasu, Mitsuhiro; Takeuchi, Shuhei; Yamamoto, Masaki; Ohfuji, Hiroaki; Ochi, Toshihiro
2015-04-01
The effects of microstructural characteristics (secondary dendrite arm spacing, SDAS) and Si- and Fe-based eutectic structures on the mechanical properties and failure behavior of an Al-Si-Cu alloy are investigated. Cast Al alloy samples are produced using a special continuous-casting technique with which it is easy to control both the sizes of microstructures and the direction of crystal orientation. Dendrite cells appear to grow in the casting direction. There are linear correlations between SDAS and tensile properties (ultimate tensile strength σ UTS, 0.2 pct proof strength σ 0.2, and fracture strain ɛ f). These linear correlations, however, break down, especially for σ UTS vs SDAS and ɛ f vs SDAS, as the eutectic structures become more than 3 μm in diameter, when the strength and ductility ( σ UTS and ɛ f) decrease significantly. For eutectic structures larger than 3 μm, failure is dominated by the brittle eutectic phases, for which SDAS is no longer strongly correlated with σ UTS and ɛ f. In contrast, a linear correlation is obtained between σ 0.2 and SDAS, even for eutectic structures larger than 3 μm, and the eutectic structure does not have a strong effect on yield behavior. This is because failure in the eutectic phases occurs just before final fracture. In situ failure observation during tensile testing is performed using microstructural and lattice characteristics. From the experimental results obtained, models of failure during tensile loading are proposed.
Complex messages regarding a thin ideal appearing in teenage girls' magazines from 1956 to 2005.
Luff, Gina M; Gray, James J
2009-03-01
Seventeen and YM were assessed from 1956 through 2005 (n=312) to examine changes in the messages about thinness sent to teenage women. Trends were analyzed through an investigation of written, internal content focused on dieting, exercise, or both, while cover models were examined to explore fluctuations in body size. Pearson's Product correlations and weighted-least squares linear regression models were used to demonstrate changes over time. The frequency of written content related to exercise and combined plans increased in Seventeen, while a curvilinear relationship between time and content relating to dieting appeared. YM showed a linear increase in content related to dieting, exercise, and combined plans. Average cover model body size increased over time in YM while demonstrating no significant changes in Seventeen. Overall, more written messages about dieting and exercise appeared in teen's magazines in 2005 than before while the average cover model body size increased.
The Accuracy and Reproducibility of Linear Measurements Made on CBCT-derived Digital Models.
Maroua, Ahmad L; Ajaj, Mowaffak; Hajeer, Mohammad Y
2016-04-01
To evaluate the accuracy and reproducibility of linear measurements made on cone-beam computed tomography (CBCT)-derived digital models. A total of 25 patients (44% female, 18.7 ± 4 years) who had CBCT images for diagnostic purposes were included. Plaster models were obtained and digital models were extracted from CBCT scans. Seven linear measurements from predetermined landmarks were measured and analyzed on plaster models and the corresponding digital models. The measurements included arch length and width at different sites. Paired t test and Bland-Altman analysis were used to evaluate the accuracy of measurements on digital models compared to the plaster models. Also, intraclass correlation coefficients (ICCs) were used to evaluate the reproducibility of the measurements in order to assess the intraobserver reliability. The statistical analysis showed significant differences on 5 out of 14 variables, and the mean differences ranged from -0.48 to 0.51 mm. The Bland-Altman analysis revealed that the mean difference between variables was (0.14 ± 0.56) and (0.05 ± 0.96) mm and limits of agreement between the two methods ranged from -1.2 to 0.96 and from -1.8 to 1.9 mm in the maxilla and the mandible, respectively. The intraobserver reliability values were determined for all 14 variables of two types of models separately. The mean ICC value for the plaster models was 0.984 (0.924-0.999), while it was 0.946 for the CBCT models (range from 0.850 to 0.985). Linear measurements obtained from the CBCT-derived models appeared to have a high level of accuracy and reproducibility.
Boguslawski, Katharina; Tecmer, Paweł
2017-12-12
Wave functions restricted to electron-pair states are promising models to describe static/nondynamic electron correlation effects encountered, for instance, in bond-dissociation processes and transition-metal and actinide chemistry. To reach spectroscopic accuracy, however, the missing dynamic electron correlation effects that cannot be described by electron-pair states need to be included a posteriori. In this Article, we extend the previously presented perturbation theory models with an Antisymmetric Product of 1-reference orbital Geminal (AP1roG) reference function that allows us to describe both static/nondynamic and dynamic electron correlation effects. Specifically, our perturbation theory models combine a diagonal and off-diagonal zero-order Hamiltonian, a single-reference and multireference dual state, and different excitation operators used to construct the projection manifold. We benchmark all proposed models as well as an a posteriori Linearized Coupled Cluster correction on top of AP1roG against CR-CC(2,3) reference data for reaction energies of several closed-shell molecules that are extrapolated to the basis set limit. Moreover, we test the performance of our new methods for multiple bond breaking processes in the homonuclear N 2 , C 2 , and F 2 dimers as well as the heteronuclear BN, CO, and CN + dimers against MRCI-SD, MRCI-SD+Q, and CR-CC(2,3) reference data. Our numerical results indicate that the best performance is obtained from a Linearized Coupled Cluster correction as well as second-order perturbation theory corrections employing a diagonal and off-diagonal zero-order Hamiltonian and a single-determinant dual state. These dynamic corrections on top of AP1roG provide substantial improvements for binding energies and spectroscopic properties obtained with the AP1roG approach, while allowing us to approach chemical accuracy for reaction energies involving closed-shell species.
Groen, Harald C.; Niessen, Wiro J.; Bernsen, Monique R.; de Jong, Marion; Veenland, Jifke F.
2013-01-01
Although efficient delivery and distribution of treatment agents over the whole tumor is essential for successful tumor treatment, the distribution of most of these agents cannot be visualized. However, with single-photon emission computed tomography (SPECT), both delivery and uptake of radiolabeled peptides can be visualized in a neuroendocrine tumor model overexpressing somatostatin receptors. A heterogeneous peptide uptake is often observed in these tumors. We hypothesized that peptide distribution in the tumor is spatially related to tumor perfusion, vessel density and permeability, as imaged and quantified by DCE-MRI in a neuroendocrine tumor model. Four subcutaneous CA20948 tumor-bearing Lewis rats were injected with the somatostatin-analog 111In-DTPA-Octreotide (50 MBq). SPECT-CT and MRI scans were acquired and MRI was spatially registered to SPECT-CT. DCE-MRI was analyzed using semi-quantitative and quantitative methods. Correlation between SPECT and DCE-MRI was investigated with 1) Spearman’s rank correlation coefficient; 2) SPECT uptake values grouped into deciles with corresponding median DCE-MRI parametric values and vice versa; and 3) linear regression analysis for median parameter values in combined datasets. In all tumors, areas with low peptide uptake correlated with low perfusion/density/ /permeability for all DCE-MRI-derived parameters. Combining all datasets, highest linear regression was found between peptide uptake and semi-quantitative parameters (R2>0.7). The average correlation coefficient between SPECT and DCE-MRI-derived parameters ranged from 0.52-0.56 (p<0.05) for parameters primarily associated with exchange between blood and extracellular extravascular space. For these parameters a linear relation with peptide uptake was observed. In conclusion, the ‘exchange-related’ DCE-MRI-derived parameters seemed to predict peptide uptake better than the ‘contrast amount- related’ parameters. Consequently, fast and efficient diffusion through the vessel wall into tissue is an important factor for peptide delivery. DCE-MRI helps to elucidate the relation between vascular characteristics, peptide delivery and treatment efficacy, and may form a basis to predict targeting efficiency. PMID:24116203
The relation between anxiety and BMI - is it all in our curves?
Haghighi, Mohammad; Jahangard, Leila; Ahmadpanah, Mohammad; Bajoghli, Hafez; Holsboer-Trachsler, Edith; Brand, Serge
2016-01-30
The relation between anxiety and excessive weight is unclear. The aims of the present study were three-fold: First, we examined the association between anxiety and Body Mass Index (BMI). Second, we examined this association separately for female and male participants. Next, we examined both linear and non-linear associations between anxiety and BMI. The BMI was assessed of 92 patients (mean age: M=27.52; 57% females) suffering from anxiety disorders. Patients completed the Beck Anxiety Inventory. Both linear and non-linear correlations were computed for the sample as a whole and separately by gender. No gender differences were observed in anxiety scores or BMI. No linear correlation between anxiety scores and BMI was observed. In contrast, a non-linear correlation showed an inverted U-shaped association, with lower anxiety scores both for lower and very high BMI indices, and higher anxiety scores for medium to high BMI indices. Separate computations revealed no differences between males and females. The pattern of results suggests that the association between BMI and anxiety is complex and more accurately captured with non-linear correlations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bressan, José Divo; Liewald, Mathias; Drotleff, Klaus
2017-10-01
Forming limit strain curves of conventional aluminium alloy AA6014 sheets after loading with non-linear strain paths are presented and compared with D-Bressan macroscopic model of sheet metal rupture by critical shear stress criterion. AA6014 exhibits good formability at room temperature and, thus, is mainly employed in car body external parts by manufacturing at room temperature. According to Weber et al., experimental bi-linear strain paths were carried out in specimens with 1mm thickness by pre-stretching in uniaxial and biaxial directions up to 5%, 10% and 20% strain levels before performing Nakajima testing experiments to obtain the forming limit strain curves, FLCs. In addition, FLCs of AA6014 were predicted by employing D-Bressan critical shear stress criterion for bi-linear strain path and comparisons with the experimental FLCs were analyzed and discussed. In order to obtain the material coefficients of plastic anisotropy, strain and strain rate hardening behavior and calibrate the D-Bressan model, tensile tests, two different strain rate on specimens cut at 0°, 45° and 90° to the rolling direction and also bulge test were carried out at room temperature. The correlation of experimental bi-linear strain path FLCs is reasonably good with the predicted limit strains from D-Bressan model, assuming equivalent pre-strain calculated by Hill 1979 yield criterion.
Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G
2009-09-01
The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.
Stability and Control CFD Investigations of a Generic 53 Degree Swept UCAV Configuration
NASA Technical Reports Server (NTRS)
Frink, Neal T.
2014-01-01
NATO STO Task Group AVT-201 on "Extended Assessment of Reliable Stability & Control Prediction Methods for NATO Air Vehicles" is studying various computational approaches to predict stability and control parameters for aircraft undergoing non-linear flight conditions. This paper contributes an assessment through correlations with wind tunnel data for the state of aerodynamic predictive capability of time-accurate RANS methodology on the group's focus configuration, a 53deg swept and twisted lambda wing UCAV, undergoing a variety of roll, pitch, and yaw motions. The vehicle aerodynamics is dominated by the complex non-linear physics of round leading-edge vortex flow separation. Correlations with experimental data are made for static longitudinal/lateral sweeps, and at varying frequencies of prescribed roll/pitch/yaw sinusoidal motion for the vehicle operating with and without control surfaces. The data and the derived understanding should prove useful to the AVT-201 team and other researchers who are developing techniques for augmenting flight simulation models from low-speed CFD predictions of aircraft traversing non-linear regions of a flight envelope.
Linear modeling of turbulent skin-friction reduction due to spanwise wall motion
NASA Astrophysics Data System (ADS)
Duque-Daza, Carlos; Baig, Mirza; Lockerby, Duncan; Chernyshenko, Sergei; Davies, Christopher; University of Warwick Team; Imperial College Team; Cardiff University Team
2012-11-01
We present a study on the effect of streamwise-travelling waves of spanwise wall velocity on the growth of near-wall turbulent streaks using a linearized formulation of the Navier-Stokes equations. The changes in streak amplification due to the travelling waves induced by the wall velocity are compared to published results of direct numerical simulation (DNS) predictions of the turbulent skin-friction reduction over a range of parameters; a clear correlation between these two sets of results is observed. Additional linearized simulations but at a much higher Reynolds numbers, more relevant to aerospace applications, produce results that show no marked differences to those obtained at low Reynolds number. It is also observed that a close correlation exists between DNS data of drag reduction and a very simple characteristic of the ``generalized'' Stokes layer generated by the streamwise-travelling waves. Carlos.Duque-Daza@warwick.ac.uk - School of Engineering, University of Warwick, Coventry CV4 7AL, UK caduqued@unal.edu.co - Department of Mechanical and Mechatronics Engineering, Universidad Nacional de Colombia.
Nikodelis, Thomas; Moscha, Dimitra; Metaxiotis, Dimitris; Kollias, Iraklis
2011-08-01
To investigate what sampling frequency is adequate for gait, the correlation of spatiotemporal parameters and the kinematic differences, between normal and CP spastic gait, for three sampling frequencies (100 Hz, 50 Hz, 25 Hz) were assessed. Spatiotemporal, angular, and linear displacement variables in the sagittal plane along with their 1st and 2nd derivatives were analyzed. Spatiotemporal stride parameters were highly correlated among the three sampling frequencies. The statistical model (2 × 3 ANOVA) gave no interactions between the factors group and frequency, indicating that group differences were invariant of sampling frequency. Lower frequencies led to smoother curves for all the variables, with a loss of information though, especially for the 2nd derivatives, having a homologous effect as the one of oversmoothing. It is proposed that in the circumstance that only spatiotemporal stride parameters, as well as angular and linear displacements are to be used, in gait reports, then commercial video camera speeds (25/30 Hz, 50/60 Hz when deinterlaced) can be considered as a low-cost solution to produce acceptable results.
Estimating energy expenditure from heart rate in older adults: a case for calibration.
Schrack, Jennifer A; Zipunnikov, Vadim; Goldsmith, Jeff; Bandeen-Roche, Karen; Crainiceanu, Ciprian M; Ferrucci, Luigi
2014-01-01
Accurate measurement of free-living energy expenditure is vital to understanding changes in energy metabolism with aging. The efficacy of heart rate as a surrogate for energy expenditure is rooted in the assumption of a linear function between heart rate and energy expenditure, but its validity and reliability in older adults remains unclear. To assess the validity and reliability of the linear function between heart rate and energy expenditure in older adults using different levels of calibration. Heart rate and energy expenditure were assessed across five levels of exertion in 290 adults participating in the Baltimore Longitudinal Study of Aging. Correlation and random effects regression analyses assessed the linearity of the relationship between heart rate and energy expenditure and cross-validation models assessed predictive performance. Heart rate and energy expenditure were highly correlated (r=0.98) and linear regardless of age or sex. Intra-person variability was low but inter-person variability was high, with substantial heterogeneity of the random intercept (s.d. =0.372) despite similar slopes. Cross-validation models indicated individual calibration data substantially improves accuracy predictions of energy expenditure from heart rate, reducing the potential for considerable measurement bias. Although using five calibration measures provided the greatest reduction in the standard deviation of prediction errors (1.08 kcals/min), substantial improvement was also noted with two (0.75 kcals/min). These findings indicate standard regression equations may be used to make population-level inferences when estimating energy expenditure from heart rate in older adults but caution should be exercised when making inferences at the individual level without proper calibration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, R.
Waterflooding is the most commonly used secondary oil recovery technique. One of the requirements for understanding waterflood performance is a good knowledge of the basic properties of the reservoir rocks. This study is aimed at correlating rock-pore characteristics to oil recovery from various reservoir rock types and incorporating these properties into empirical models for Predicting oil recovery. For that reason, this report deals with the analyses and interpretation of experimental data collected from core floods and correlated against measurements of absolute permeability, porosity. wettability index, mercury porosimetry properties and irreducible water saturation. The results of the radial-core the radial-core andmore » linear-core flow investigations and the other associated experimental analyses are presented and incorporated into empirical models to improve the predictions of oil recovery resulting from waterflooding, for sandstone and limestone reservoirs. For the radial-core case, the standardized regression model selected, based on a subset of the variables, predicted oil recovery by waterflooding with a standard deviation of 7%. For the linear-core case, separate models are developed using common, uncommon and combination of both types of rock properties. It was observed that residual oil saturation and oil recovery are better predicted with the inclusion of both common and uncommon rock/fluid properties into the predictive models.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, R. Derek; Gunther, Jacob H.; Moon, Todd K.
In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts tomore » a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates.We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid.We demonstrate the ML image formation and the CRLB bound for synthetically generated data.« less
West, R. Derek; Gunther, Jacob H.; Moon, Todd K.
2016-12-01
In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts tomore » a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates.We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid.We demonstrate the ML image formation and the CRLB bound for synthetically generated data.« less
We use a simple nitrogen budget model to analyze concentrations of total nitrogen (TN) in estuaries for which both nitrogen inputs and water residence time are correlated with freshwater inflow rates. While the nitrogen concentration of an estuary varies linearly with TN loading ...
ERIC Educational Resources Information Center
Zeedyk, Sasha M.; Blacher, Jan
2017-01-01
This study identified trajectories of depressive symptoms among mothers of children with or without intellectual disability longitudinally across eight time points. Results of fitting a linear growth model to the data from child ages 3-9 indicated that child behavior problems, negative financial impact, and low dispositional optimism all…
A group contribution method has been developed to correlate the acute toxicity (96 h LC50) to the fathead minnow (Pimephales promelas) for 379 organic chemicals. Multilinear regression and computational neural networks (CNNs) were used for model building. The multilinear linear m...
M.A. Mohamed; H.C. Coppel; J.D. Podgwaite; W.D. Rollinson
1983-01-01
Disease-free larvae of Neodiprion sertifer (Geoffroy) treated with its nucleopolyhedrosis virus in the field and under laboratory conditions showed a high correlation between virus accumulation and body weight. Simple linear regression models were found to fit viral accumulation versus body weight under either circumstance.
ERIC Educational Resources Information Center
Musekamp, Frank; Pearce, Jacob
2016-01-01
The goal of this paper is to examine the relationship of student motivation and achievement in low-stakes assessment contexts. Using Pearson product-moment correlations and hierarchical linear regression modelling to analyse data on 794 tertiary students who undertook a low-stakes engineering mechanics assessment (along with the questionnaire of…
NASA Astrophysics Data System (ADS)
Fukayama, Osamu; Taniguchi, Noriyuki; Suzuki, Takafumi; Mabuchi, Kunihiko
We are developing a brain-machine interface (BMI) called “RatCar," a small vehicle controlled by the neural signals of a rat's brain. An unconfined adult rat with a set of bundled neural electrodes in the brain rides on the vehicle. Each bundle consists of four tungsten wires isolated with parylene polymer. These bundles were implanted in the primary motor and premotor cortices in both hemispheres of the brain. In this paper, methods and results for estimating locomotion speed and directional changes are described. Neural signals were recorded as the rat moved in a straight line and as it changed direction in a curve. Spike-like waveforms were then detected and classified into several clusters to calculate a firing rate for each neuron. The actual locomotion velocity and directional changes of the rat were recorded concurrently. Finally, the locomotion states were correlated with the neural firing rates using a simple linear model. As a result, the abstract estimation of the locomotion velocity and directional changes were achieved.
Nonparametric regression applied to quantitative structure-activity relationships
Constans; Hirst
2000-03-01
Several nonparametric regressors have been applied to modeling quantitative structure-activity relationship (QSAR) data. The simplest regressor, the Nadaraya-Watson, was assessed in a genuine multivariate setting. Other regressors, the local linear and the shifted Nadaraya-Watson, were implemented within additive models--a computationally more expedient approach, better suited for low-density designs. Performances were benchmarked against the nonlinear method of smoothing splines. A linear reference point was provided by multilinear regression (MLR). Variable selection was explored using systematic combinations of different variables and combinations of principal components. For the data set examined, 47 inhibitors of dopamine beta-hydroxylase, the additive nonparametric regressors have greater predictive accuracy (as measured by the mean absolute error of the predictions or the Pearson correlation in cross-validation trails) than MLR. The use of principal components did not improve the performance of the nonparametric regressors over use of the original descriptors, since the original descriptors are not strongly correlated. It remains to be seen if the nonparametric regressors can be successfully coupled with better variable selection and dimensionality reduction in the context of high-dimensional QSARs.
NASA Technical Reports Server (NTRS)
Gaonkar, G.
1987-01-01
For flap lag stability of isolated rotors, experimental and analytical investigations were conducted in hover and forward flight on the adequacy of a linear quasisteady aerodynamics theory with dynamic flow. Forward flight effects on lag regressing mode were emphasized. A soft inplane hingeless rotor with three blades was tested at advance ratios as high as 0.55 and at shaft angles as high as 20 deg. The 1.62 m model rotor was untrimmed with an essentially unrestricted tilt of the tip path plane. In combination with lag natural frequencies, collective pitch settings and flap lag coupling parameters, the data base comprises nearly 1200 test points (damping and frequency) in forward flight and 200 test points in hover. By computerized symbolic manipulation, a linear model was developed in substall to predict stability margins with mode identification. To help explain the correlation between theory and data it also predicted substall and stall regions of the rotor disk from equilibrium values. The correlation showed both the strengths and weaknesses of the theory in substall ((angle of attack) equal to or less than 12 deg).
Parametrization of an Orbital-Based Linear-Scaling Quantum Force Field for Noncovalent Interactions
2015-01-01
We parametrize a linear-scaling quantum mechanical force field called mDC for the accurate reproduction of nonbonded interactions. We provide a new benchmark database of accurate ab initio interactions between sulfur-containing molecules. A variety of nonbond databases are used to compare the new mDC method with other semiempirical, molecular mechanical, ab initio, and combined semiempirical quantum mechanical/molecular mechanical methods. It is shown that the molecular mechanical force field significantly and consistently reproduces the benchmark results with greater accuracy than the semiempirical models and our mDC model produces errors twice as small as the molecular mechanical force field. The comparisons between the methods are extended to the docking of drug candidates to the Cyclin-Dependent Kinase 2 protein receptor. We correlate the protein–ligand binding energies to their experimental inhibition constants and find that the mDC produces the best correlation. Condensed phase simulation of mDC water is performed and shown to produce O–O radial distribution functions similar to TIP4P-EW. PMID:24803856
Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming
2014-01-01
The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data is traditionally transformed so that linear mixed model (LMM) based ICC can be estimated. A common transformation used is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using the natural logarithm transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate which includes fixed effects using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the ICC based on LMM even when negative binomial data was logarithm, and square root transformed. A second comparison that targeted a wider range of ICC values showed that the mean of estimated ICC closely approximated the true ICC.
Gottschalk, Maren; Sieme, Harald; Martinsson, Gunilla; Distl, Ottmar
2017-02-01
A high quality of stallion semen is of particular importance for maximum reproductive efficiency. In the present study, we estimated the relationships among estimated breeding values (EBVs) of semen traits and EBVs for the paternal component of the pregnancy rate per estrus cycle (EBV-PAT) for 100 German Warmblood stallions using correlation and general linear model analyses. The most highly correlated sperm quality trait was total number of progressively motile sperm (r = 0.36). EBV-PAT was considered in three classes with stallions 1 SD below (<80), around (80-120), and above (>120) the population mean of 100. The general linear model analysis showed significant effects for EBVs of all semen traits. EBVs of sperm quality traits greater than 100 to 110 were indicative for EBV-PAT greater than 120. Recommendations for breeding soundness examinations on the basis of the assessments of sperm quality traits and estimation of breeding values seem to be an option to support breeders to improve stallion fertility in the present and future stallion generation. Copyright © 2016 Elsevier Inc. All rights reserved.
Independent data validation of an in vitro method for ...
In vitro bioaccessibility assays (IVBA) estimate arsenic (As) relative bioavailability (RBA) in contaminated soils to improve the accuracy of site-specific human exposure assessments and risk calculations. For an IVBA assay to gain acceptance for use in risk assessment, it must be shown to reliably predict in vivo RBA that is determined in an established animal model. Previous studies correlating soil As IVBA with RBA have been limited by the use of few soil types as the source of As. Furthermore, the predictive value of As IVBA assays has not been validated using an independent set of As-contaminated soils. Therefore, the current study was undertaken to develop a robust linear model to predict As RBA in mice using an IVBA assay and to independently validate the predictive capability of this assay using a unique set of As-contaminated soils. Thirty-six As-contaminated soils varying in soil type, As contaminant source, and As concentration were included in this study, with 27 soils used for initial model development and nine soils used for independent model validation. The initial model reliably predicted As RBA values in the independent data set, with a mean As RBA prediction error of 5.3% (range 2.4 to 8.4%). Following validation, all 36 soils were used for final model development, resulting in a linear model with the equation: RBA = 0.59 * IVBA + 9.8 and R2 of 0.78. The in vivo-in vitro correlation and independent data validation presented here provide
The impact of surface area, volume, curvature, and Lennard-Jones potential to solvation modeling.
Nguyen, Duc D; Wei, Guo-Wei
2017-01-05
This article explores the impact of surface area, volume, curvature, and Lennard-Jones (LJ) potential on solvation free energy predictions. Rigidity surfaces are utilized to generate robust analytical expressions for maximum, minimum, mean, and Gaussian curvatures of solvent-solute interfaces, and define a generalized Poisson-Boltzmann (GPB) equation with a smooth dielectric profile. Extensive correlation analysis is performed to examine the linear dependence of surface area, surface enclosed volume, maximum curvature, minimum curvature, mean curvature, and Gaussian curvature for solvation modeling. It is found that surface area and surfaces enclosed volumes are highly correlated to each other's, and poorly correlated to various curvatures for six test sets of molecules. Different curvatures are weakly correlated to each other for six test sets of molecules, but are strongly correlated to each other within each test set of molecules. Based on correlation analysis, we construct twenty six nontrivial nonpolar solvation models. Our numerical results reveal that the LJ potential plays a vital role in nonpolar solvation modeling, especially for molecules involving strong van der Waals interactions. It is found that curvatures are at least as important as surface area or surface enclosed volume in nonpolar solvation modeling. In conjugation with the GPB model, various curvature-based nonpolar solvation models are shown to offer some of the best solvation free energy predictions for a wide range of test sets. For example, root mean square errors from a model constituting surface area, volume, mean curvature, and LJ potential are less than 0.42 kcal/mol for all test sets. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Davis, D. D., Jr.; Krishnamurthy, T.; Stroud, W. J.; McCleary, S. L.
1991-05-01
State-of-the-art nonlinear finite element analysis techniques are evaluated by applying them to a realistic aircraft structural component. A wing panel from the V-22 tiltrotor aircraft is chosen because it is a typical modern aircraft structural component for which there is experimental data for comparison of results. From blueprints and drawings, a very detailed finite element model containing 2284 9-node Assumed Natural-Coordinate Strain elements was generated. A novel solution strategy which accounts for geometric nonlinearity through the use of corotating element reference frames and nonlinear strain-displacement relations is used to analyze this detailed model. Results from linear analyses using the same finite element model are presented in order to illustrate the advantages and costs of the nonlinear analysis as compared with the more traditional linear analysis.
NASA Technical Reports Server (NTRS)
Davis, D. D., Jr.; Krishnamurthy, T.; Stroud, W. J.; Mccleary, S. L.
1991-01-01
State-of-the-art nonlinear finite element analysis techniques are evaluated by applying them to a realistic aircraft structural component. A wing panel from the V-22 tiltrotor aircraft is chosen because it is a typical modern aircraft structural component for which there is experimental data for comparison of results. From blueprints and drawings, a very detailed finite element model containing 2284 9-node Assumed Natural-Coordinate Strain elements was generated. A novel solution strategy which accounts for geometric nonlinearity through the use of corotating element reference frames and nonlinear strain-displacement relations is used to analyze this detailed model. Results from linear analyses using the same finite element model are presented in order to illustrate the advantages and costs of the nonlinear analysis as compared with the more traditional linear analysis.
Modeling the microstructurally dependent mechanical properties of poly(ester-urethane-urea)s.
Warren, P Daniel; Sycks, Dalton G; McGrath, Dominic V; Vande Geest, Jonathan P
2013-12-01
Poly(ester-urethane-urea) (PEUU) is one of many synthetic biodegradable elastomers under scrutiny for biomedical and soft tissue applications. The goal of this study was to investigate the effect of the experimental parameters on mechanical properties of PEUUs following exposure to different degrading environments, similar to that of the human body, using linear regression, producing one predictive model. The model utilizes two independent variables of poly(caprolactone) (PCL) type and copolymer crystallinity to predict the dependent variable of maximum tangential modulus (MTM). Results indicate that comparisons between PCLs at different degradation states are statistically different (p < 0.0003), while the difference between experimental and predicted average MTM is statistically negligible (p < 0.02). The linear correlation between experimental and predicted MTM values is R(2) = 0.75. Copyright © 2013 Wiley Periodicals, Inc., a Wiley Company.
Bachmayr-Heyda, Anna; Reiner, Agnes T; Auer, Katharina; Sukhbaatar, Nyamdelger; Aust, Stefanie; Bachleitner-Hofmann, Thomas; Mesteri, Ildiko; Grunt, Thomas W; Zeillinger, Robert; Pils, Dietmar
2015-01-27
Circular RNAs are a recently (re-)discovered abundant RNA species with presumed function as miRNA sponges, thus part of the competing endogenous RNA network. We analysed the expression of circular and linear RNAs and proliferation in matched normal colon mucosa and tumour tissues. We predicted >1,800 circular RNAs and proved the existence of five randomly chosen examples using RT-qPCR. Interestingly, the ratio of circular to linear RNA isoforms was always lower in tumour compared to normal colon samples and even lower in colorectal cancer cell lines. Furthermore, this ratio correlated negatively with the proliferation index. The correlation of global circular RNA abundance (the circRNA index) and proliferation was validated in a non-cancerous proliferative disease, idiopathic pulmonary fibrosis, ovarian cancer cells compared to cultured normal ovarian epithelial cells, and 13 normal human tissues. We are the first to report a global reduction of circular RNA abundance in colorectal cancer cell lines and cancer compared to normal tissues and discovered a negative correlation of global circular RNA abundance and proliferation. This negative correlation seems to be a general principle in human tissues as validated with three different settings. Finally, we present a simple model how circular RNAs could accumulate in non-proliferating cells.
Bachmayr-Heyda, Anna; Reiner, Agnes T.; Auer, Katharina; Sukhbaatar, Nyamdelger; Aust, Stefanie; Bachleitner-Hofmann, Thomas; Mesteri, Ildiko; Grunt, Thomas W.; Zeillinger, Robert; Pils, Dietmar
2015-01-01
Circular RNAs are a recently (re-)discovered abundant RNA species with presumed function as miRNA sponges, thus part of the competing endogenous RNA network. We analysed the expression of circular and linear RNAs and proliferation in matched normal colon mucosa and tumour tissues. We predicted >1,800 circular RNAs and proved the existence of five randomly chosen examples using RT-qPCR. Interestingly, the ratio of circular to linear RNA isoforms was always lower in tumour compared to normal colon samples and even lower in colorectal cancer cell lines. Furthermore, this ratio correlated negatively with the proliferation index. The correlation of global circular RNA abundance (the circRNA index) and proliferation was validated in a non-cancerous proliferative disease, idiopathic pulmonary fibrosis, ovarian cancer cells compared to cultured normal ovarian epithelial cells, and 13 normal human tissues. We are the first to report a global reduction of circular RNA abundance in colorectal cancer cell lines and cancer compared to normal tissues and discovered a negative correlation of global circular RNA abundance and proliferation. This negative correlation seems to be a general principle in human tissues as validated with three different settings. Finally, we present a simple model how circular RNAs could accumulate in non-proliferating cells. PMID:25624062
Solazzo, Stephanie A; Liu, Zhengjun; Lobo, S Melvyn; Ahmed, Muneeb; Hines-Peralta, Andrew U; Lenkinski, Robert E; Goldberg, S Nahum
2005-08-01
To determine whether radiofrequency (RF)-induced heating can be correlated with background electrical conductivity in a controlled experimental phantom environment mimicking different background tissue electrical conductivities and to determine the potential electrical and physical basis for such a correlation by using computer modeling. The effect of background tissue electrical conductivity on RF-induced heating was studied in a controlled system of 80 two-compartment agar phantoms (with inner wells of 0.3%, 1.0%, or 36.0% NaCl) with background conductivity that varied from 0.6% to 5.0% NaCl. Mathematical modeling of the relationship between electrical conductivity and temperatures 2 cm from the electrode (T2cm) was performed. Next, computer simulation of RF heating by using two-dimensional finite-element analysis (ETherm) was performed with parameters selected to approximate the agar phantoms. Resultant heating, in terms of both the T2cm and the distance of defined thermal isotherms from the electrode surface, was calculated and compared with the phantom data. Additionally, electrical and thermal profiles were determined by using the computer modeling data and correlated by using linear regression analysis. For each inner compartment NaCl concentration, a negative exponential relationship was established between increased background NaCl concentration and the T2cm (R2= 0.64-0.78). Similar negative exponential relationships (r2 > 0.97%) were observed for the computer modeling. Correlation values (R2) between the computer and experimental data were 0.9, 0.9, and 0.55 for the 0.3%, 1.0%, and 36.0% inner NaCl concentrations, respectively. Plotting of the electrical field generated around the RF electrode identified the potential for a dramatic local change in electrical field distribution (ie, a second electrical peak ["E-peak"]) occurring at the interface between the two compartments of varied electrical background conductivity. Linear correlations between the E-peak and heating at T2cm (R2= 0.98-1.00) and the 50 degrees C isotherm (R2= 0.99-1.00) were established. These results demonstrate the strong relationship between background tissue conductivity and RF heating and further explain electrical phenomena that occur in a two-compartment system.
Ma, Jun; Xiao, Xiangming; Zhang, Yao; Doughty, Russell; Chen, Bangqian; Zhao, Bin
2018-10-15
Accurately estimating spatial-temporal patterns of gross primary production (GPP) is important for the global carbon cycle. Satellite-based light use efficiency (LUE) models are regarded as an efficient tool in simulating spatial-temporal dynamics of GPP. However, the accuracy assessment of GPP simulations from LUE models at both spatial and temporal scales remains a challenge. In this study, we simulated GPP of vegetation in China during 2007-2014 using a LUE model (Vegetation Photosynthesis Model, VPM) based on MODIS (moderate-resolution imaging spectroradiometer) images with 8-day temporal and 500-m spatial resolutions and NCEP (National Center for Environmental Prediction) climate data. Global Ozone Monitoring Instrument 2 (GOME-2) solar-induced chlorophyll fluorescence (SIF) data were used to compare with VPM simulated GPP (GPP VPM ) temporally and spatially using linear correlation analysis. Significant positive linear correlations exist between monthly GPP VPM and SIF data over a single year (2010) and multiple years (2007-2014) in most areas of China. GPP VPM is also significantly positive correlated with GOME-2 SIF (R 2 > 0.43) spatially for seasonal scales. However, poor consistency was detected between GPP VPM and SIF data at yearly scale. GPP dynamic trends have high spatial-temporal variation in China during 2007-2014. Temperature, leaf area index (LAI), and precipitation are the most important factors influence GPP VPM in the regions of East Qinghai-Tibet Plateau, Loss Plateau, and Southwestern China, respectively. The results of this study indicate that GPP VPM is temporally and spatially in line with GOME-2 SIF data, and space-borne SIF data have great potential for evaluating LUE-based GPP models. Copyright © 2018 Elsevier B.V. All rights reserved.
Pascoe, Elizabeth C; Edvardsson, David
Although beginning evidence suggests that the capacity to derive benefit from cancer-associated experiences may be influenced by some individual psychological characteristics and traits, little is known about predictors for finding benefit from prostate cancer. The aim of this study was to explore the correlates and predictors for finding benefit from prostate cancer among a sample of men undergoing androgen deprivation. Pearson correlation and multiple linear regression modeling were performed on data collected in an acute tertiary hospital outpatient setting (N = 209) between July 2011 and December 2013 to determine correlates and predictors for finding benefit from prostate cancer. Multiple linear regression modeling showed that while the 6 predictors of self-reported coping, depression, anxiety, distress, resilience, and hope explained 38% of the variance in finding benefit, coping provided the strongest and statistically significant predictive contribution. Self-reported coping was strongly predictive of finding benefit from prostate cancer, but questions remain about if subtypes of coping strategies can be more or less predictive of finding benefit. Self-reported levels of depression, anxiety, distress, resilience, and hope had a less predictive and nonsignificant role in finding benefit from prostate cancer and raise questions about their function in this subpopulation. The findings suggest that coping strategies can maximize finding benefit from prostate cancer. Knowledge of influential coping strategies for finding benefit from prostate cancer can be immensely valuable to support men in rebuilding positive meaning amid a changed illness reality. Developing practice initiatives that foster positive meaning-making coping strategies seems valuable.
Sun, Luanluan; Yu, Canqing; Lyu, Jun; Cao, Weihua; Pang, Zengchang; Chen, Weijian; Wang, Shaojie; Chen, Rongfu; Gao, Wenjing; Li, Liming
2014-01-01
To study the correlation between fingerprints and body size indicators in adulthood. Samples were composed of twins from two sub-registries of Chinese National Twin Registry (CNTR), including 405 twin pairs in Lishui and 427 twin pairs in Qingdao. All participants were asked to complete the field survey, consisting of questionnaire, physical examination and blood collection. From the 832 twin pairs, those with complete and clear demographic prints were selected as the target population. Information of Fingerprints pixel on the demographic characteristics of these 100 twin pairs and their related adulthood body type indicators were finally chosen to form this research. Descriptive statistics and mixed linear model were used for data analyses. In the mixed linear models adjusted for age and sex, data showed that the body fat percentage of those who had arches was higher than those who did not have the arches (P = 0.002), and those who had radial loops would have higher body fat percentage when compared with ones who did not (P = 0.041). After adjusted for age, there appeared no statistically significant correlation between radial loops and systolic pressure, but the correlations of arches (P = 0.031)and radial loops (P = 0.022) to diastolic pressure still remained statistically significant. Statistically significant correlations were found between fingerprint types and body size indicators, and the fingerprint types showed a useful tool to explore the effects of uterine environment on health status in one's adulthood.
Inflammation, homocysteine and carotid intima-media thickness.
Baptista, Alexandre P; Cacdocar, Sanjiva; Palmeiro, Hugo; Faísca, Marília; Carrasqueira, Herménio; Morgado, Elsa; Sampaio, Sandra; Cabrita, Ana; Silva, Ana Paula; Bernardo, Idalécio; Gome, Veloso; Neves, Pedro L
2008-01-01
Cardiovascular disease is the main cause of morbidity and mortality in chronic renal patients. Carotid intima-media thickness (CIMT) is one of the most accurate markers of atherosclerosis risk. In this study, the authors set out to evaluate a population of chronic renal patients to determine which factors are associated with an increase in intima-media thickness. We included 56 patients (F=22, M=34), with a mean age of 68.6 years, and an estimated glomerular filtration rate of 15.8 ml/min (calculated by the MDRD equation). Various laboratory and inflammatory parameters (hsCRP, IL-6 and TNF-alpha) were evaluated. All subjects underwent measurement of internal carotid artery intima-media thickness by high-resolution real-time B-mode ultrasonography using a 10 MHz linear transducer. Intima-media thickness was used as a dependent variable in a simple linear regression model, with the various laboratory parameters as independent variables. Only parameters showing a significant correlation with CIMT were evaluated in a multiple regression model: age (p=0.001), hemoglobin (p=00.3), logCRP (p=0.042), logIL-6 (p=0.004) and homocysteine (p=0.002). In the multiple regression model we found that age (p=0.001) and homocysteine (p=0.027) were independently correlated with CIMT. LogIL-6 did not reach statistical significance (p=0.057), probably due to the small population size. The authors conclude that age and homocysteine correlate with carotid intima-media thickness, and thus can be considered as markers/risk factors in chronic renal patients.
Artificial Intelligence Techniques for Predicting and Mapping Daily Pan Evaporation
NASA Astrophysics Data System (ADS)
Arunkumar, R.; Jothiprakash, V.; Sharma, Kirty
2017-09-01
In this study, Artificial Intelligence techniques such as Artificial Neural Network (ANN), Model Tree (MT) and Genetic Programming (GP) are used to develop daily pan evaporation time-series (TS) prediction and cause-effect (CE) mapping models. Ten years of observed daily meteorological data such as maximum temperature, minimum temperature, relative humidity, sunshine hours, dew point temperature and pan evaporation are used for developing the models. For each technique, several models are developed by changing the number of inputs and other model parameters. The performance of each model is evaluated using standard statistical measures such as Mean Square Error, Mean Absolute Error, Normalized Mean Square Error and correlation coefficient (R). The results showed that daily TS-GP (4) model predicted better with a correlation coefficient of 0.959 than other TS models. Among various CE models, CE-ANN (6-10-1) resulted better than MT and GP models with a correlation coefficient of 0.881. Because of the complex non-linear inter-relationship among various meteorological variables, CE mapping models could not achieve the performance of TS models. From this study, it was found that GP performs better for recognizing single pattern (time series modelling), whereas ANN is better for modelling multiple patterns (cause-effect modelling) in the data.
NASA Astrophysics Data System (ADS)
Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro; Eriksen, Janus Juul; Ettenhuber, Patrick; Kristensen, Kasper; Larkin, Jeff; Liakh, Dmitry; Pawłowski, Filip; Vose, Aaron; Wang, Yang Min; Jørgensen, Poul
2017-03-01
We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide-Expand-Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide-Expand-Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI +X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller-Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.
NASA Astrophysics Data System (ADS)
Seo, S.; Kim, J.; Lee, H.; Jeong, U.; Kim, W. V.; Holben, B. N.; Kim, S.
2013-12-01
Atmospheric aerosols are known to play a role in climate change while it also adverse effects on human health such as respiratory and cardiovascular diseases. Especially, in terms of air quality, many studies have been conducted to estimate surface-level particulate matter (PM) concentration by using the satellite measurements to overcome the spatial limitation of ground-based aerosol measurements. In this study, we investigate the relationship between the column aerosol optical depth (AOD) and the surface PM10 concentration using the aerosol measurements during the DRAGON (Distributed Regional Aerosol Gridded Observation Network) - Asia campaign took place in Seoul from March to May, 2012. Based on the physical relationship between AOD and PM concentration, we develop various empirical linear models and evaluate the performance of these models. The best correlation (r = 0.67) is shown when vertical and size distribution of aerosols are additionally considered by using the boundary layer height (BLH) from backscattered lidar signals and the effective radius provided in AERONET inversion products. Similarly, MODIS AOD divided by BLH shows the best correlation with hourly PM10 (r = 0.62). We also identify the variability of correlations between AOD and PM10 depending on the environment characteristics in a complex megacity, Seoul by using the aerosol optical properties measured at mesoscale-level at 10 AERONET sites during the DRAGON campaign. Both AERONET and MODIS show higher correlation in residential area than near source area. Finally, we investigate the seasonal effects on the performance of various empirical linear models and find important factors of each season in PM estimation.
Miura, Michiaki; Nakamura, Junichi; Matsuura, Yusuke; Wako, Yasushi; Suzuki, Takane; Hagiwara, Shigeo; Orita, Sumihisa; Inage, Kazuhide; Kawarai, Yuya; Sugano, Masahiko; Nawata, Kento; Ohtori, Seiji
2017-12-16
Finite element analysis (FEA) of the proximal femur has been previously validated with large mesh size, but these were insufficient to simulate the model with small implants in recent studies. This study aimed to validate the proximal femoral computed tomography (CT)-based specimen-specific FEA model with smaller mesh size using fresh frozen cadavers. Twenty proximal femora from 10 cadavers (mean age, 87.1 years) were examined. CT was performed on all specimens with a calibration phantom. Nonlinear FEA prediction with stance configuration was performed using Mechanical Finder (mesh,1.5 mm tetrahedral elements; shell thickness, 0.2 mm; Poisson's coefficient, 0.3), in comparison with mechanical testing. Force was applied at a fixed vertical displacement rate, and the magnitude of the applied load and displacement were continuously recorded. The fracture load and stiffness were calculated from force-displacement curve, and the correlation between mechanical testing and FEA prediction was examined. A pilot study with one femur revealed that the equations proposed by Keller for vertebra were the most reproducible for calculating Young's modulus and the yield stress of elements of the proximal femur. There was a good linear correlation between fracture loads of mechanical testing and FEA prediction (R 2 = 0.6187) and between the stiffness of mechanical testing and FEA prediction (R 2 = 0.5499). There was a good linear correlation between fracture load and stiffness (R 2 = 0.6345) in mechanical testing and an excellent correlation between these (R 2 = 0.9240) in FEA prediction. CT-based specimen-specific FEA model of the proximal femur with small element size was validated using fresh frozen cadavers. The equations proposed by Keller for vertebra were found to be the most reproducible for the proximal femur in elderly people.
Benjamin, B; Sahu, M; Bhatnagar, U; Abhyankar, D; Srinivas, N R
2012-04-01
Literature data on the clinical pharmacokinetics of various VEGFR-2 inhibitors along with in vitro potency data were correlated and a linear relationship was established in spite of limited data set. In this work, a model set comprised of axitinib, recentin, sunitinib, pazopanib, and sorafenib were used. The in vitro potencies of the model set compounds were correlated with the published unbound plasma concentrations (Cmax, Cavg, Ctrough). The established linear regression (r2>0.90) equation was used to predict Cmax, Cavg, Ctrough of the 'prediction set' (motesanib, telatinib, CP547632, vatalanib, vandetanib) using in vitro potency and unbound protein free fraction. Cavg and Ctrough of prediction set were closely matched (0.2-1.8 fold of reported), demonstrating the usefulness of such predictions for tracking the target related modulation and/or efficacy signals within the clinically optimized population average. In case of Cmax where correlation was least anticipated, the predicted values were within 0.1-1.1 fold of those reported. Such predictions of appropriate parameters would provide rough estimates of whether or not therapeutically relevant dose(s) have been administered when clinical investigations of novel agents of this class are being performed. Therefore, it may aid in increasing clinical doses to a desired level if safety of the compound does not compromise such dose increases. In conclusion, the proposed model may prospectively guide the dosing strategies and would greatly aid the development of novel compounds in this class. © Georg Thieme Verlag KG Stuttgart · New York.
Humphries, Stephen M; Yagihashi, Kunihiro; Huckleberry, Jason; Rho, Byung-Hak; Schroeder, Joyce D; Strand, Matthew; Schwarz, Marvin I; Flaherty, Kevin R; Kazerooni, Ella A; van Beek, Edwin J R; Lynch, David A
2017-10-01
Purpose To evaluate associations between pulmonary function and both quantitative analysis and visual assessment of thin-section computed tomography (CT) images at baseline and at 15-month follow-up in subjects with idiopathic pulmonary fibrosis (IPF). Materials and Methods This retrospective analysis of preexisting anonymized data, collected prospectively between 2007 and 2013 in a HIPAA-compliant study, was exempt from additional institutional review board approval. The extent of lung fibrosis at baseline inspiratory chest CT in 280 subjects enrolled in the IPF Network was evaluated. Visual analysis was performed by using a semiquantitative scoring system. Computer-based quantitative analysis included CT histogram-based measurements and a data-driven textural analysis (DTA). Follow-up CT images in 72 of these subjects were also analyzed. Univariate comparisons were performed by using Spearman rank correlation. Multivariate and longitudinal analyses were performed by using a linear mixed model approach, in which models were compared by using asymptotic χ 2 tests. Results At baseline, all CT-derived measures showed moderate significant correlation (P < .001) with pulmonary function. At follow-up CT, changes in DTA scores showed significant correlation with changes in both forced vital capacity percentage predicted (ρ = -0.41, P < .001) and diffusing capacity for carbon monoxide percentage predicted (ρ = -0.40, P < .001). Asymptotic χ 2 tests showed that inclusion of DTA score significantly improved fit of both baseline and longitudinal linear mixed models in the prediction of pulmonary function (P < .001 for both). Conclusion When compared with semiquantitative visual assessment and CT histogram-based measurements, DTA score provides additional information that can be used to predict diminished function. Automatic quantification of lung fibrosis at CT yields an index of severity that correlates with visual assessment and functional change in subjects with IPF. © RSNA, 2017.
Emission and distribution of phosphine in paddy fields and its relationship with greenhouse gases.
Chen, Weiyi; Niu, Xiaojun; An, Shaorong; Sheng, Hong; Tang, Zhenghua; Yang, Zhiquan; Gu, Xiaohong
2017-12-01
Phosphine (PH 3 ), as a gaseous phosphide, plays an important role in the phosphorus cycle in ecosystems. In this study, the emission and distribution of phosphine, carbon dioxide (CO 2 ) and methane (CH 4 ) in paddy fields were investigated to speculate the future potential impacts of enhanced greenhouse effect on phosphorus cycle involved in phosphine by the method of Pearson correlation analysis and multiple linear regression analysis. During the whole period of rice growth, there was a significant positive correlation between CO 2 emission flux and PH 3 emission flux (r=0.592, p=0.026, n=14). Similarly, a significant positive correlation of emission flux was also observed between CH 4 and PH 3 (r=0.563, p=0.036, n=14). The linear regression relationship was determined as [PH 3 ] flux =0.007[CO 2 ] flux +0.063[CH 4 ] flux -4.638. No significant differences were observed for all values of matrix-bound phosphine (MBP), soil carbon dioxide (SCO 2 ), and soil methane (SCH 4 ) in paddy soils. However, there was a significant positive correlation between MBP and SCO 2 at heading, flowering and ripening stage. The correlation coefficients were 0.909, 0.890 and 0.827, respectively. In vertical distribution, MBP had the analogical variation trend with SCO 2 and SCH 4 . Through Pearson correlation analysis and multiple stepwise linear regression analysis, pH, redox potential (Eh), total phosphorus (TP) and acid phosphatase (ACP) were identified as the principal factors affecting MBP levels, with correlative rankings of Eh>pH>TP>ACP. The multiple stepwise regression model ([MBP]=0.456∗[ACP]+0.235∗[TP]-1.458∗[Eh]-36.547∗[pH]+352.298) was obtained. The findings in this study hold great reference values to the global biogeochemical cycling of phosphorus in the future. Copyright © 2017 Elsevier B.V. All rights reserved.
Goodarzi, Mohammad; Jensen, Richard; Vander Heyden, Yvan
2012-12-01
A Quantitative Structure-Retention Relationship (QSRR) is proposed to estimate the chromatographic retention of 83 diverse drugs on a Unisphere poly butadiene (PBD) column, using isocratic elutions at pH 11.7. Previous work has generated QSRR models for them using Classification And Regression Trees (CART). In this work, Ant Colony Optimization is used as a feature selection method to find the best molecular descriptors from a large pool. In addition, several other selection methods have been applied, such as Genetic Algorithms, Stepwise Regression and the Relief method, not only to evaluate Ant Colony Optimization as a feature selection method but also to investigate its ability to find the important descriptors in QSRR. Multiple Linear Regression (MLR) and Support Vector Machines (SVMs) were applied as linear and nonlinear regression methods, respectively, giving excellent correlation between the experimental, i.e. extrapolated to a mobile phase consisting of pure water, and predicted logarithms of the retention factors of the drugs (logk(w)). The overall best model was the SVM one built using descriptors selected by ACO. Copyright © 2012 Elsevier B.V. All rights reserved.
A Lagrangian effective field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vlah, Zvonimir; White, Martin; Aviles, Alejandro
We have continued the development of Lagrangian, cosmological perturbation theory for the low-order correlators of the matter density field. We provide a new route to understanding how the effective field theory (EFT) of large-scale structure can be formulated in the Lagrandian framework and a new resummation scheme, comparing our results to earlier work and to a series of high-resolution N-body simulations in both Fourier and configuration space. The `new' terms arising from EFT serve to tame the dependence of perturbation theory on small-scale physics and improve agreement with simulations (though with an additional free parameter). We find that all ofmore » our models fare well on scales larger than about two to three times the non-linear scale, but fail as the non-linear scale is approached. This is slightly less reach than has been seen previously. At low redshift the Lagrangian model fares as well as EFT in its Eulerian formulation, but at higher z the Eulerian EFT fits the data to smaller scales than resummed, Lagrangian EFT. Furthermore, all the perturbative models fare better than linear theory.« less
A Lagrangian effective field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vlah, Zvonimir; White, Martin; Aviles, Alejandro, E-mail: zvlah@stanford.edu, E-mail: mwhite@berkeley.edu, E-mail: aviles@berkeley.edu
We have continued the development of Lagrangian, cosmological perturbation theory for the low-order correlators of the matter density field. We provide a new route to understanding how the effective field theory (EFT) of large-scale structure can be formulated in the Lagrandian framework and a new resummation scheme, comparing our results to earlier work and to a series of high-resolution N-body simulations in both Fourier and configuration space. The 'new' terms arising from EFT serve to tame the dependence of perturbation theory on small-scale physics and improve agreement with simulations (though with an additional free parameter). We find that all ofmore » our models fare well on scales larger than about two to three times the non-linear scale, but fail as the non-linear scale is approached. This is slightly less reach than has been seen previously. At low redshift the Lagrangian model fares as well as EFT in its Eulerian formulation, but at higher z the Eulerian EFT fits the data to smaller scales than resummed, Lagrangian EFT. All the perturbative models fare better than linear theory.« less
A Lagrangian effective field theory
Vlah, Zvonimir; White, Martin; Aviles, Alejandro
2015-09-02
We have continued the development of Lagrangian, cosmological perturbation theory for the low-order correlators of the matter density field. We provide a new route to understanding how the effective field theory (EFT) of large-scale structure can be formulated in the Lagrandian framework and a new resummation scheme, comparing our results to earlier work and to a series of high-resolution N-body simulations in both Fourier and configuration space. The `new' terms arising from EFT serve to tame the dependence of perturbation theory on small-scale physics and improve agreement with simulations (though with an additional free parameter). We find that all ofmore » our models fare well on scales larger than about two to three times the non-linear scale, but fail as the non-linear scale is approached. This is slightly less reach than has been seen previously. At low redshift the Lagrangian model fares as well as EFT in its Eulerian formulation, but at higher z the Eulerian EFT fits the data to smaller scales than resummed, Lagrangian EFT. Furthermore, all the perturbative models fare better than linear theory.« less
Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling.
Perdikaris, P; Raissi, M; Damianou, A; Lawrence, N D; Karniadakis, G E
2017-02-01
Multi-fidelity modelling enables accurate inference of quantities of interest by synergistically combining realizations of low-cost/low-fidelity models with a small set of high-fidelity observations. This is particularly effective when the low- and high-fidelity models exhibit strong correlations, and can lead to significant computational gains over approaches that solely rely on high-fidelity models. However, in many cases of practical interest, low-fidelity models can only be well correlated to their high-fidelity counterparts for a specific range of input parameters, and potentially return wrong trends and erroneous predictions if probed outside of their validity regime. Here we put forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends. This introduces a new class of multi-fidelity information fusion algorithms that provide a fundamental extension to the existing linear autoregressive methodologies, while still maintaining the same algorithmic complexity and overall computational cost. The performance of the proposed methods is tested in several benchmark problems involving both synthetic and real multi-fidelity datasets from computational fluid dynamics simulations.
Structural Equation Modeling: A Framework for Ocular and Other Medical Sciences Research
Christ, Sharon L.; Lee, David J.; Lam, Byron L.; Diane, Zheng D.
2017-01-01
Structural equation modeling (SEM) is a modeling framework that encompasses many types of statistical models and can accommodate a variety of estimation and testing methods. SEM has been used primarily in social sciences but is increasingly used in epidemiology, public health, and the medical sciences. SEM provides many advantages for the analysis of survey and clinical data, including the ability to model latent constructs that may not be directly observable. Another major feature is simultaneous estimation of parameters in systems of equations that may include mediated relationships, correlated dependent variables, and in some instances feedback relationships. SEM allows for the specification of theoretically holistic models because multiple and varied relationships may be estimated together in the same model. SEM has recently expanded by adding generalized linear modeling capabilities that include the simultaneous estimation of parameters of different functional form for outcomes with different distributions in the same model. Therefore, mortality modeling and other relevant health outcomes may be evaluated. Random effects estimation using latent variables has been advanced in the SEM literature and software. In addition, SEM software has increased estimation options. Therefore, modern SEM is quite general and includes model types frequently used by health researchers, including generalized linear modeling, mixed effects linear modeling, and population average modeling. This article does not present any new information. It is meant as an introduction to SEM and its uses in ocular and other health research. PMID:24467557
Groenendaal, D; Freijer, J; de Mik, D; Bouw, M R; Danhof, M; de Lange, E C M
2007-01-01
Background and purpose: Biophase equilibration must be considered to gain insight into the mechanisms underlying the pharmacokinetic-pharmacodynamic (PK-PD) correlations of opioids. The objective was to characterise in a quantitative manner the non-linear distribution kinetics of morphine in brain. Experimental approach: Male rats received a 10-min infusion of 4 mg kg−1 of morphine, combined with a continuous infusion of the P-glycoprotein (Pgp) inhibitor GF120918 or vehicle, or 40 mg kg−1 morphine alone. Unbound extracellular fluid (ECF) concentrations obtained by intracerebral microdialysis and total blood concentrations were analysed using a population modelling approach. Key results: Blood pharmacokinetics of morphine was best described with a three-compartment model and was not influenced by GF120918. Non-linear distribution kinetics in brain ECF was observed with increasing dose. A one compartment distribution model was developed, with separate expressions for passive diffusion, active saturable influx and active efflux by Pgp. The passive diffusion rate constant was 0.0014 min−1. The active efflux rate constant decreased from 0.0195 min−1 to 0.0113 min−1 in the presence of GF120918. The active influx was insensitive to GF120918 and had a maximum transport (Nmax/Vecf) of 0.66 ng min−1 ml−1 and was saturated at low concentrations of morphine (C50=9.9 ng ml−1). Conclusions and implications: Brain distribution of morphine is determined by three factors: limited passive diffusion; active efflux, reduced by 42% by Pgp inhibition; low capacity active uptake. This implies blood concentration-dependency and sensitivity to drug-drug interactions. These factors should be taken into account in further investigations on PK-PD correlations of morphine. PMID:17471182
Li, Linling; Huang, Gan; Lin, Qianqian; Liu, Jia; Zhang, Shengli; Zhang, Zhiguo
2018-01-01
The level of pain perception is correlated with the magnitude of pain-evoked brain responses, such as laser-evoked potentials (LEP), across trials. The positive LEP-pain relationship lays the foundation for pain prediction based on single-trial LEP, but cross-individual pain prediction does not have a good performance because the LEP-pain relationship exhibits substantial cross-individual difference. In this study, we aim to explain the cross-individual difference in the LEP-pain relationship using inter-stimulus EEG (isEEG) features. The isEEG features (root mean square as magnitude and mean square successive difference as temporal variability) were estimated from isEEG data (at full band and five frequency bands) recorded between painful stimuli. A linear model was fitted to investigate the relationship between pain ratings and LEP response for fast-pain trials on a trial-by-trial basis. Then the correlation between isEEG features and the parameters of LEP-pain model (slope and intercept) was evaluated. We found that the magnitude and temporal variability of isEEG could modulate the parameters of an individual's linear LEP-pain model for fast-pain trials. Based on this, we further developed a new individualized fast-pain prediction scheme, which only used training individuals with similar isEEG features as the test individual to train the fast-pain prediction model, and obtained improved accuracy in cross-individual fast-pain prediction. The findings could help elucidate the neural mechanism of cross-individual difference in pain experience and the proposed fast-pain prediction scheme could be potentially used as a practical and feasible pain prediction method in clinical practice. PMID:29904336
Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials.
Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A; Burgueño, Juan; Bandeira E Sousa, Massaine; Crossa, José
2018-03-28
In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines ([Formula: see text]) and generated another three models. Each of these 6 models were fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian Kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interactions models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close to zero phenotypic correlations among environments. The two models (MDs and MDE with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the prediction accuracy of the model-method combination with G×E, MDs and MDe, including the random intercepts of the lines with GK method had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances but with lower genomic prediction accuracy. Copyright © 2018 Cuevas et al.
Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials
Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José
2018-01-01
In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models were fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian Kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interactions models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close to zero phenotypic correlations among environments. The two models (MDs and MDE with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the prediction accuracy of the model-method combination with G×E, MDs and MDe, including the random intercepts of the lines with GK method had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances but with lower genomic prediction accuracy. PMID:29476023
The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation
2015-08-20
way correlations. For instance, if crime waves are associated with increases in unemployment or drops in police presence, that would be hard to...time lag, ai , bj are parameters in a linear combination, 1, 2 are error terms, and Prepared for Dr. Harold Hawkins US Government Contract...selecting a proper representation for the underlying data. A qualitative comparison of GC and DTW methods on World Bank data indicates that both methods
Nonlinear spherical perturbations in quintessence models of dark energy
NASA Astrophysics Data System (ADS)
Pratap Rajvanshi, Manvendra; Bagla, J. S.
2018-06-01
Observations have confirmed the accelerated expansion of the universe. The accelerated expansion can be modelled by invoking a cosmological constant or a dynamical model of dark energy. A key difference between these models is that the equation of state parameter w for dark energy differs from ‑1 in dynamical dark energy (DDE) models. Further, the equation of state parameter is not constant for a general DDE model. Such differences can be probed using the variation of scale factor with time by measuring distances. Another significant difference between the cosmological constant and DDE models is that the latter must cluster. Linear perturbation analysis indicates that perturbations in quintessence models of dark energy do not grow to have a significant amplitude at small length scales. In this paper we study the response of quintessence dark energy to non-linear perturbations in dark matter. We use a fully relativistic model for spherically symmetric perturbations. In this study we focus on thawing models. We find that in response to non-linear perturbations in dark matter, dark energy perturbations grow at a faster rate than expected in linear perturbation theory. We find that dark energy perturbation remains localised and does not diffuse out to larger scales. The dominant drivers of the evolution of dark energy perturbations are the local Hubble flow and a supression of gradients of the scalar field. We also find that the equation of state parameter w changes in response to perturbations in dark matter such that it also becomes a function of position. The variation of w in space is correlated with density contrast for matter. Variation of w and perturbations in dark energy are more pronounced in response to large scale perturbations in matter while the dependence on the amplitude of matter perturbations is much weaker.
Ivanciuc, O; Ivanciuc, T; Klein, D J; Seitz, W A; Balaban, A T
2001-02-01
Quantitative structure-retention relationships (QSRR) represent statistical models that quantify the connection between the molecular structure and the chromatographic retention indices of organic compounds, allowing the prediction of retention indices of novel, not yet synthesized compounds, solely from their structural descriptors. Using multiple linear regression, QSRR models for the gas chromatographic Kováts retention indices of 129 alkylbenzenes are generated using molecular graph descriptors. The correlational ability of structural descriptors computed from 10 molecular matrices is investigated, showing that the novel reciprocal matrices give numerical indices with improved correlational ability. A QSRR equation with 5 graph descriptors gives the best calibration and prediction results, demonstrating the usefulness of the molecular graph descriptors in modeling chromatographic retention parameters. The sequential orthogonalization of descriptors suggests simpler QSRR models by eliminating redundant structural information.
De Carli, Margherita M; Baccarelli, Andrea A; Trevisi, Letizia; Pantic, Ivan; Brennan, Kasey JM; Hacker, Michele R; Loudon, Holly; Brunst, Kelly J; Wright, Robert O; Wright, Rosalind J; Just, Allan C
2017-01-01
Aim: We compared predictive modeling approaches to estimate placental methylation using cord blood methylation. Materials & methods: We performed locus-specific methylation prediction using both linear regression and support vector machine models with 174 matched pairs of 450k arrays. Results: At most CpG sites, both approaches gave poor predictions in spite of a misleading improvement in array-wide correlation. CpG islands and gene promoters, but not enhancers, were the genomic contexts where the correlation between measured and predicted placental methylation levels achieved higher values. We provide a list of 714 sites where both models achieved an R2 ≥0.75. Conclusion: The present study indicates the need for caution in interpreting cross-tissue predictions. Few methylation sites can be predicted between cord blood and placenta. PMID:28234020
Liu, Jian; Miller, William H
2007-06-21
It is shown how quantum mechanical time correlation functions [defined, e.g., in Eq. (1.1)] can be expressed, without approximation, in the same form as the linearized approximation of the semiclassical initial value representation (LSC-IVR), or classical Wigner model, for the correlation function [cf. Eq. (2.1)], i.e., as a phase space average (over initial conditions for trajectories) of the Wigner functions corresponding to the two operators. The difference is that the trajectories involved in the LSC-IVR evolve classically, i.e., according to the classical equations of motion, while in the exact theory they evolve according to generalized equations of motion that are derived here. Approximations to the exact equations of motion are then introduced to achieve practical methods that are applicable to complex (i.e., large) molecular systems. Four such methods are proposed in the paper--the full Wigner dynamics (full WD) and the second order WD based on "Wigner trajectories" [H. W. Lee and M. D. Scully, J. Chem. Phys. 77, 4604 (1982)] and the full Donoso-Martens dynamics (full DMD) and the second order DMD based on "Donoso-Martens trajectories" [A. Donoso and C. C. Martens, Phys. Rev. Lett. 8722, 223202 (2001)]--all of which can be viewed as generalizations of the original LSC-IVR method. Numerical tests of the four versions of this new approach are made for two anharmonic model problems, and for each the momentum autocorrelation function (i.e., operators linear in coordinate or momentum operators) and the force autocorrelation function (nonlinear operators) have been calculated. These four new approximate treatments are indeed seen to be significant improvements to the original LSC-IVR approximation.
Capisizu, Ana; Aurelian, Sorina; Zamfirescu, Andreea; Omer, Ioana; Haras, Monica; Ciobotaru, Camelia; Onose, Liliana; Spircu, Tiberiu; Onose, Gelu
2015-01-01
To assess the impact of socio-demographic and comorbidity factors, and quantified depressive symptoms on disability in inpatients. Observational cross-sectional study, including a number of 80 elderly (16 men, 64 women; mean age 72.48 years; standard deviation 9.95 years) admitted in the Geriatrics Clinic of "St. Luca" Hospital, Bucharest, between May-July, 2012. We used the Functional Independence Measure, Geriatric Depression Scale and an array of socio-demographic and poly-pathology parameters. Statistical analysis included Wilcoxon and Kruskal-Wallis tests for ordinal variables, linear bivariate correlations, general linear model analysis, ANOVA. FIM scores were negatively correlated with age (R=-0.301; 95%CI=-0.439 -0.163; p=0.007); GDS scores had a statistically significant negative correlation (R=-0.322; 95% CI=-0.324 -0.052; p=0.004) with FIM scores. A general linear model, including other variables (gender, age, provenance, matrimonial state, living conditions, education, respectively number of chronic illnesses) as factors, found living conditions (p=0.027) and the combination of matrimonial state and gender (p=0.004) to significantly influence FIM scores. ANOVA showed significant differences in FIM scores stratified by the number of chronic diseases (p=0.035). Our study objectified the negative impact of depression on functional status; interestingly, education had no influence on FIM scores; living conditions and a combination of matrimonial state and gender had an important impact: patients with living spouses showed better functional scores than divorced/widowers; the number of chronic diseases also affected the FIM scores: lower in patients with significant polypathology. These findings should be considered when designing geriatric rehabilitation programs, especially for home--including skilled--cares.
Impact of kerogen heterogeneity on sorption of organic pollutants. 2. Sorption equilibria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, C.; Yu, Z.Q.; Xiao, B.H.
2009-08-15
Phenanthrene and naphthalene sorption isotherms were measured for three different series of kerogen materials using completely mixed batch reactors. Sorption isotherms were nonlinear for each sorbate-sorbent system, and the Freundlich isotherm equation fit the sorption data well. The Freundlich isotherm linearity parameter n ranged from 0.192 to 0.729 for phenanthrene and from 0.389 to 0.731 for naphthalene. The n values correlated linearly with rigidity and aromaticity of the kerogen matrix, but the single-point, organic carbon-normalized distribution coefficients varied dramatically among the tested sorbents. A dual-mode sorption equation consisting of a linear partitioning domain and a Langmuir adsorption domain adequately quantifiedmore » the overall sorption equilibrium for each sorbent-sorbate system. Both models fit the data well, with r{sup 2} values of 0.965 to 0.996 for the Freundlich model and 0.963 to 0.997 for the dual-mode model for the phenanthrene sorption isotherms. The dual-mode model fitting results showed that as the rigidity and aromaticity of the kerogen matrix increased, the contribution of the linear partitioning domain to the overall sorption equilibrium decreased, whereas the contribution of the Langmuir adsorption domain increased. The present study suggested that kerogen materials found in soils and sediments should not be treated as a single, unified, carbonaceous sorbent phase.« less
Alghanim, Hussain; Antunes, Joana; Silva, Deborah Soares Bispo Santos; Alho, Clarice Sampaio; Balamurugan, Kuppareddi; McCord, Bruce
2017-11-01
Recent developments in the analysis of epigenetic DNA methylation patterns have demonstrated that certain genetic loci show a linear correlation with chronological age. It is the goal of this study to identify a new set of epigenetic methylation markers for the forensic estimation of human age. A total number of 27 CpG sites at three genetic loci, SCGN, DLX5 and KLF14, were examined to evaluate the correlation of their methylation status with age. These sites were evaluated using 72 blood samples and 91 saliva samples collected from volunteers with ages ranging from 5 to 73 years. DNA was bisulfite modified followed by PCR amplification and pyrosequencing to determine the level of DNA methylation at each CpG site. In this study, certain CpG sites in SCGN and KLF14 loci showed methylation levels that were correlated with chronological age, however, the tested CpG sites in DLX5 did not show a correlation with age. Using a 52-saliva sample training set, two age-predictor models were developed by means of a multivariate linear regression analysis for age prediction. The two models performed similarly with a single-locus model explaining 85% of the age variance at a mean absolute deviation of 5.8 years and a dual-locus model explaining 84% of the age variance with a mean absolute deviation of 6.2 years. In the validation set, the mean absolute deviation was measured to be 8.0 years and 7.1 years for the single- and dual-locus model, respectively. Another age predictor model was also developed using a 40-blood sample training set that accounted for 71% of the age variance. This model gave a mean absolute deviation of 6.6 years for the training set and 10.3years for the validation set. The results indicate that specific CpGs in SCGN and KLF14 can be used as potential epigenetic markers to estimate age using saliva and blood specimens. These epigenetic markers could provide important information in cases where the determination of a suspect's age is critical in developing investigative leads. Copyright © 2017. Published by Elsevier B.V.
Heavy and light hadron production and D-hadron correlation in relativistic heavy-ion collisions
Cao, Shanshan; Luo, Tan; He, Yayun; ...
2017-09-25
We establish a linear Boltzmann transport (LBT) model coupled to hydrodynamical background to study hard parton evolution in heavy-ion collisions. Both elastic and inelastic scatterings are included in our calculations; and heavy and light flavor partons are treated on the same footing. Within this LBT model, we provide good descriptions of heavy and light hadron suppression and anisotropic flow in heavy-ion collisions. Angular correlation functions between heavy and light flavor hadrons are studied for the first time and shown able to quantify not only the amount of heavy quark energy loss, but also how the parton energy is re-distributed inmore » parton showers.« less
Heavy and light hadron production and D-hadron correlation in relativistic heavy-ion collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Shanshan; Luo, Tan; He, Yayun
We establish a linear Boltzmann transport (LBT) model coupled to hydrodynamical background to study hard parton evolution in heavy-ion collisions. Both elastic and inelastic scatterings are included in our calculations; and heavy and light flavor partons are treated on the same footing. Within this LBT model, we provide good descriptions of heavy and light hadron suppression and anisotropic flow in heavy-ion collisions. Angular correlation functions between heavy and light flavor hadrons are studied for the first time and shown able to quantify not only the amount of heavy quark energy loss, but also how the parton energy is re-distributed inmore » parton showers.« less
Time Evolution of Modeled Reynolds Stresses in Planar Homogeneous Flows
NASA Technical Reports Server (NTRS)
Jongen, T.; Gatski, T. B.
1997-01-01
The analytic expression of the time evolution of the Reynolds stress anisotropy tensor in all planar homogeneous flows is obtained by exact integration of the modeled differential Reynolds stress equations. The procedure is based on results of tensor representation theory, is applicable for general pressure-strain correlation tensors, and can account for any additional turbulence anisotropy effects included in the closure. An explicit solution of the resulting system of scalar ordinary differential equations is obtained for the case of a linear pressure-strain correlation tensor. The properties of this solution are discussed, and the dynamic behavior of the Reynolds stresses is studied, including limit cycles and sensitivity to initial anisotropies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, Anirban; Maitra, Saikat; Ghosh, Sobhan
Highlights: • Sonochemical synthesis of iron-doped zinc oxide nanoparticles. • Green synthesis without alkali at room temperature. • Characterization by UV–vis spectroscopy, FESEM, XRD and EDX. • Influence of precursor composition on characteristics. • Composition and characteristics are correlated. - Abstract: Iron-doped zinc oxide nanoparticles have been synthesized sonochemically from aqueous acetyl acetonate precursors of different proportions. Synthesized nanoparticles were characterized with UV–vis spectroscopy, X-ray diffraction and microscopy. Influences of precursor mixture on the characteristics have been examined and modeled. Linear correlations have been proposed between dopant dosing, extent of doping and band gap energy. Experimental data corroborated with themore » proposed models.« less
Appraisal of jump distributions in ensemble-based sampling algorithms
NASA Astrophysics Data System (ADS)
Dejanic, Sanda; Scheidegger, Andreas; Rieckermann, Jörg; Albert, Carlo
2017-04-01
Sampling Bayesian posteriors of model parameters is often required for making model-based probabilistic predictions. For complex environmental models, standard Monte Carlo Markov Chain (MCMC) methods are often infeasible because they require too many sequential model runs. Therefore, we focused on ensemble methods that use many Markov chains in parallel, since they can be run on modern cluster architectures. Little is known about how to choose the best performing sampler, for a given application. A poor choice can lead to an inappropriate representation of posterior knowledge. We assessed two different jump moves, the stretch and the differential evolution move, underlying, respectively, the software packages EMCEE and DREAM, which are popular in different scientific communities. For the assessment, we used analytical posteriors with features as they often occur in real posteriors, namely high dimensionality, strong non-linear correlations or multimodality. For posteriors with non-linear features, standard convergence diagnostics based on sample means can be insufficient. Therefore, we resorted to an entropy-based convergence measure. We assessed the samplers by means of their convergence speed, robustness and effective sample sizes. For posteriors with strongly non-linear features, we found that the stretch move outperforms the differential evolution move, w.r.t. all three aspects.
Weak lensing shear and aperture mass from linear to non-linear scales
NASA Astrophysics Data System (ADS)
Munshi, Dipak; Valageas, Patrick; Barber, Andrew J.
2004-05-01
We describe the predictions for the smoothed weak lensing shear, γs, and aperture mass,Map, of two simple analytical models of the density field: the minimal tree model and the stellar model. Both models give identical results for the statistics of the three-dimensional density contrast smoothed over spherical cells and only differ by the detailed angular dependence of the many-body density correlations. We have shown in previous work that they also yield almost identical results for the probability distribution function (PDF) of the smoothed convergence, κs. We find that the two models give rather close results for both the shear and the positive tail of the aperture mass. However, we note that at small angular scales (θs<~ 2 arcmin) the tail of the PDF, , for negative Map shows a strong variation between the two models, and the stellar model actually breaks down for θs<~ 0.4 arcmin and Map < 0. This shows that the statistics of the aperture mass provides a very precise probe of the detailed structure of the density field, as it is sensitive to both the amplitude and the detailed angular behaviour of the many-body correlations. On the other hand, the minimal tree model shows good agreement with numerical simulations over all the scales and redshifts of interest, while both models provide a good description of the PDF, , of the smoothed shear components. Therefore, the shear and the aperture mass provide robust and complementary tools to measure the cosmological parameters as well as the detailed statistical properties of the density field.
Routledge, Kylie M; Williams, Leanne M; Harris, Anthony W F; Schofield, Peter R; Clark, C Richard; Gatt, Justine M
2018-06-01
Currently there is a very limited understanding of how mental wellbeing versus anxiety and depression symptoms are associated with emotion processing behaviour. For the first time, we examined these associations using a behavioural emotion task of positive and negative facial expressions in 1668 healthy adult twins. Linear mixed model results suggested faster reaction times to happy facial expressions was associated with higher wellbeing scores, and slower reaction times with higher depression and anxiety scores. Multivariate twin modelling identified a significant genetic correlation between depression and anxiety symptoms and reaction time to happy facial expressions, in the absence of any significant correlations with wellbeing. We also found a significant negative phenotypic relationship between depression and anxiety symptoms and accuracy for identifying neutral emotions, although the genetic or environment correlations were not significant in the multivariate model. Overall, the phenotypic relationships between speed of identifying happy facial expressions and wellbeing on the one hand, versus depression and anxiety symptoms on the other, were in opposing directions. Twin modelling revealed a small common genetic correlation between response to happy faces and depression and anxiety symptoms alone, suggesting that wellbeing and depression and anxiety symptoms show largely independent relationships with emotion processing at the behavioral level. Copyright © 2018 Elsevier B.V. All rights reserved.