Sample records for random coefficient modeling

  1. Tests of Hypotheses Arising in the Correlated Random Coefficient Model

    PubMed Central

    Heckman, James J.; Schmierer, Daniel

    2010-01-01

    This paper examines the correlated random coefficient model. It extends the analysis of Swamy (1971), who pioneered the uncorrelated random coefficient model in economics. We develop the properties of the correlated random coefficient model and derive a new representation of the variance of the instrumental variable estimator for that model. We develop tests of the validity of the correlated random coefficient model against the null hypothesis of the uncorrelated random coefficient model. PMID:21170148

  2. Random effects coefficient of determination for mixed and meta-analysis models

    PubMed Central

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2011-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, R_r^2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If R_r^2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of R_r^2 away from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for R_r^2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for the combination of 13 studies on tuberculosis vaccine. PMID:23750070
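
    The random-intercept case makes the idea concrete: there, the share of the conditional variance attributable to random effects reduces to the random-intercept variance over the sum of the random-intercept and residual variances. Below is a minimal sketch of that special case on simulated data with statsmodels; the data, names, and the variance ratio used here are illustrative assumptions, not code or formulas taken from the paper.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulate grouped data with a random intercept (illustrative values).
    rng = np.random.default_rng(0)
    n_groups, n_per = 30, 10
    g = np.repeat(np.arange(n_groups), n_per)
    u = rng.normal(0.0, 1.0, n_groups)[g]          # group-level random intercepts
    x = rng.normal(size=g.size)
    y = 2.0 + 0.5 * x + u + rng.normal(0.0, 1.0, g.size)
    df = pd.DataFrame({"y": y, "x": x, "g": g})

    # Fit the random-intercept model and form the variance ratio.
    m = smf.mixedlm("y ~ x", df, groups=df["g"]).fit()
    var_u = float(m.cov_re.iloc[0, 0])             # random-intercept variance
    var_e = float(m.scale)                         # residual variance
    print(f"share of conditional variance from random effects: "
          f"{var_u / (var_u + var_e):.2f}")
    ```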

  3. Random effects coefficient of determination for mixed and meta-analysis models.

    PubMed

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, R_r^2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If R_r^2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of R_r^2 away from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for R_r^2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for the combination of 13 studies on tuberculosis vaccine.

  4. An instrumental variable random-coefficients model for binary outcomes

    PubMed Central

    Chesher, Andrew; Rosen, Adam M

    2014-01-01

    In this paper, we study a random-coefficients model for a binary outcome. We allow for the possibility that some or even all of the explanatory variables are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent with the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalized instrumental variable models, and we thus apply identification results from our previous studies of such models to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration. PMID:25798048

  5. A Structural Modeling Approach to a Multilevel Random Coefficients Model.

    ERIC Educational Resources Information Center

    Rovine, Michael J.; Molenaar, Peter C. M.

    2000-01-01

    Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)

  6. Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Cudeck, Robert

    2009-01-01

    A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…

  7. Random diffusion and leverage effect in financial markets.

    PubMed

    Perelló, Josep; Masoliver, Jaume

    2003-03-01

    We prove that Brownian market models with random diffusion coefficients provide an exact measure of the leverage effect [J-P. Bouchaud et al., Phys. Rev. Lett. 87, 228701 (2001)]. This empirical fact asserts that past returns are anticorrelated with the future diffusion coefficient. Several models with random diffusion have been suggested, but without a quantitative study of the leverage effect. Our analysis lets us fully estimate all parameters involved and allows a deeper study of correlated random diffusion models that may have practical implications for many aspects of financial markets.

  8. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology comprises dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one- or two-week out-of-sample predictions for most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
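
    As a concrete picture of the selected model class, the sketch below simulates a Poisson log-link series whose covariate coefficient follows a first-order random walk. All numbers are invented for illustration, and the paper's actual inference was done with MCMC rather than forward simulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T = 200                                                  # weeks
    temp = 25 + 3 * np.sin(2 * np.pi * np.arange(T) / 52)    # toy weekly covariate
    beta = np.cumsum(rng.normal(0.0, 0.02, T))               # RW1 time-varying coefficient
    eta = 1.5 + beta * (temp - temp.mean())                  # linear predictor, log scale
    cases = rng.poisson(np.exp(eta))                         # weekly case counts
    print(cases[:10])
    ```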

  9. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  10. Bayesian dynamic modeling of time series of dengue disease case counts

    PubMed Central

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-01-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology comprises dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one- or two-week out-of-sample predictions for most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health. PMID:28671941

  11. Using Multisite Experiments to Study Cross-Site Variation in Treatment Effects: A Hybrid Approach with Fixed Intercepts and A Random Treatment Coefficient

    ERIC Educational Resources Information Center

    Bloom, Howard S.; Raudenbush, Stephen W.; Weiss, Michael J.; Porter, Kristin

    2017-01-01

    The present article considers a fundamental question in evaluation research: "By how much do program effects vary across sites?" The article first presents a theoretical model of cross-site impact variation and a related estimation model with a random treatment coefficient and fixed site-specific intercepts. This approach eliminates…

  12. Search for Directed Networks by Different Random Walk Strategies

    NASA Astrophysics Data System (ADS)

    Zhu, Zi-Qi; Jin, Xiao-Ling; Huang, Zhi-Long

    2012-03-01

    A comparative study is carried out on the efficiency of five different random walk strategies searching on directed networks constructed from several typical complex networks. Due to the difference in search efficiency of the strategies rooted in network clustering, the clustering coefficient in a random walker's eye on directed networks is defined and computed to be half that of the corresponding undirected networks. The search processes are performed on directed networks based on the Erdős-Rényi model, the Watts-Strogatz model, the Barabási-Albert model, and a clustered scale-free network model. It is found that the self-avoiding random walk strategy is the best search strategy for such directed networks. Compared to the unrestricted random walk strategy, path-iteration-avoiding random walks can also make the search process much more efficient. However, no-triangle-loop and no-quadrangle-loop random walks do not improve the search efficiency as expected, which differs from the situation on undirected networks, since the clustering coefficient of directed networks is smaller than that of undirected networks.
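
    A minimal sketch of the winning strategy is given below: a directed self-avoiding walk that never revisits a node and restarts from the source at dead ends. The graph, the restart rule, and the step cap are illustrative assumptions rather than the paper's exact protocol.

    ```python
    import random
    import networkx as nx

    def self_avoiding_search(G, start, target, max_steps=10_000):
        """Directed self-avoiding walk; restarts from `start` at dead ends."""
        node, visited = start, {start}
        for step in range(max_steps):
            if node == target:
                return step                    # search cost in steps
            nbrs = [v for v in G.successors(node) if v not in visited]
            if not nbrs:                       # dead end: restart the walk
                node, visited = start, {start}
            else:
                node = random.choice(nbrs)
                visited.add(node)
        return None                            # target not found within the cap

    G = nx.gnp_random_graph(200, 0.05, directed=True, seed=42)
    print(self_avoiding_search(G, 0, 150))
    ```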

  13. Improved estimates of partial volume coefficients from noisy brain MRI using spatial context.

    PubMed

    Manjón, José V; Tohka, Jussi; Robles, Montserrat

    2010-11-01

    This paper addresses the problem of accurate voxel-level estimation of tissue proportions in human brain magnetic resonance imaging (MRI). Due to the finite resolution of acquisition systems, MRI voxels can contain contributions from more than a single tissue type. The voxel-level estimation of this fractional content is known as partial volume coefficient estimation. In the present work, two new methods to calculate the partial volume coefficients under noisy conditions are introduced and compared with current similar methods. Concretely, a novel Markov Random Field model allowing sharp transitions between partial volume coefficients of neighbouring voxels and an advanced non-local means filtering technique are proposed to reduce the errors due to random noise in the partial volume coefficient estimation. In addition, a comparison was made to find out how the different methodologies affect the measurement of the brain tissue type volumes. Based on the obtained results, the main conclusions are that (1) both Markov Random Field modelling and non-local means filtering improved the partial volume coefficient estimation results, and (2) non-local means filtering was the better of the two strategies for partial volume coefficient estimation.
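
    As a rough illustration of the filter-then-estimate idea (not the authors' implementation), the sketch below denoises a two-tissue phantom with scikit-image's non-local means and reads partial volume coefficients off a linear two-class mixing model; the phantom, tissue intensities, and filter settings are assumptions.

    ```python
    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma

    # Two-tissue phantom with a vertical boundary, plus Gaussian noise.
    rng = np.random.default_rng(2)
    img = np.zeros((64, 64))
    img[:, 32:] = 1.0
    noisy = img + rng.normal(0.0, 0.1, img.shape)

    sigma = estimate_sigma(noisy)
    den = denoise_nl_means(noisy, h=1.15 * sigma, patch_size=5, patch_distance=6)

    # Linear two-class mixing: with tissue intensities 0 and 1, the denoised
    # value itself estimates the partial volume coefficient of tissue 1.
    pvc = np.clip(den, 0.0, 1.0)
    print(pvc[32, 30:34].round(2))    # values near the boundary are fractional
    ```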

  14. Anomalous diffusion and dynamics of fluorescence recovery after photobleaching in the random-comb model

    NASA Astrophysics Data System (ADS)

    Yuste, S. B.; Abad, E.; Baumgaertner, A.

    2016-07-01

    We address the problem of diffusion on a comb whose teeth display varying lengths. Specifically, the length ℓ of each tooth is drawn from a probability distribution displaying power-law behavior at large ℓ: P(ℓ) ~ ℓ^(-(1+α)) with α > 0. To start with, we focus on the computation of the anomalous diffusion coefficient for the subdiffusive motion along the backbone. This quantity is subsequently used as an input to compute concentration recovery curves mimicking fluorescence recovery after photobleaching experiments in comblike geometries such as spiny dendrites. Our method is based on the mean-field description provided by the well-tested continuous-time random-walk approach for the random-comb model, and the obtained analytical result for the diffusion coefficient is confirmed by numerical simulations of a random walk with finite steps in time and space along the backbone and the teeth. We subsequently incorporate retardation effects arising from binding-unbinding kinetics into our model and obtain a scaling law characterizing the corresponding change in the diffusion coefficient. Finally, we show that recovery curves obtained with the help of the analytical expression for the anomalous diffusion coefficient cannot be fitted perfectly by a model based on scaled Brownian motion, i.e., a standard diffusion equation with a time-dependent diffusion coefficient. However, differences between the exact curves and such fits are small, thereby providing justification for the practical use of models relying on scaled Brownian motion as a fitting procedure for recovery curves arising from particle diffusion in comblike systems.
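
    The backbone subdiffusion can be reproduced with a direct random-walk simulation. The sketch below draws tooth lengths with the stated power-law tail and tracks the mean squared displacement along the backbone; the step rules and parameter values are illustrative assumptions, not the paper's exact scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    alpha, n_teeth, n_walkers, n_steps = 0.5, 1_000, 2_000, 5_000
    # Tooth lengths with tail P(l) ~ l^(-(1+alpha)), via Pareto (Lomax) draws.
    lengths = np.ceil(rng.pareto(alpha, n_teeth) + 1).astype(int)

    x = np.zeros(n_walkers, dtype=int)    # position along the backbone
    h = np.zeros(n_walkers, dtype=int)    # height on the current tooth (0 = backbone)
    for _ in range(n_steps):
        on_backbone = h == 0
        move = rng.integers(0, 3, n_walkers)           # left / right / enter tooth
        x = np.where(on_backbone & (move == 0), x - 1, x)
        x = np.where(on_backbone & (move == 1), x + 1, x)
        h = np.where(on_backbone & (move == 2), 1, h)
        step = rng.choice([-1, 1], n_walkers)          # up/down on a tooth
        tip = lengths[x % n_teeth]                     # length of the local tooth
        h = np.where(~on_backbone, np.clip(h + step, 0, tip), h)

    print(f"backbone MSD after {n_steps} steps: {np.mean(x.astype(float)**2):.1f}")
    ```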

  15. Prediction of random-regression coefficient for daily milk yield after 305 days in milk by using the regression-coefficient estimates from the first 305 days.

    PubMed

    Yamazaki, Takeshi; Takeda, Hisato; Hagiya, Koichi; Yamaguchi, Satoshi; Sasaki, Osamu

    2018-03-13

    Because lactation periods in dairy cows lengthen with increasing total milk production, it is important to predict individual productivities after 305 days in milk (DIM) to determine the optimal lactation period. We therefore examined whether the random regression (RR) coefficients from 306 to 450 DIM (M2) can be predicted from those during the first 305 DIM (M1) using a random regression model. We analyzed test-day milk records from 85,690 Holstein cows in their first lactations and 131,727 cows in their later (second to fifth) lactations. Data in M1 and M2 were analyzed separately using different single-trait RR animal models. We then performed a multiple regression analysis of the RR coefficients of M2 on those of M1 during the first and later lactations. First-order Legendre polynomials were practical covariates of random regression for the milk yields of M2. All RR coefficients for the additive genetic (AG) effect and the intercept for the permanent environmental (PE) effect of M2 had moderate to strong correlations with the intercept for the AG effect of M1. The coefficients of determination for the multiple regression of the combined intercepts for the AG and PE effects of M2 on the coefficients for the AG effect of M1 were moderate to high. The daily milk yields of M2 predicted using the RR coefficients for the AG effect of M1 were highly correlated with those obtained using the coefficients of M2. Milk production after 305 DIM can therefore be predicted from the RR coefficient estimates of the AG effect during the first 305 DIM.
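
    The prediction step itself is an ordinary multiple regression of later-lactation coefficients on early-lactation ones. The sketch below mimics it on synthetic per-animal coefficients; the mapping matrix and noise level are invented for illustration and have no connection to the study's estimates.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(4)
    n_animals = 1_000
    m1 = rng.normal(size=(n_animals, 2))              # per-animal M1 RR coefficients
    true_map = np.array([[0.8, 0.1],
                         [0.2, 0.6]])                 # invented M1 -> M2 mapping
    m2 = m1 @ true_map + rng.normal(0.0, 0.3, (n_animals, 2))

    reg = LinearRegression().fit(m1, m2)              # multiple regression of M2 on M1
    print(f"R^2 of the coefficient prediction: {reg.score(m1, m2):.2f}")
    ```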

  16. Statistical Analysis for Multisite Trials Using Instrumental Variables with Random Coefficients

    ERIC Educational Resources Information Center

    Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako

    2012-01-01

    Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…

  17. A scattering model for forested area

    NASA Technical Reports Server (NTRS)

    Karam, M. A.; Fung, A. K.

    1988-01-01

    A forested area is modeled as a volume of randomly oriented and distributed disc- or needle-shaped leaves shading a distribution of branches modeled as randomly oriented finite-length dielectric cylinders above an irregular soil surface. Since the radii of branches have a wide range of sizes, the model only requires the length of a branch to be large compared with its radius, which may be any size relative to the incident wavelength. In addition, the model also assumes the thickness of a disc-shaped leaf or the radius of a needle-shaped leaf is much smaller than the electromagnetic wavelength. The scattering phase matrices for disc, needle, and cylinder are developed in terms of the scattering amplitudes of the corresponding fields, which are computed by the forward scattering theorem. These quantities, along with the Kirchhoff scattering model for a randomly rough surface, are used in the standard radiative transfer formulation to compute the backscattering coefficient. Numerical illustrations for the backscattering coefficient are given as a function of the shading factor, incidence angle, leaf orientation distribution, branch orientation distribution, and the number density of leaves. Also illustrated are the properties of the extinction coefficient as a function of leaf and branch orientation distributions. Comparisons are made with measured backscattering coefficients from forested areas reported in the literature.

  18. Macroscopic damping model for structural dynamics with random polycrystalline configurations

    NASA Astrophysics Data System (ADS)

    Yang, Yantao; Cui, Junzhi; Yu, Yifan; Xiang, Meizhen

    2018-06-01

    In this paper, the macroscopic damping model for the dynamical behavior of structures with random polycrystalline configurations at micro-nano scales is established. First, the global motion equation of a crystal is decomposed into a set of motion equations with independent single degrees of freedom (SDOF) along normal discrete modes, and then damping behavior is introduced into each SDOF motion. Through the interpolation of discrete modes, the continuous representation of damping effects for the crystal is obtained. Second, from the energy conservation law the expression of the damping coefficient is derived, and an approximate formula for the damping coefficient is given. Next, the continuous damping coefficient for a polycrystalline cluster is expressed, the continuous dynamical equation with a damping term is obtained, and concrete damping coefficients for a polycrystalline Cu sample are shown. Finally, by using the statistical two-scale homogenization method, the macroscopic homogenized dynamical equation containing a damping term for structures with random polycrystalline configurations at micro-nano scales is set up.

  19. Is the Non-Dipole Magnetic Field Random?

    NASA Technical Reports Server (NTRS)

    Walker, Andrew D.; Backus, George E.

    1996-01-01

    Statistical modelling of the Earth's magnetic field B has a long history. In particular, the spherical harmonic coefficients of scalar fields derived from B can be treated as Gaussian random variables. In this paper, we give examples of highly organized fields whose spherical harmonic coefficients pass tests for independent Gaussian random variables. The fact that coefficients at some depth may be usefully summarized as independent samples from a normal distribution need not imply that there really is some physical, random process at that depth. In fact, the field can be extremely structured and still be regarded for some purposes as random. In this paper, we examined the radial magnetic field B_r produced by the core, but the results apply to any scalar field on the core-mantle boundary (CMB) which determines B outside the CMB.

  20. Modeling of Thermal Phase Noise in a Solid Core Photonic Crystal Fiber-Optic Gyroscope.

    PubMed

    Song, Ningfang; Ma, Kun; Jin, Jing; Teng, Fei; Cai, Wei

    2017-10-26

    A theoretical model of the thermal phase noise in a square-wave modulated solid core photonic crystal fiber-optic gyroscope has been established, and then verified by measurements. The results demonstrate a good agreement between theory and experiment. The contribution of the thermal phase noise to the random walk coefficient of the gyroscope is derived. A fiber coil of 2.8 km length is used in the experimental solid core photonic crystal fiber-optic gyroscope, showing a random walk coefficient of 9.25 × 10⁻⁵ deg/√h.
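
    For context, a random walk coefficient in deg/√h is conventionally read off the Allan deviation of the rate signal at an averaging time of 1 s. The sketch below applies that convention to synthetic white rate noise; the sampling rate and noise level are illustrative, and this is not the authors' measurement pipeline.

    ```python
    import numpy as np

    def allan_dev(rate, fs, m):
        """Overlapping Allan deviation of a rate signal at tau = m / fs."""
        theta = np.cumsum(rate) / fs                   # integrated angle
        tau = m / fs
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        return np.sqrt(0.5 * np.mean(d ** 2)) / tau

    fs = 100.0                                         # sample rate, Hz
    rng = np.random.default_rng(5)
    rate = rng.normal(0.0, 0.01, 1_000_000)            # white rate noise, deg/s
    sigma_1s = allan_dev(rate, fs, int(fs))            # Allan deviation at 1 s
    # sigma(1 s) in deg/s converts to deg/sqrt(h) via the factor sqrt(3600) = 60.
    print(f"random walk coefficient ~ {60 * sigma_1s:.4f} deg/sqrt(h)")
    ```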

  21. Effective Stochastic Model for Reactive Transport

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A. M.; Zheng, B.; Barajas-Solano, D. A.

    2017-12-01

    We propose an effective stochastic advection-diffusion-reaction (SADR) model. Unlike traditional advection-dispersion-reaction models, the SADR model describes mechanical and diffusive mixing as two separate processes. In the SADR model, the mechanical mixing is driven by a random advective velocity with the variance given by the coefficient of mechanical dispersion. The diffusive mixing is modeled as Fickian diffusion with an effective diffusion coefficient. Both coefficients are given in terms of the Péclet number (Pe) and the coefficient of molecular diffusion. We use experimental results to demonstrate that for transport and bimolecular reactions in porous media the SADR model is significantly more accurate than the traditional dispersion model, which overestimates the mass of the reaction product by as much as 25%.

  22. Temporal behavior of the effective diffusion coefficients for transport in heterogeneous saturated aquifers

    NASA Astrophysics Data System (ADS)

    Suciu, N.; Vamos, C.; Vereecken, H.; Vanderborght, J.; Hardelauf, H.

    2003-04-01

    When the small-scale transport is modeled by a Wiener process and the large-scale heterogeneity by a random velocity field, the effective coefficients, D_eff, can be decomposed as sums of the local coefficient, D, a contribution of the random advection, D_adv, and a contribution of the randomness of the trajectory of the plume center of mass, D_cm: D_eff = D + D_adv - D_cm. The coefficient D_adv is similar to that introduced by Taylor in 1921, and more recent works associate it with thermodynamic equilibrium. The "ergodic hypothesis" says that over large time intervals D_cm vanishes and the effect of the heterogeneity is described by D_adv = D_eff - D. In this work we investigate numerically the long-time behavior of the effective coefficients as well as the validity of the ergodic hypothesis. The transport in every realization of the velocity field is modeled with the Global Random Walk Algorithm, which is able to track as many particles as necessary to achieve a statistically reliable simulation of the process. Averages over realizations are further used to estimate mean coefficients and standard deviations. In order to remain in the frame of most of the theoretical approaches, the velocity field was generated in a linear approximation and the logarithm of the hydraulic conductivity was taken to have an exponentially decaying correlation with variance equal to 0.1. Our results show that even in these idealized conditions, the effective coefficients tend to asymptotic constant values only when the plume travels thousands of correlation lengths (while the first-order theories usually predict Fickian behavior after tens of correlation lengths) and that the ergodicity conditions are still far from being met.
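
    The decomposition can be checked on a toy ensemble via the law of total variance: the pooled spreading rate (D + D_adv) equals the within-realization rate (D_eff) plus the center-of-mass rate (D_cm). The sketch below uses a frozen uniform velocity per realization as a stand-in for a correlated random field, so all parameter values are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_real, n_part, n_steps, dt, D = 200, 400, 100, 0.1, 0.05
    t_final = n_steps * dt

    x = np.zeros((n_real, n_part))
    v = rng.normal(0.0, 0.3, n_real)[:, None]       # frozen velocity per realization
    for _ in range(n_steps):
        x += v * dt + np.sqrt(2 * D * dt) * rng.normal(size=x.shape)

    rate = lambda var: var / (2 * t_final)          # 1-D coefficient from a variance
    D_pooled = rate(x.var())                        # D + D_adv
    D_cm = rate(x.mean(axis=1).var())               # randomness of plume centers
    D_eff = rate(x.var(axis=1).mean())              # within-realization spreading
    print(f"D_eff = {D_eff:.4f}  vs  (D + D_adv) - D_cm = {D_pooled - D_cm:.4f}")
    ```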

  23. Box-Cox Mixed Logit Model for Travel Behavior Analysis

    NASA Astrophysics Data System (ADS)

    Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.

    2010-09-01

    To represent the behavior of travelers when they are deciding how they are going to get to their destination, discrete choice models, based on random utility theory, have become one of the most widely used tools. The field in which these models were developed was halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In usual practice, the most frequently employed discrete choice models implement a fixed-coefficient utility function that is linear in the parameters. The principal aim of this paper is to present the viability of specifying utility functions with random coefficients that are nonlinear in the parameters, in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they have seldom been used in practice until recently. The specification of random coefficients, however, began with the probit and the hedonic models in the 1970s and, after a period of apparently little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model, original to the authors. It includes the estimation of the Box-Cox exponents in addition to the parameters of the random coefficients distribution. The probability of choosing an alternative is an integral that is calculated by simulation. The estimation of the model is carried out by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models that are inconsistent with real behavior have been studied with simulation experiments.
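
    The simulation step at the heart of any mixed logit is easy to show in miniature. The sketch below computes choice probabilities for two alternatives with a fixed cost coefficient, a normally distributed time coefficient, and a Box-Cox transform of travel time, averaging the logit formula over coefficient draws; every number is an invented placeholder, not an estimate from the paper.

    ```python
    import numpy as np

    def box_cox(x, lam):
        return np.log(x) if abs(lam) < 1e-8 else (x ** lam - 1.0) / lam

    rng = np.random.default_rng(7)
    n_draws = 5_000
    cost = np.array([2.0, 3.5])                     # two alternatives' costs
    time = np.array([40.0, 25.0])                   # travel times
    lam = 0.5                                       # Box-Cox exponent (estimated in practice)
    b_cost = -0.8                                   # fixed coefficient
    b_time = rng.normal(-0.05, 0.02, n_draws)[:, None]   # random coefficient draws

    util = b_cost * cost + b_time * box_cox(time, lam)
    p = np.exp(util)
    p = (p / p.sum(axis=1, keepdims=True)).mean(axis=0)  # simulated choice probabilities
    print(p.round(3))
    ```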

  24. Backscattering from a randomly rough dielectric surface

    NASA Technical Reports Server (NTRS)

    Fung, Adrian K.; Li, Zongqian; Chen, K. S.

    1992-01-01

    A backscattering model for scattering from a randomly rough dielectric surface is developed based on an approximate solution of a pair of integral equations for the tangential surface fields. Both like- and cross-polarized scattering coefficients are obtained. It is found that the like-polarized scattering coefficients contain two types of terms: single scattering terms and multiple scattering terms. The single scattering terms in like-polarized scattering are shown to reduce to the first-order solutions derived from the small perturbation method when the roughness parameters satisfy the slightly-rough conditions. When surface roughnesses are large but the surface slope is small, only a single scattering term corresponding to the standard Kirchhoff model is significant. If the surface slope is large, the multiple scattering term will also be significant. The cross-polarized backscattering coefficients satisfy reciprocity and contain only multiple scattering terms. The difference between vertical and horizontal scattering coefficients is found to increase with the dielectric constant and is generally smaller than that predicted by the first-order small perturbation model. Good agreement is obtained between this model and measurements from statistically known surfaces.

  25. Mixed-effects varying-coefficient model with skewed distribution coupled with cause-specific varying-coefficient hazard model with random-effects for longitudinal-competing risks data analysis.

    PubMed

    Lu, Tao; Wang, Min; Liu, Guangying; Dong, Guang-Hui; Qian, Feng

    2016-01-01

    It is well known that there is a strong relationship between HIV viral load and CD4 cell counts in AIDS studies. However, the relationship between them changes during the course of treatment and may vary among individuals. During treatment, some individuals may experience terminal events such as death. Because the terminal event may be related to the individual's viral load measurements, the terminal mechanism is non-ignorable. Furthermore, there exist competing risks from multiple types of events, such as AIDS-related death and other death. Most joint models for the analysis of longitudinal-survival data developed in the literature have focused on constant coefficients and assume a symmetric distribution for the endpoints, which does not meet the needs for investigating the nature of the varying relationship between HIV viral load and CD4 cell counts in practice. We develop a mixed-effects varying-coefficient model with skewed distribution coupled with a cause-specific varying-coefficient hazard model with random effects to deal with the varying relationship between the two endpoints for longitudinal-competing risks survival data. A fully Bayesian inference procedure is established to estimate parameters in the joint model. The proposed method is applied to a multicenter AIDS cohort study. Various scenario-based potential models that account for partial data features are compared. Some interesting findings are presented.

  26. The episodic random utility model unifies time trade-off and discrete choice approaches in health state valuation

    PubMed Central

    Craig, Benjamin M; Busschbach, Jan JV

    2009-01-01

    Background: To present an episodic random utility model that unifies time trade-off and discrete choice approaches in health state valuation. Methods: First, we introduce two alternative random utility models (RUMs) for health preferences: the episodic RUM and the more common instant RUM. For the interpretation of time trade-off (TTO) responses, we show that the episodic model implies a coefficient estimator, and the instant model implies a mean slope estimator. Secondly, we demonstrate these estimators and the differences between the estimates for 42 health states using TTO responses from the seminal Measurement and Valuation in Health (MVH) study conducted in the United Kingdom. Mean slopes are estimated with and without Dolan's transformation of worse-than-death (WTD) responses. Finally, we demonstrate an exploded probit estimator, an extension of the coefficient estimator for discrete choice data that accommodates both TTO and rank responses. Results: By construction, mean slopes are less than or equal to coefficients, because slopes are fractions and, therefore, magnify downward errors in WTD responses. The Dolan transformation of WTD responses causes mean slopes to increase in similarity to coefficient estimates, yet they are not equivalent (i.e., absolute mean difference = 0.179). Unlike mean slopes, coefficient estimates demonstrate strong concordance with rank-based predictions (Lin's rho = 0.91). Combining TTO and rank responses under the exploded probit model improves the identification of health state values, decreasing the average width of confidence intervals from 0.057 to 0.041 compared to TTO-only results. Conclusion: The episodic RUM expands upon the theoretical framework underlying health state valuation and contributes to health econometrics by motivating the selection of coefficient and exploded probit estimators for the analysis of TTO and rank responses. In future MVH surveys, sample size requirements may be reduced through the incorporation of multiple responses under a single estimator. PMID:19144115

  27. Modeling of Thermal Phase Noise in a Solid Core Photonic Crystal Fiber-Optic Gyroscope

    PubMed Central

    Song, Ningfang; Ma, Kun; Jin, Jing; Teng, Fei; Cai, Wei

    2017-01-01

    A theoretical model of the thermal phase noise in a square-wave modulated solid core photonic crystal fiber-optic gyroscope has been established, and then verified by measurements. The results demonstrate a good agreement between theory and experiment. The contribution of the thermal phase noise to the random walk coefficient of the gyroscope is derived. A fiber coil of 2.8 km length is used in the experimental solid core photonic crystal fiber-optic gyroscope, showing a random walk coefficient of 9.25 × 10⁻⁵ deg/√h. PMID:29072605

  28. Extensively Parameterized Mutation-Selection Models Reliably Capture Site-Specific Selective Constraint.

    PubMed

    Spielman, Stephanie J; Wilke, Claus O

    2016-11-01

    The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development.

  29. Analysis of multivariate longitudinal kidney function outcomes using generalized linear mixed models.

    PubMed

    Jaffa, Miran A; Gebregziabher, Mulugeta; Jaffa, Ayad A

    2015-06-14

    Renal transplant patients are mandated to have continuous assessment of their kidney function over time to monitor disease progression determined by changes in blood urea nitrogen (BUN), serum creatinine (Cr), and estimated glomerular filtration rate (eGFR). Multivariate analysis of these outcomes that aims at identifying the differential factors that affect disease progression is of great clinical significance. Thus our study aims at demonstrating the application of different joint modeling approaches with random coefficients on a cohort of renal transplant patients and presenting a comparison of their performance through a pseudo-simulation study. The objective of this comparison is to identify the model with the best performance and to determine whether accuracy compensates for complexity in the different multivariate joint models. We propose a novel application of multivariate Generalized Linear Mixed Models (mGLMM) to analyze multiple longitudinal kidney function outcomes collected over 3 years on a cohort of 110 renal transplantation patients. The correlated outcomes BUN, Cr, and eGFR and the effects of various covariates such as patient's gender, age, and race on these markers were determined holistically using different mGLMMs. The performance of the various mGLMMs that encompass a shared random intercept (SHRI), shared random intercept and slope (SHRIS), separate random intercept (SPRI), and separate random intercept and slope (SPRIS) was assessed to identify the one that has the best fit and most accurate estimates. A bootstrap pseudo-simulation study was conducted to gauge the tradeoff between the complexity and accuracy of the models. Accuracy was determined using two measures: the mean of the differences between the estimates of the bootstrapped datasets and the true beta obtained from the application of each model on the renal dataset, and the mean of the square of these differences. The results showed that SPRI provided the most accurate estimates and did not exhibit any computational or convergence problem. Higher accuracy was demonstrated when the level of complexity increased from shared random coefficient models to the separate random coefficient alternatives, with SPRI showing the best fit and most accurate estimates.

  30. Revisiting crash spatial heterogeneity: A Bayesian spatially varying coefficients approach.

    PubMed

    Xu, Pengpeng; Huang, Helai; Dong, Ni; Wong, S C

    2017-01-01

    This study was performed to investigate the spatially varying relationships between crash frequency and related risk factors. A Bayesian spatially varying coefficients model was elaborately introduced as a methodological alternative to simultaneously account for the unstructured and spatially structured heterogeneity of the regression coefficients in predicting crash frequencies. The proposed method was appealing in that the parameters were modeled via a conditional autoregressive prior distribution, which involved a single set of random effects and a spatial correlation parameter with extreme values corresponding to pure unstructured or pure spatially correlated random effects. A case study using a three-year crash dataset from Hillsborough County, Florida, was conducted to illustrate the proposed model. Empirical analysis confirmed the presence of both unstructured and spatially correlated variations in the effects of contributory factors on severe crash occurrences. The findings also suggested that ignoring spatially structured heterogeneity may result in biased parameter estimates and incorrect inferences, while assuming the regression coefficients to be spatially clustered only is probably subject to the issue of over-smoothness.

  31. A varying-coefficient method for analyzing longitudinal clinical trials data with nonignorable dropout

    PubMed Central

    Forster, Jeri E.; MaWhinney, Samantha; Ball, Erika L.; Fairclough, Diane

    2011-01-01

    Dropout is common in longitudinal clinical trials and when the probability of dropout depends on unobserved outcomes even after conditioning on available data, it is considered missing not at random and therefore nonignorable. To address this problem, mixture models can be used to account for the relationship between a longitudinal outcome and dropout. We propose a Natural Spline Varying-coefficient mixture model (NSV), which is a straightforward extension of the parametric Conditional Linear Model (CLM). We assume that the outcome follows a varying-coefficient model conditional on a continuous dropout distribution. Natural cubic B-splines are used to allow the regression coefficients to semiparametrically depend on dropout and inference is therefore more robust. Additionally, this method is computationally stable and relatively simple to implement. We conduct simulation studies to evaluate performance and compare methodologies in settings where the longitudinal trajectories are linear and dropout time is observed for all individuals. Performance is assessed under conditions where model assumptions are both met and violated. In addition, we compare the NSV to the CLM and a standard random-effects model using an HIV/AIDS clinical trial with probable nonignorable dropout. The simulation studies suggest that the NSV is an improvement over the CLM when dropout has a nonlinear dependence on the outcome. PMID:22101223

  32. Weibull crack density coefficient for polydimensional stress states

    NASA Technical Reports Server (NTRS)

    Gross, Bernard; Gyekenyesi, John P.

    1989-01-01

    A structural ceramic analysis and reliability evaluation code has recently been developed encompassing volume and surface flaw induced fracture, modeled by the two-parameter Weibull probability density function. A segment of the software involves computing the Weibull polydimensional stress state crack density coefficient from uniaxial stress experimental fracture data. The relationship of the polydimensional stress coefficient to the uniaxial stress coefficient is derived for a shear-insensitive material with a random surface flaw population.

  33. Permeability of model porous medium formed by random discs

    NASA Astrophysics Data System (ADS)

    Gubaidullin, A. A.; Gubkin, A. S.; Igoshin, D. E.; Ignatev, P. A.

    2018-03-01

    A two-dimensional model of a porous medium with a skeleton of randomly located overlapping discs is proposed. The geometry and computational grid are built in the open package Salome. The flow of a Newtonian liquid in the longitudinal and transverse directions is calculated and its flow rate is determined. The numerical solution of the Navier-Stokes equations for a given pressure drop at the boundaries of the domain is obtained in the open package OpenFOAM. The calculated flow rate is used to determine the permeability coefficient on the basis of Darcy's law. To evaluate the representativeness of the computational domain, the permeability coefficients in the longitudinal and transverse directions are compared.
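
    The last step (flow rate to permeability) is Darcy's law rearranged, as in the short sketch below; the numbers are placeholders with no connection to the paper's geometry.

    ```python
    def permeability(Q, mu, L, A, dP):
        """Darcy's law rearranged for permeability: k = Q * mu * L / (A * dP)."""
        return Q * mu * L / (A * dP)

    # Illustrative SI values: flow rate, viscosity, sample length, cross-section,
    # and pressure drop (a 2-D domain can be treated with unit depth).
    k = permeability(Q=1.2e-9, mu=1.0e-3, L=0.01, A=1.0e-4, dP=100.0)
    print(f"k = {k:.3e} m^2")
    ```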

  34. Genetic parameters for stayability to consecutive calvings in Zebu cattle.

    PubMed

    Silva, D O; Santana, M L; Ayres, D R; Menezes, G R O; Silva, L O C; Nobre, P R C; Pereira, R J

    2017-12-22

    Longer-lived cows tend to be more profitable and the stayability trait is a selection criterion correlated with longevity. An alternative to the traditional approach to evaluate stayability is its definition based on consecutive calvings, whose main advantage is the more accurate evaluation of young bulls. However, no study using this alternative approach has been conducted for Zebu breeds. Therefore, the objective of this study was to compare linear random regression models to fit stayability to consecutive calvings of Guzerá, Nelore and Tabapuã cows and to estimate genetic parameters for this trait in the respective breeds. Data up to the eighth calving were used. The models included the fixed effects of age at first calving and year-season of birth of the cow and the random effects of contemporary group, additive genetic, permanent environmental and residual. Random regressions were modeled by orthogonal Legendre polynomials of order 1 to 4 (2 to 5 coefficients) for contemporary group, additive genetic and permanent environmental effects. Using the Deviance Information Criterion as the selection criterion, the model with 4 regression coefficients for each effect was the most adequate for the Nelore and Tabapuã breeds, and the model with 5 coefficients was recommended for the Guzerá breed. For Guzerá, heritabilities ranged from 0.05 to 0.08, showing a quadratic trend with a peak between the fourth and sixth calving. For the Nelore and Tabapuã breeds, the estimates ranged from 0.03 to 0.07 and from 0.03 to 0.08, respectively, and increased with increasing calving number. The additive genetic correlations exhibited a similar trend among breeds and were higher for stayability between closer calvings. Even between more distant calvings (second vs. eighth), stayability showed a moderate to high genetic correlation, which was 0.77, 0.57 and 0.79 for the Guzerá, Nelore and Tabapuã breeds, respectively. For Guzerá, when the models with 4 or 5 regression coefficients were compared, the rank correlations between predicted breeding values for the intercept were always higher than 0.99, indicating the possibility of practical application of the least parameterized model. In conclusion, the model with 4 random regression coefficients is recommended for the genetic evaluation of stayability to consecutive calvings in Zebu cattle.
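
    To make the covariate structure concrete, the sketch below builds the Legendre design matrix that a 4-coefficient (order-3) random regression would use across calvings one to eight, with calving number standardized to [-1, 1]; this reproduces only the basis construction, not the genetic evaluation itself.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    calving = np.arange(1, 9)                             # calvings 1..8
    t = 2 * (calving - calving.min()) / (calving.max() - calving.min()) - 1
    order = 3                                             # 4 regression coefficients
    Phi = np.column_stack([legendre.legval(t, np.eye(order + 1)[j])
                           for j in range(order + 1)])    # columns P0(t)..P3(t)
    print(Phi.round(3))
    ```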

  35. Chain pooling model selection as developed for the statistical analysis of a rotor burst protection experiment

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1977-01-01

    As many as three iterated statistical model deletion procedures were considered for an experiment. Population model coefficients were chosen to simulate a saturated 2^4 experiment having an unfavorable distribution of parameter values. Using random number studies, three model selection strategies were developed, namely, (1) a strategy to be used in anticipation of large coefficients of variation, approximately 65 percent, (2) a strategy to be used in anticipation of small coefficients of variation, 4 percent or less, and (3) a security regret strategy to be used in the absence of such prior knowledge.

  36. Modeling Heterogeneity in Relationships between Initial Status and Rates of Change: Treating Latent Variable Regression Coefficients as Random Coefficients in a Three-Level Hierarchical Model

    ERIC Educational Resources Information Center

    Choi, Kilchan; Seltzer, Michael

    2010-01-01

    In studies of change in education and numerous other fields, interest often centers on how differences in the status of individuals at the start of a period of substantive interest relate to differences in subsequent change. In this article, the authors present a fully Bayesian approach to estimating three-level Hierarchical Models in which latent…

  37. Generalized linear mixed models with varying coefficients for longitudinal data.

    PubMed

    Zhang, Daowen

    2004-03-01

    The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B, 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of a certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.

  38. Firm-Related Training Tracks: A Random Effects Ordered Probit Model

    ERIC Educational Resources Information Center

    Groot, Wim; van den Brink, Henriette Maassen

    2003-01-01

    A random effects ordered response model of training is estimated to analyze the existence of training tracks and time varying coefficients in training frequency. Two waves of a Dutch panel survey of workers are used covering the period 1992-1996. The amount of training received by workers increased during the period 1994-1996 compared to…

  39. Asymptotic Effect of Misspecification in the Random Part of the Multilevel Model

    ERIC Educational Resources Information Center

    Berkhof, Johannes; Kampen, Jarl Kennard

    2004-01-01

    The authors examine the asymptotic effect of omitting a random coefficient in the multilevel model and derive expressions for the change in (a) the variance components estimator and (b) the estimated variance of the fixed effects estimator. They apply the method of moments, which yields a closed form expression for the omission effect. In…

  40. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform-distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
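
    The standard exact construction for such a generator conditions the second variate on the first; whether this matches the paper's FORTRAN routine in detail is an assumption, but it has the same exact-in-theory property the abstract describes.

    ```python
    import numpy as np

    def bivariate_normal_pair(mu1, mu2, s1, s2, rho, rng):
        """Exact transform of two independent standard normal draws."""
        z1, z2 = rng.standard_normal(2)
        x = mu1 + s1 * z1
        y = mu2 + s2 * (rho * z1 + np.sqrt(1.0 - rho ** 2) * z2)
        return x, y

    rng = np.random.default_rng(8)
    pairs = np.array([bivariate_normal_pair(1.0, -2.0, 2.0, 0.5, 0.7, rng)
                      for _ in range(100_000)])
    print(np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1])    # close to 0.7
    ```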

  41. Single-image super-resolution based on Markov random field and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai

    2011-04-01

    Learning-based methods are well adopted in image super-resolution. In this paper, we propose a new learning-based approach using the contourlet transform and a Markov random field. The proposed algorithm employs the contourlet transform rather than the conventional wavelet transform to represent image features and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated with experiments on facial, vehicle plate, and real scene images. Better visual quality is achieved in terms of peak signal-to-noise ratio and the structural similarity measure.

  42. Breeding value accuracy estimates for growth traits using random regression and multi-trait models in Nelore cattle.

    PubMed

    Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G

    2011-06-28

    We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than the traditional finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.

  3. Electromagnetic wave extinction within a forested canopy

    NASA Technical Reports Server (NTRS)

    Karam, M. A.; Fung, A. K.

    1989-01-01

    A forested canopy is modeled by a collection of randomly oriented finite-length cylinders shaded by randomly oriented and distributed disk- or needle-shaped leaves. For a plane wave exciting the forested canopy, the extinction coefficient is formulated in terms of the extinction cross sections (ECSs) in the local frame of each forest component and the Eulerian angles of orientation (used to describe the orientation of each component). The ECSs in the local frame for the finite-length cylinders used to model the branches are obtained by using the forward-scattering theorem. ECSs in the local frame for the disk- and needle-shaped leaves are obtained by the summation of the absorption and scattering cross sections. The behavior of the extinction coefficients with the incidence angle is investigated numerically for both deciduous and coniferous forests. The dependencies of the extinction coefficients on the orientation of the leaves are illustrated numerically.

  4. Active microwave remote sensing of an anisotropic random medium layer

    NASA Technical Reports Server (NTRS)

    Lee, J. K.; Kong, J. A.

    1985-01-01

    A two-layer anisotropic random medium model has been developed to study the active remote sensing of the earth. The dyadic Green's function for a two-layer anisotropic medium is developed and used in conjunction with the first-order Born approximation to calculate the backscattering coefficients. It is shown that strong cross-polarization occurs in the single scattering process and is indispensable in the interpretation of radar measurements of sea ice at different frequencies, polarizations, and viewing angles. The effects of anisotropy on the angular responses of backscattering coefficients are also illustrated.

  5. Identifying Bearing Rotordynamic Coefficients Using an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Miller, Brad A.; Howard, Samuel A.

    2008-01-01

    An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
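
    A minimal sketch of the approach on a hypothetical single-degree-of-freedom spring-damper analogue rather than the paper's eight-coefficient bearing model: the stiffness k and damping c are appended to the state as random-walk (Gauss-Markov) parameters, and the filter estimates them from noisy displacement measurements alone. All values, including the tunable process and measurement covariances Q and R, are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        m, dt, n = 1.0, 1e-3, 20000
        k_true, c_true = 400.0, 1.5                  # hypothetical coefficients

        def step(s, u):
            # One Euler step of m*x'' = u - k*x - c*x'; k, c are random walks.
            x, v, k, c = s
            return np.array([x + dt * v, v + dt * (u - k * x - c * v) / m, k, c])

        def jac(s):
            # Jacobian of step() with respect to the state [x, v, k, c].
            x, v, k, c = s
            return np.array([[1.0, dt, 0.0, 0.0],
                             [-dt * k / m, 1.0 - dt * c / m, -dt * x / m, -dt * v / m],
                             [0.0, 0.0, 1.0, 0.0],
                             [0.0, 0.0, 0.0, 1.0]])

        H = np.array([[1.0, 0.0, 0.0, 0.0]])         # only displacement measured
        Q = np.diag([1e-12, 1e-12, 1e-2, 1e-5])      # tunable process noise
        R = 1e-8                                     # tunable measurement noise

        u = np.sin(2 * np.pi * 5.0 * dt * np.arange(n))   # imbalance-like forcing
        truth = np.array([0.0, 0.0, k_true, c_true])
        s = np.array([0.0, 0.0, 300.0, 1.0])         # deliberately poor start
        P = np.diag([1e-6, 1e-6, 1e4, 1.0])
        for i in range(n):
            truth = step(truth, u[i])
            z = truth[0] + rng.normal(0.0, 1e-4)     # noisy displacement
            F = jac(s)
            s, P = step(s, u[i]), F @ P @ F.T + Q    # predict
            S = (H @ P @ H.T)[0, 0] + R
            K = (P @ H.T / S).ravel()
            s = s + K * (z - s[0])                   # update
            P = (np.eye(4) - np.outer(K, H)) @ P

        print("estimated k, c:", s[2], s[3])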

  6. Failure tolerance of spike phase synchronization in coupled neural networks

    NASA Astrophysics Data System (ADS)

    Jalili, Mahdi

    2011-09-01

    Neuronal synchronization plays an important role in various functions of the nervous system, such as binding, cognition, information processing, and computation. In this paper, we investigated how random and intentional failures in the nodes of a network influence its phase synchronization properties. We considered both artificially constructed networks using models such as preferential attachment, Watts-Strogatz, and Erdős-Rényi as well as a number of real neuronal networks. The failure strategy was either random or intentional based on properties of the nodes such as degree, clustering coefficient, betweenness centrality, and vulnerability. The Hindmarsh-Rose model was considered as the mathematical model for the individual neurons, and the phase synchronization of the spike trains was monitored as a function of the percentage/number of removed nodes. The numerical simulations were supplemented by considering coupled non-identical Kuramoto oscillators. Failures based on the clustering coefficient, i.e., removing the nodes with high values of the clustering coefficient, had the least effect on the spike synchrony in all of the networks. This was followed by random failures, in which nodes were removed at random. However, the behavior of the other three attack strategies was not uniform across the networks, and different strategies were the most influential in different network structures.
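
    A toy version of the experiment, assuming Kuramoto phase oscillators in place of Hindmarsh-Rose neurons and a Watts-Strogatz graph in place of the real neuronal networks: synchrony is measured by the order parameter before and after removing the nodes with the highest clustering coefficient.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(2)

        def order_parameter(G, K=2.0, dt=0.01, steps=3000):
            # Euler-integrate Kuramoto oscillators on G and return the final
            # phase coherence r = |<exp(i*theta)>| (r = 1 is full synchrony).
            n = G.number_of_nodes()
            theta = rng.uniform(0, 2 * np.pi, n)
            omega = rng.normal(0, 0.1, n)
            A = nx.to_numpy_array(G)
            deg = np.maximum(A.sum(axis=1), 1.0)
            for _ in range(steps):
                diff = np.sin(theta[None, :] - theta[:, None])  # sin(th_j - th_i)
                theta += dt * (omega + K * (A * diff).sum(axis=1) / deg)
            return np.abs(np.exp(1j * theta).mean())

        G = nx.watts_strogatz_graph(200, 6, 0.1, seed=2)
        print("intact network:", order_parameter(G))

        # Intentional failure: drop the 20 nodes with the highest clustering
        # coefficient, the strategy the study found least disruptive.
        cc = nx.clustering(G)
        doomed = sorted(cc, key=cc.get, reverse=True)[:20]
        H = G.copy()
        H.remove_nodes_from(doomed)
        print("after attack:", order_parameter(H))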

  7. Damage/fault diagnosis in an operating wind turbine under uncertainty via a vibration response Gaussian mixture random coefficient model based framework

    NASA Astrophysics Data System (ADS)

    Avendaño-Valencia, Luis David; Fassois, Spilios D.

    2017-07-01

    The study focuses on vibration response based health monitoring for an operating wind turbine, which features time-dependent dynamics under environmental and operational uncertainty. A Gaussian Mixture Model Random Coefficient (GMM-RC) model based Structural Health Monitoring framework postulated in a companion paper is adopted and assessed. The assessment is based on vibration response signals obtained from a simulated offshore 5 MW wind turbine. The non-stationarity in the vibration signals originates from the inertial properties that continually evolve due to blade rotation, as well as from the wind characteristics, while uncertainty is introduced by random variations of the wind speed within the range of 10-20 m/s. Monte Carlo simulations are performed using six distinct structural states, including the healthy state and five types of damage/fault in the tower, the blades, and the transmission, with each one of them characterized by four distinct levels. Random vibration response modeling and damage diagnosis are illustrated, along with pertinent comparisons with state-of-the-art diagnosis methods. The results demonstrate consistently good performance of the GMM-RC model based framework, offering significant performance improvements over state-of-the-art methods. Most damage types and levels are shown to be properly diagnosed using a single vibration sensor.

  8. Stochastic Resonance and Safe Basin of Single-Walled Carbon Nanotubes with Strongly Nonlinear Stiffness under Random Magnetic Field.

    PubMed

    Xu, Jia; Li, Chao; Li, Yiran; Lim, Chee Wah; Zhu, Zhiwen

    2018-05-04

    In this paper, a nonlinear model of single-walled carbon nanotubes is developed and the strongly nonlinear dynamic characteristics of such nanotubes subjected to a random magnetic field are studied. The nonlocal effect of the microstructure is considered based on Eringen's differential constitutive model. The natural frequency of the strongly nonlinear dynamic system is obtained by the energy function method, and the drift coefficient and the diffusion coefficient are verified. The stationary probability density function of the system dynamic response is given and the fractal boundary of the safe basin is provided. Theoretical analysis and numerical simulation show that stochastic resonance occurs when varying the random magnetic field intensity. The boundary of the safe basin has fractal characteristics and the area of the safe basin decreases when the intensity of the magnetic field permeability increases.

  9. Semi-analytical Model for Estimating Absorption Coefficients of Optically Active Constituents in Coastal Waters

    NASA Astrophysics Data System (ADS)

    Wang, D.; Cui, Y.

    2015-12-01

    The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving absorption coefficients of optically active constituents in turbid coastal waters, and to further improve the retrieval with a proposed semi-analytical model (SAA). Unlike the QAA procedure, in which ap(531) and ag(531) are derived from empirical estimates of a(531) and a(551), the SAA derives ap(531) and ag(531) semi-analytically. The two models are calibrated and evaluated against datasets taken from 19 independent cruises on the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model outperforms the QAA model in absorption retrieval: using the SAA model to retrieve absorption coefficients of optically active constituents on the West Florida Shelf decreases the random uncertainty of estimation by >23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal Water; Absorption Coefficient; Semi-analytical Model

  10. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    NASA Astrophysics Data System (ADS)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

    In this paper, we define the variance of fuzzy random variables through alpha levels. We present a theorem showing that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem whose objective function coefficients are fuzzy random variables, and we solve it by a variance approach. The approach transforms the MOLP problem with fuzzy random objective function coefficients into an MOLP problem with fuzzy objective function coefficients. Using a weighting method, we obtain a linear program with fuzzy coefficients, which we solve by a simplex method for fuzzy linear programming.

  11. The Bayesian group lasso for confounded spatial data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin E.; Walsh, Daniel P.

    2017-01-01

    Generalized linear mixed models for spatial processes are widely used in applied statistics. In many applications of the spatial generalized linear mixed model (SGLMM), the goal is to obtain inference about regression coefficients while achieving optimal predictive ability. When implementing the SGLMM, multicollinearity among covariates and the spatial random effects can make computation challenging and influence inference. We present a Bayesian group lasso prior with a single tuning parameter that can be chosen to optimize predictive ability of the SGLMM and jointly regularize the regression coefficients and spatial random effect. We implement the group lasso SGLMM using efficient Markov chain Monte Carlo (MCMC) algorithms and demonstrate how multicollinearity among covariates and the spatial random effect can be monitored as a derived quantity. To test our method, we compared several parameterizations of the SGLMM using simulated data and two examples from plant ecology and disease ecology. In all examples, problematic levels of multicollinearity occurred and influenced sampling efficiency and inference. We found that the group lasso prior resulted in roughly twice the effective sample size for MCMC samples of regression coefficients and can have higher and less variable predictive accuracy based on out-of-sample data when compared to the standard SGLMM.

  12. A new neural network model for solving random interval linear programming problems.

    PubMed

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Magnetic field line random walk in models and simulations of reduced magnetohydrodynamic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snodin, A. P.; Ruffolo, D.; Oughton, S.

    2013-12-10

    The random walk of magnetic field lines is examined numerically and analytically in the context of reduced magnetohydrodynamic (RMHD) turbulence, which provides a useful description of plasmas dominated by a strong mean field, such as in the solar corona. A recently developed non-perturbative theory of magnetic field line diffusion is compared with the diffusion coefficients obtained by accurate numerical tracing of magnetic field lines for both synthetic models and direct numerical simulations of RMHD. Statistical analysis of an ensemble of trajectories confirms the applicability of the theory, which very closely matches the numerical field line diffusion coefficient as a function of distance z along the mean magnetic field for a wide range of the Kubo number R. This theory employs Corrsin's independence hypothesis, sometimes thought to be valid only at low R. However, the results demonstrate that it works well up to R = 10, both for a synthetic RMHD model and an RMHD simulation. The numerical results from the RMHD simulation are compared with and without phase randomization, demonstrating a clear effect of coherent structures on the field line random walk for a very low Kubo number.

  14. A stochastic model for density-dependent microwave Snow- and Graupel scattering coefficients of the NOAA JCSDA community radiative transfer model

    NASA Astrophysics Data System (ADS)

    Stegmann, Patrick G.; Tang, Guanglin; Yang, Ping; Johnson, Benjamin T.

    2018-05-01

    A structural model is developed for the single-scattering properties of snow and graupel particles with a strongly heterogeneous morphology and an arbitrary variable mass density. This effort aims to provide a mechanism to consider particle mass density variation in the microwave scattering coefficients implemented in the Community Radiative Transfer Model (CRTM). The stochastic model applies a bicontinuous random medium algorithm to a simple base shape and uses the Finite-Difference-Time-Domain (FDTD) method to compute the single-scattering properties of the resulting complex morphology.

  15. Properties of a new small-world network with spatially biased random shortcuts

    NASA Astrophysics Data System (ADS)

    Matsuzawa, Ryo; Tanimoto, Jun; Fukuda, Eriko

    2017-11-01

    This paper introduces a small-world (SW) network whose shortcuts follow a power-law distance distribution, in contrast with conventional models, which use completely random (uniform) shortcuts. By incorporating spatial constraints, we analyze the divergence of the proposed model from conventional models in terms of fundamental network properties such as clustering coefficient, average path length, and degree distribution. We find that when the spatial constraint more strongly prohibits long shortcuts, the clustering coefficient is improved and the average path length increases. We also analyze the spatial prisoner's dilemma (SPD) games played on our new SW network in order to understand its dynamical characteristics. Depending on the basis graph, i.e., whether it is a one-dimensional ring or a two-dimensional lattice, and the parameter controlling the prohibition of long-distance shortcuts, the emergent results can vastly differ.
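
    A sketch of the construction under stated assumptions (ring-lattice base; shortcut length d drawn with probability proportional to d**-alpha): larger alpha suppresses long shortcuts, which should raise the clustering coefficient and lengthen paths, as reported.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(3)

        def biased_small_world(n=500, k=4, n_short=150, alpha=2.0):
            # Start from a pure ring lattice (Watts-Strogatz with p = 0), then
            # add shortcuts whose ring distance d has probability ~ d**-alpha.
            G = nx.watts_strogatz_graph(n, k, 0.0)
            d = np.arange(k // 2 + 1, n // 2 + 1)    # lengths beyond the lattice
            p = d.astype(float) ** -alpha
            p /= p.sum()
            for _ in range(n_short):
                u = int(rng.integers(n))
                dist = int(rng.choice(d, p=p))
                G.add_edge(u, int((u + dist * rng.choice([-1, 1])) % n))
            return G

        for alpha in (0.0, 2.0, 4.0):
            G = biased_small_world(alpha=alpha)
            print(alpha, nx.average_clustering(G),
                  nx.average_shortest_path_length(G))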

  16. Scavenging and recombination kinetics in a radiation spur: The successive ordered scavenging events

    NASA Astrophysics Data System (ADS)

    Al-Samra, Eyad H.; Green, Nicholas J. B.

    2018-03-01

    This study describes stochastic models to investigate the successive ordered scavenging events in a spur of four radicals, a model system based on a radiation spur. Three simulation models have been developed to obtain the probabilities of the ordered scavenging events: (i) a Monte Carlo random flight (RF) model, (ii) hybrid simulations in which the reaction rate coefficient is used to generate scavenging times for the radicals and (iii) the independent reaction times (IRT) method. The results of these simulations are found to be in agreement with one another. In addition, a detailed master equation treatment is also presented, and used to extract simulated rate coefficients of the ordered scavenging reactions from the RF simulations. These rate coefficients are transient, the rate coefficients obtained for subsequent reactions are effectively equal, and in reasonable agreement with the simple correction for competition effects that has recently been proposed.

  17. Interpretation of diffusion coefficients in nanostructured materials from random walk numerical simulation.

    PubMed

    Anta, Juan A; Mora-Seró, Iván; Dittrich, Thomas; Bisquert, Juan

    2008-08-14

    We make use of the numerical simulation random walk (RWNS) method to compute the "jump" diffusion coefficient of electrons in nanostructured materials via mean-square displacement. First, a summary of analytical results is given that relates the diffusion coefficient obtained from RWNS to those in the multiple-trapping (MT) and hopping models. Simulations are performed in a three-dimensional lattice of trap sites with energies distributed according to an exponential distribution and with a step-function distribution centered at the Fermi level. It is observed that once the stationary state is reached, the ensemble of particles follows Fermi-Dirac statistics with a well-defined Fermi level. In this stationary situation the diffusion coefficient obeys the theoretical predictions so that RWNS effectively reproduces the MT model. Mobilities can also be computed when an electrical bias is applied, and they are observed to comply with the Einstein relation when compared with steady-state diffusion coefficients. The evolution of the system towards the stationary situation is also studied. When the diffusion coefficients are monitored along simulation time a transition from anomalous to trap-limited transport is observed. The nature of this transition is discussed in terms of the evolution of the electron distribution and the Fermi level. All these results will facilitate the use of RW simulation and related methods to interpret steady-state as well as transient experimental techniques.
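
    A minimal random-walk illustration of the mean-square-displacement route to the diffusion coefficient (without the trap-energy distribution or Fermi-Dirac machinery): nearest-neighbour hops on a cubic lattice, with D recovered from <r^2(t)> = 6Dt. Lattice constant and hop time are hypothetical.

        import numpy as np

        rng = np.random.default_rng(4)
        n_walkers, n_steps, a, tau = 2000, 500, 1.0, 1.0   # lattice const, hop time

        # Nearest-neighbour hops on a cubic lattice: each step picks one of the
        # six axis directions at random.
        dirs = np.vstack([np.eye(3), -np.eye(3)]) * a
        steps = dirs[rng.integers(6, size=(n_walkers, n_steps))]
        pos = steps.cumsum(axis=1)

        # Ensemble mean-square displacement; the jump diffusion coefficient
        # follows from <r^2(t)> = 6 D t in three dimensions.
        msd = (pos ** 2).sum(axis=2).mean(axis=0)
        t = tau * np.arange(1, n_steps + 1)
        D = np.polyfit(t, msd, 1)[0] / 6.0
        print("D =", D, " theory: a^2 / (6 tau) =", a * a / (6 * tau))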

  18. Identifying Bearing Rotordynamic Coefficients using an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Miller, Brad A.; Howard, Samuel A.

    2008-01-01

    An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor-dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor-bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.

  19. Migration of lymphocytes on fibronectin-coated surfaces: temporal evolution of migratory parameters

    NASA Technical Reports Server (NTRS)

    Bergman, A. J.; Zygourakis, K.; McIntire, L. V. (Principal Investigator)

    1999-01-01

    Lymphocytes typically interact with implanted biomaterials through adsorbed exogenous proteins. To provide a more complete characterization of these interactions, analysis of lymphocyte migration on adsorbed extracellular matrix proteins must accompany the commonly performed adhesion studies. We report here a comparison of the migratory and adhesion behavior of Jurkat cells (a T lymphoblastoid cell line) on tissue culture treated and untreated polystyrene surfaces coated with various concentrations of fibronectin. The average speed of cell locomotion showed a biphasic response to substrate adhesiveness for cells migrating on untreated polystyrene and a monotonic decrease for cells migrating on tissue culture-treated polystyrene. A modified approach to the persistent random walk model was implemented to determine the time dependence of cell migration parameters. The random motility coefficient showed significant increases with time when cells migrated on tissue culture-treated polystyrene surfaces, while it remained relatively constant for experiments with untreated polystyrene plates. Finally, a cell migration computer model was developed to verify our modified persistent random walk analysis. Simulation results suggest that our experimental data were consistent with temporally increasing random motility coefficients.
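
    A sketch of the persistent-random-walk analysis under simple assumptions: trajectories are simulated with rotational diffusion so velocity correlations decay as exp(-t/P), and the speed S, persistence time P, and random motility coefficient are recovered by fitting the standard 2-D MSD formula. All parameter values are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(5)
        S, P, dt, n, walkers = 20.0, 4.0, 0.05, 2000, 300   # speed, persistence

        # 2-D persistent random walk: the heading undergoes rotational diffusion
        # with D_theta = 1/P, giving velocity correlations S^2 * exp(-t/P).
        dtheta = rng.normal(0, np.sqrt(2 * dt / P), (walkers, n))
        theta = dtheta.cumsum(axis=1)
        pos = np.stack([np.cos(theta), np.sin(theta)], -1).cumsum(axis=1) * S * dt
        msd = (pos ** 2).sum(-1).mean(0)
        t = dt * np.arange(1, n + 1)

        def msd_prw(t, S, P):
            # Persistent-random-walk MSD for an exponential velocity correlation.
            return 2 * S ** 2 * P * (t - P * (1 - np.exp(-t / P)))

        (S_hat, P_hat), _ = curve_fit(msd_prw, t, msd, p0=(10.0, 1.0))
        mu = S_hat ** 2 * P_hat / 2     # random motility coefficient (2-D)
        print(S_hat, P_hat, mu)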

  20. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    ERIC Educational Resources Information Center

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  1. Application of random effects to the study of resource selection by animals

    USGS Publications Warehouse

    Gillies, C.S.; Hebblewhite, M.; Nielsen, S.E.; Krawchuk, M.A.; Aldridge, Cameron L.; Frair, J.L.; Saher, D.J.; Stevens, C.E.; Jerde, C.L.

    2006-01-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.

  2. Application of random effects to the study of resource selection by animals.

    PubMed

    Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L

    2006-07-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.

  3. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    PubMed

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
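
    The flavor of the two-stage idea on a hypothetical random-coefficient decay model x' = -theta_i * x (far simpler than the paper's maximum pseudo-likelihood estimator): stage one smooths each subject's data to estimate the state and its derivative without solving the ODE; stage two estimates theta_i by least squares, with population parameters from empirical moments.

        import numpy as np

        rng = np.random.default_rng(6)
        t = np.linspace(0, 5, 40)
        theta_pop, tau = 0.8, 0.15                 # population mean and SD

        theta_hat = []
        for _ in range(50):                        # 50 hypothetical subjects
            theta_i = rng.normal(theta_pop, tau)
            y = 10 * np.exp(-theta_i * t) + rng.normal(0, 0.05, t.size)

            # Stage 1: smooth the trajectory (a quintic polynomial here) to get
            # estimates of x(t) and x'(t) without repeatedly solving the ODE.
            c = np.polyfit(t, y, 5)
            x = np.polyval(c, t)
            dx = np.polyval(np.polyder(c), t)

            # Stage 2: least squares for x' = -theta * x, subject by subject.
            theta_hat.append(-np.sum(dx * x) / np.sum(x * x))

        theta_hat = np.asarray(theta_hat)
        print("population mean, SD:", theta_hat.mean(), theta_hat.std(ddof=1))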

  4. Determination of Nonlinear Stiffness Coefficients for Finite Element Models with Application to the Random Vibration Problem

    NASA Technical Reports Server (NTRS)

    Muravyov, Alexander A.

    1999-01-01

    In this paper, a method for obtaining nonlinear stiffness coefficients in modal coordinates for geometrically nonlinear finite-element models is developed. The method requires application of a finite-element program with a geometrically nonlinear static capability. The MSC/NASTRAN code is employed for this purpose. The equations of motion of an MDOF system are formulated in modal coordinates. A set of linear eigenvectors is used to approximate the solution of the nonlinear problem. The random vibration problem of the MDOF nonlinear system is then considered. The solutions obtained by application of two different versions of a stochastic linearization technique are compared with linear and exact (analytical) solutions in terms of root-mean-square (RMS) displacements and strains for a beam structure.

  5. Investment portfolio of a pension fund: Stochastic model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosch-Princep, M.; Fontanals-Albiol, H.

    1994-12-31

    This paper presents a stochastic programming model that aims at obtaining the optimal investment portfolio of a pension fund. The model has been designed bearing in mind the liabilities of the fund to its members. The essential characteristic of the objective function and the constraints is the randomness of the coefficients and of the right-hand side of the constraints, so techniques of stochastic mathematical programming are necessary to obtain information about the amount of money that should be assigned to each sort of investment. It is important to know the risk attitude of the decision maker. The model incorporates the relation between the different coefficients of the objective function and constraints of each period of the temporal horizon, through linear and discrete random processes. Likewise, it includes hypotheses related to Spanish law concerning pension funds.

  6. Progress in radar snow research. [Brookings, South Dakota]

    NASA Technical Reports Server (NTRS)

    Stiles, W. H.; Ulaby, F. T.; Fung, A. K.; Aslam, A.

    1981-01-01

    Multifrequency measurements of the radar backscatter from snow-covered terrain were made at several sites in Brookings, South Dakota, during March 1979. The data are used to examine the response of the scattering coefficient to the following parameters: (1) snow surface roughness, (2) snow liquid water content, and (3) snow water equivalent. The results indicate that the scattering coefficient is insensitive to snow surface roughness if the snow is dry. For wet snow, however, surface roughness can have a strong influence on the magnitude of the scattering coefficient. These observations confirm the results predicted by a theoretical model that describes the snow as a volume of Rayleigh scatterers bounded by a Gaussian random surface. In addition, empirical models were developed to relate the scattering coefficient to snow liquid water content, and the dependence of the scattering coefficient on water equivalent was evaluated for both wet and dry snow conditions.

  7. Physical interrelation between Fokker-Planck and random walk models with application to Coulomb interactions.

    NASA Technical Reports Server (NTRS)

    Englert, G. W.

    1971-01-01

    A model of the random walk is formulated to allow a simple computing procedure to replace the difficult problem of solving the Fokker-Planck equation. The step sizes and probabilities of taking steps in the various directions are expressed in terms of Fokker-Planck coefficients. Application is made to many-particle systems with Coulomb interactions. The relaxation of a highly peaked velocity distribution of particles to equilibrium conditions is illustrated.
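
    A minimal sketch of the correspondence for an Ornstein-Uhlenbeck velocity process: the Fokker-Planck drift A(v) = -gamma*v and diffusion B = q define the mean and random parts of each walk step, and a peaked velocity distribution relaxes to the N(0, q/gamma) equilibrium. The coefficients are illustrative, not the Coulomb-interaction ones.

        import numpy as np

        rng = np.random.default_rng(7)
        gamma, q, dt, n = 1.0, 1.0, 0.01, 2000

        # Random-walk step: mean displacement A*dt plus a +/- step of size
        # sqrt(2*B*dt), matching the Fokker-Planck drift and diffusion.
        v = np.full(20000, 5.0)          # highly peaked initial velocities
        for _ in range(n):
            v += -gamma * v * dt + np.sqrt(2 * q * dt) * rng.choice([-1.0, 1.0], v.size)

        # The stationary distribution of this process is N(0, q/gamma).
        print("mean, var:", v.mean(), v.var(), " expected var:", q / gamma)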

  8. Random-growth urban model with geographical fitness

    NASA Astrophysics Data System (ADS)

    Kii, Masanobu; Akimoto, Keigo; Doi, Kenji

    2012-12-01

    This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
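
    A toy rendition of the growth mechanism, with made-up entry rate, fitness distribution, and growth increment: randomness enters only when a city is created, growth is deterministic and fitness-weighted afterwards, and negative fitness simply freezes a city's growth in this sketch.

        import numpy as np

        rng = np.random.default_rng(12)
        pops = np.array([10.0])
        fitness = np.array([rng.normal(1.0, 0.3)])

        for _ in range(20000):
            if rng.uniform() < 0.01:                 # randomness only at birth
                pops = np.append(pops, 10.0)
                fitness = np.append(fitness, rng.normal(1.0, 0.3))
            # Deterministic growth, allocated by population times fitness.
            w = pops * np.maximum(fitness, 0.0)
            if w.sum() > 0:
                pops = pops + 100.0 * w / w.sum()

        # Rank-size data: a log-log slope near -1 indicates a Zipf/Pareto tail.
        sizes = np.sort(pops)[::-1]
        rank = np.arange(1, sizes.size + 1)
        slope = np.polyfit(np.log(rank), np.log(sizes), 1)[0]
        print("cities:", sizes.size, "rank-size slope:", round(slope, 2))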

  9. Stochastic field-line wandering in magnetic turbulence with shear. I. Quasi-linear theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalchi, A.; Negrea, M.; Petrisor, I.

    2016-07-15

    We investigate the random walk of magnetic field lines in magnetic turbulence with shear. In the first part of the series, we develop a quasi-linear theory in order to compute the diffusion coefficient of magnetic field lines. We derive general formulas for the diffusion coefficients in the different directions of space. We emphasize that quasi-linear theory is expected to be valid only if the so-called Kubo number is small. We consider two turbulence models as examples, namely, a noisy slab model as well as a Gaussian decorrelation model. For both models we compute the field line diffusion coefficients and show how they depend on the aforementioned Kubo number as well as a shear parameter. It is demonstrated that the shear effect reduces all field line diffusion coefficients.

  10. Remote sensing of Earth terrain

    NASA Technical Reports Server (NTRS)

    Kong, Jin Au

    1987-01-01

    Earth terrain covers were modeled as random media characterized by different dielectric constants and correlation functions. In order to model sea ice with brine inclusions and vegetation with row structures, the random medium is assumed to be anisotropic. A three-layer model is used to simulate a vegetation field or a snow-covered ice field, with the top layer being snow or leaves, the middle layer being ice or trunks, and the bottom layer being sea water or ground. The strong fluctuation theory with the distorted Born approximation is applied to calculate the radar backscattering coefficients.

  11. Passive microwave remote sensing of an anisotropic random-medium layer

    NASA Technical Reports Server (NTRS)

    Lee, J. K.; Kong, J. A.

    1985-01-01

    The principle of reciprocity is invoked to calculate the brightness temperatures for passive microwave remote sensing of a two-layer anisotropic random medium. The bistatic scattering coefficients are first computed with the Born approximation and then integrated over the upper hemisphere and subtracted from unity, in order to obtain the emissivity of the random-medium layer. The theoretical results are illustrated by plotting the emissivities as functions of viewing angles and polarizations. They are used to interpret remote sensing data obtained from vegetation canopies where the anisotropic random-medium model applies. Field measurements with corn stalks arranged in various configurations with preferred azimuthal directions are successfully interpreted with this model.

  12. Method of model reduction and multifidelity models for solute transport in random layered porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Tartakovsky, Alexandre M.

    This work presents a hierarchical model for solute transport in bounded layered porous media with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the hierarchical model, we represent (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. We observe that velocity fluctuations enhance dispersion in a non-monotonic fashion: the dispersion initially increases with correlation length λ, reaches a maximum, and decreases to zero at infinity. Maximum enhancement can be obtained at a correlation length of about 0.25 times the size of the porous medium perpendicular to the flow.

  13. Plasma fluctuations as Markovian noise.

    PubMed

    Li, B; Hazeltine, R D; Gentle, K W

    2007-12-01

    Noise theory is used to study the correlations of stationary Markovian fluctuations that are homogeneous and isotropic in space. The relaxation of the fluctuations is modeled by the diffusion equation. The spatial correlations of random fluctuations are modeled by the exponential decay. Based on these models, the temporal correlations of random fluctuations, such as the correlation function and the power spectrum, are calculated. We find that the diffusion process can give rise to the decay of the correlation function and a broad frequency spectrum of random fluctuations. We also find that the transport coefficients may be estimated by the correlation length and the correlation time. The theoretical results are compared with the observed plasma density fluctuations from the tokamak and helimak experiments.

  14. Superimposed Code Theoretic Analysis of Deoxyribonucleic Acid (DNA) Codes and DNA Computing

    DTIC Science & Technology

    2010-01-01

    partitioned by font type) of sequences are allowed to be in each position (e.g., Arial = position 0, Comic = position 1, etc.) and within each collection...movement was modeled by a three-dimensional Brownian-motion random walk. The one-dimensional diffusion coefficient D for the ellipsoid shape with 3...temperature, kB is Boltzmann's constant, and η is the viscosity of the medium. The random walk motion is modeled by assuming the oligo is on a three...

  15. Analyzing degradation data with a random effects spline regression model

    DOE PAGES

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    2017-03-17

    This study proposes using a random effects spline regression model to analyze degradation data. Spline regression avoids having to specify a parametric function for the true degradation of an item. A distribution for the spline regression coefficients captures the variation of the true degradation curves from item to item. We illustrate the proposed methodology with a real example using a Bayesian approach. The Bayesian approach allows prediction of the degradation of a population over time, and estimation of reliability is straightforward.
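
    A sketch of the data-generating view under stated assumptions: item-specific degradation curves are cubic B-splines whose coefficients vary around a population mean, and a naive two-step estimate (per-item least squares, then empirical moments) stands in for the paper's Bayesian fit. Requires SciPy >= 1.8 for BSpline.design_matrix; all numbers are hypothetical.

        import numpy as np
        from scipy.interpolate import BSpline

        rng = np.random.default_rng(8)
        t = np.linspace(0.1, 9.9, 25)                       # inspection times
        knots = np.array([0, 0, 0, 0, 3.3, 6.6, 10, 10, 10, 10], float)
        B = BSpline.design_matrix(t, knots, 3).toarray()    # 25 x 6 cubic basis

        # Hypothetical population: mean spline coefficients plus item-to-item
        # random deviations, observed with measurement noise.
        beta = np.array([0.0, 0.5, 1.5, 3.0, 5.0, 8.0])     # mean degradation
        items = [B @ (beta + rng.normal(0, 0.3, beta.size))
                 + rng.normal(0, 0.05, t.size) for _ in range(12)]

        # Naive two-step stand-in for the Bayesian fit: per-item least squares,
        # then the empirical mean and covariance of the random coefficients.
        coef = np.array([np.linalg.lstsq(B, y, rcond=None)[0] for y in items])
        print(coef.mean(axis=0).round(2))
        print(np.cov(coef.T).round(3))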

  16. Analyzing degradation data with a random effects spline regression model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    This study proposes using a random effects spline regression model to analyze degradation data. Spline regression avoids having to specify a parametric function for the true degradation of an item. A distribution for the spline regression coefficients captures the variation of the true degradation curves from item to item. We illustrate the proposed methodology with a real example using a Bayesian approach. The Bayesian approach allows prediction of the degradation of a population over time, and estimation of reliability is straightforward.

  17. A model for nematode locomotion in soil

    USGS Publications Warehouse

    Hunt, H. William; Wall, Diana H.; DeCrappeo, Nicole; Brenner, John S.

    2001-01-01

    Locomotion of nematodes in soil is important for both practical and theoretical reasons. We constructed a model for rate of locomotion. The first model component is a simple simulation of nematode movement among finite cells by both random and directed behaviours. Optimisation procedures were used to fit the simulation output to data from published experiments on movement along columns of soil or washed sand, and thus to estimate the values of the model's movement coefficients. The coefficients then provided an objective means to compare rates of locomotion among studies done under different experimental conditions. The second component of the model is an equation to predict the movement coefficients as a function of controlling factors that have been addressed experimentally: soil texture, bulk density, water potential, temperature, trophic group of nematode, presence of an attractant or physical gradient and the duration of the experiment. Parameters of the equation were estimated by optimisation to achieve a good fit to the estimated movement coefficients. Bulk density, which has been reported in a minority of published studies, is predicted to have an important effect on rate of locomotion, at least in fine-textured soils. Soil sieving, which appears to be a universal practice in laboratory studies of nematode movement, is predicted to negatively affect locomotion. Slower movement in finer textured soils would be expected to increase isolation among local populations, and thus to promote species richness. Future additions to the model that might improve its utility include representing heterogeneity within populations in rate of movement, development of gradients of chemical attractants, trade-offs between random and directed components of movement, species differences in optimal temperature and water potential, and interactions among factors controlling locomotion.
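
    A minimal cell-hopping sketch of the first model component, with made-up coefficients: at each step a worm moves with probability p_move, and a moving worm steps toward the attractant end of the column with the combined directed-plus-random probability.

        import numpy as np

        rng = np.random.default_rng(13)
        n_cells, n_worms, n_steps = 30, 5000, 200
        p_move, p_dir = 0.4, 0.6     # movement coefficient, directed-bias share

        # Nematodes move among finite cells along a soil column; directed moves
        # always go toward the attractant, random moves go either way.
        pos = np.zeros(n_worms, dtype=int)
        for _ in range(n_steps):
            moving = rng.uniform(size=n_worms) < p_move
            toward = rng.uniform(size=n_worms) < (p_dir + (1 - p_dir) * 0.5)
            step = np.where(toward, 1, -1)
            pos = np.clip(pos + moving * step, 0, n_cells - 1)

        # Distribution along the column after the experiment's duration; the
        # movement coefficients would be tuned so this matches observed counts.
        counts = np.bincount(pos, minlength=n_cells)
        print(counts)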

  18. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    PubMed

    Wang, Wei; Griswold, Michael E

    2016-11-30

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Analyzing crash frequency in freeway tunnels: A correlated random parameters approach.

    PubMed

    Hou, Qinzhong; Tarko, Andrew P; Meng, Xianghai

    2018-02-01

    The majority of past road safety studies focused on open road segments while only a few focused on tunnels. Moreover, the past tunnel studies produced some inconsistent results about the safety effects of the traffic patterns, the tunnel design, and the pavement conditions. The effects of these conditions therefore remain unknown, especially for freeway tunnels in China. The study presented in this paper investigated the safety effects of these various factors utilizing a four-year period (2009-2012) of data as well as three models: 1) a random effects negative binomial model (RENB), 2) an uncorrelated random parameters negative binomial model (URPNB), and 3) a correlated random parameters negative binomial model (CRPNB). Of these three, the results showed that the CRPNB model provided better goodness-of-fit and offered more insights into the factors that contribute to tunnel safety. The CRPNB was not only able to allocate the part of the otherwise unobserved heterogeneity to the individual model parameters but also was able to estimate the cross-correlations between these parameters. Furthermore, the study results showed that traffic volume, tunnel length, proportion of heavy trucks, curvature, and pavement rutting were associated with higher frequencies of traffic crashes, while the distance to the tunnel wall, distance to the adjacent tunnel, distress ratio, International Roughness Index (IRI), and friction coefficient were associated with lower crash frequencies. In addition, the effects of the heterogeneity of the proportion of heavy trucks, the curvature, the rutting depth, and the friction coefficient were identified and their inter-correlations were analyzed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Data-driven analysis for the temperature and momentum dependence of the heavy-quark diffusion coefficient in relativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Xu, Yingru; Bernhard, Jonah E.; Bass, Steffen A.; Nahrgang, Marlene; Cao, Shanshan

    2018-01-01

    By applying a Bayesian model-to-data analysis, we estimate the temperature and momentum dependence of the heavy quark diffusion coefficient in an improved Langevin framework. The posterior range of the diffusion coefficient is obtained by performing a Markov chain Monte Carlo random walk and calibrating on the experimental data of D-meson RAA and v2 in three different collision systems at the Relativistic Heavy-Ion Collider (RHIC) and the Large Hadron Collider (LHC): Au-Au collisions at 200 GeV and Pb-Pb collisions at 2.76 and 5.02 TeV. The spatial diffusion coefficient is found to be consistent with lattice QCD calculations and comparable with other models' estimations. We demonstrate the capability of our improved Langevin model to simultaneously describe the RAA and v2 at both RHIC and LHC energies, as well as higher-order flow coefficients such as the D-meson v3. We show that by applying a Bayesian analysis, we are able to quantitatively and systematically study heavy flavor dynamics in heavy-ion collisions.
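
    A toy random-walk Metropolis calibration, standing in for the paper's full Bayesian model-to-data pipeline: a simple linear pseudo-model replaces the Langevin transport model, and the posterior over its two parameters is sampled by accept/reject on the log-posterior ratio. Data, model, and priors are all hypothetical.

        import numpy as np

        rng = np.random.default_rng(9)

        # Hypothetical observables and a toy model linear in the parameters
        # (a, b), standing in for the diffusion-coefficient parametrization.
        x = np.linspace(0.1, 1.0, 10)
        y_obs = 0.4 + 1.2 * x + rng.normal(0, 0.05, x.size)
        sigma = 0.05

        def log_post(p):
            a, b = p
            if not (0 < a < 5 and 0 < b < 5):       # flat prior on a box
                return -np.inf
            resid = y_obs - (a + b * x)
            return -0.5 * np.sum((resid / sigma) ** 2)

        # Random-walk Metropolis: Gaussian proposal, accept with probability
        # min(1, posterior ratio).
        chain, p = [], np.array([1.0, 1.0])
        lp = log_post(p)
        for _ in range(20000):
            q = p + rng.normal(0, 0.05, 2)
            lq = log_post(q)
            if np.log(rng.uniform()) < lq - lp:
                p, lp = q, lq
            chain.append(p.copy())
        chain = np.array(chain)[5000:]              # drop burn-in
        print("posterior mean:", chain.mean(0))
        print("90% interval:", np.percentile(chain, [5, 95], axis=0))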

  1. How large are the consequences of covariate imbalance in cluster randomized trials: a simulation study with a continuous outcome and a binary covariate at the cluster level.

    PubMed

    Moerbeek, Mirjam; van Schie, Sander

    2016-07-11

    The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured, and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
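
    A small simulation in the spirit of the study, with hypothetical numbers: a binary cluster-level covariate is deliberately imbalanced across arms, and adjusted versus unadjusted linear mixed models (statsmodels MixedLM) are compared on the recovered treatment effect.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(10)
        n_clusters, m, icc = 10, 50, 0.05                    # a small trial
        tau2 = icc / (1 - icc)                               # residual var = 1

        rows = []
        for c in range(n_clusters):
            treat = int(c < n_clusters // 2)
            x = int(rng.uniform() < (0.7 if treat else 0.3)) # imbalanced covariate
            b = rng.normal(0, np.sqrt(tau2))                 # cluster effect
            for _ in range(m):
                y = 0.3 * treat + 0.5 * x + b + rng.normal()
                rows.append((c, treat, x, y))
        df = pd.DataFrame(rows, columns=["cluster", "treat", "x", "y"])

        # The adjusted model should recover the 0.3 treatment effect; omitting
        # x pushes the covariate imbalance into the treatment coefficient.
        for formula in ("y ~ treat + x", "y ~ treat"):
            fit = smf.mixedlm(formula, df, groups="cluster").fit()
            print(formula, "->", fit.params["treat"].round(3))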

  2. [How to fit and interpret multilevel models using SPSS].

    PubMed

    Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael

    2007-05-01

    Hierarchical or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both from the individual level and from the group level. In this work, the multilevel models most commonly discussed in the statistical literature are described, explaining how to fit these models using the SPSS program (version 11 or later) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained, trying to make them understandable to researchers in health and behaviour sciences.

  3. Rumor Diffusion in an Interests-Based Dynamic Social Network

    PubMed Central

    Mao, Xinjun; Guessoum, Zahia; Zhou, Huiping

    2013-01-01

    To study rumor diffusion in social friend networks, an interests-based dynamic friend network is proposed, which has the characteristics of clustering and community, together with a diffusion model. With this friend network and rumor diffusion model, based on the zombie-city model, simulation experiments were conducted to analyze the characteristics of rumor diffusion in social friend networks. The results show some interesting observations: (1) positive information may evolve into a rumor through the diffusion process, as people modify the information by word of mouth; (2) with the same average degree, a random social network has a smaller clustering coefficient and is more favorable for rumor diffusion than the dynamic friend network; (3) a rumor spreads more widely in a social network with a smaller global clustering coefficient than in one with a larger global clustering coefficient; and (4) a network with a smaller clustering coefficient has a larger efficiency. PMID:24453911

  4. Rumor diffusion in an interests-based dynamic social network.

    PubMed

    Tang, Mingsheng; Mao, Xinjun; Guessoum, Zahia; Zhou, Huiping

    2013-01-01

    To study rumor diffusion in social friend networks, an interests-based dynamic friend network is proposed, which has the characteristics of clustering and community, together with a diffusion model. With this friend network and rumor diffusion model, based on the zombie-city model, simulation experiments were conducted to analyze the characteristics of rumor diffusion in social friend networks. The results show some interesting observations: (1) positive information may evolve into a rumor through the diffusion process, as people modify the information by word of mouth; (2) with the same average degree, a random social network has a smaller clustering coefficient and is more favorable for rumor diffusion than the dynamic friend network; (3) a rumor spreads more widely in a social network with a smaller global clustering coefficient than in one with a larger global clustering coefficient; and (4) a network with a smaller clustering coefficient has a larger efficiency.

  5. Monte Carlo calibration of avalanches described as Coulomb fluid flows.

    PubMed

    Ancey, Christophe

    2005-07-15

    The idea that snow avalanches might behave as granular flows, and thus be described as Coulomb fluid flows, came up very early in the scientific study of avalanches, but only recently has field evidence demonstrated the reliability of this idea. This paper aims to specify the bulk frictional behaviour of snow avalanches by seeking a universal friction law. Since the bulk friction coefficient cannot be measured directly in the field, it must be calibrated by adjusting the model outputs to closely match the recorded data. Field data are readily available but are of poor quality and accuracy. We used Bayesian inference techniques to specify the model uncertainty relative to data uncertainty and to robustly and efficiently solve the inverse problem. A sample of 173 events taken from seven paths in the French Alps was used. The first analysis showed that the friction coefficient behaved as a random variable with a smooth and bell-shaped empirical distribution function. Evidence was provided that the friction coefficient varied with the avalanche volume, but any attempt to adjust a one-to-one relationship relating friction to volume produced residual errors that could be as large as three times the maximum uncertainty of the field data. A tentative universal friction law is proposed: the friction coefficient is a random variable, the distribution of which can be approximated by a normal distribution with a volume-dependent mean.

  6. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    PubMed

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from an MGLMM provide a good approximation and visual representation of these latent association analyses, using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing interrelated outcomes with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage yields mean and covariance parameter estimates that differ from the maximum likelihood estimates generated by an MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus, if computable, scatterplots of the conditionally independent empirical Bayes predictors from an MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
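
    A sketch of the separate-models second stage the authors caution about, under simple assumptions: two outcomes share correlated random intercepts, each is fit by its own random-intercept model, and the empirical Bayes (BLUP) intercepts are correlated afterwards. Shrinkage makes the second-stage correlation an approximation to the latent association, which is the abstract's point.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(11)
        subjects, visits = 80, 6
        b = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], subjects)

        rows = []
        for i in range(subjects):
            for t in range(visits):
                rows.append((i, t,
                             1.0 + b[i, 0] + 0.2 * t + rng.normal(0, 0.5),
                             -0.5 + b[i, 1] + 0.1 * t + rng.normal(0, 0.5)))
        df = pd.DataFrame(rows, columns=["id", "t", "y1", "y2"])

        # Separate random-intercept fits per outcome, then a second-stage
        # correlation of the empirical Bayes intercept predictions.
        eb = []
        for y in ("y1", "y2"):
            fit = smf.mixedlm(f"{y} ~ t", df, groups="id").fit()
            eb.append([fit.random_effects[i].iloc[0] for i in range(subjects)])
        print("EB correlation:", np.corrcoef(eb)[0, 1], "(true latent: 0.6)")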

  7. Genetic parameters for growth characteristics of free-range chickens under univariate random regression models.

    PubMed

    Rovadoscki, Gregori A; Petrini, Juliana; Ramirez-Diaz, Johanna; Pertile, Simone F N; Pertille, Fábio; Salvian, Mayara; Iung, Laiza H S; Rodriguez, Mary Ana P; Zampar, Aline; Gaya, Leila G; Carvalho, Rachel S B; Coelho, Antonio A D; Savino, Vicente J M; Coutinho, Luiz L; Mourão, Gerson B

    2016-09-01

    Repeated measures from the same individual have been analyzed by using repeatability and finite dimension models under univariate or multivariate analyses. However, in the last decade, the use of random regression models for genetic studies with longitudinal data has become more common. Thus, the aim of this research was to estimate genetic parameters for body weight of four experimental chicken lines by using univariate random regression models. Body weight data from hatching to 84 days of age (n = 34,730) from four experimental free-range chicken lines (7P, Caipirão da ESALQ, Caipirinha da ESALQ and Carijó Barbado) were used. The analysis model included the fixed effects of contemporary group (gender and rearing system), fixed regression coefficients for age at measurement, and random regression coefficients for permanent environmental effects and additive genetic effects. Heterogeneous variances for residual effects were considered, and one residual variance was assigned for each of six subclasses of age at measurement. Random regression curves were modeled by using Legendre polynomials of the second and third orders, with the best model chosen based on the Akaike Information Criterion, Bayesian Information Criterion, and restricted maximum likelihood. Multivariate analyses under the same animal mixed model were also performed for the validation of the random regression models. The Legendre polynomials of second order were better for describing the growth curves of the lines studied. Moderate to high heritabilities (h² = 0.15 to 0.98) were estimated for body weight between one and 84 days of age, suggesting that body weight at any age can be used as a selection criterion. Genetic correlations among body weight records obtained through multivariate analyses ranged from 0.18 to 0.96, 0.12 to 0.89, 0.06 to 0.96, and 0.28 to 0.96 in the 7P, Caipirão da ESALQ, Caipirinha da ESALQ, and Carijó Barbado chicken lines, respectively. Results indicate that genetic gain for body weight can be achieved by selection, and that body weight at 42 days of age can be maintained as a selection criterion. © 2016 Poultry Science Association Inc.

  8. The relationship between multilevel models and non-parametric multilevel mixture models: Discrete approximation of intraclass correlation, random coefficient distributions, and residual heteroscedasticity.

    PubMed

    Rights, Jason D; Sterba, Sonya K

    2016-11-01

    Multilevel data structures are common in the social sciences. Often, such nested data are analysed with multilevel models (MLMs) in which heterogeneity between clusters is modelled by continuously distributed random intercepts and/or slopes. Alternatively, the non-parametric multilevel regression mixture model (NPMM) can accommodate the same nested data structures through discrete latent class variation. The purpose of this article is to delineate analytic relationships between NPMM and MLM parameters that are useful for understanding the indirect interpretation of the NPMM as a non-parametric approximation of the MLM, with relaxed distributional assumptions. We define how seven standard and non-standard MLM specifications can be indirectly approximated by particular NPMM specifications. We provide formulas showing how the NPMM can serve as an approximation of the MLM in terms of intraclass correlation, random coefficient means and (co)variances, heteroscedasticity of residuals at level 1, and heteroscedasticity of residuals at level 2. Further, we discuss how these relationships can be useful in practice. The specific relationships are illustrated with simulated graphical demonstrations, and direct and indirect interpretations of NPMM classes are contrasted. We provide an R function to aid in implementing and visualizing an indirect interpretation of NPMM classes. An empirical example is presented and future directions are discussed. © 2016 The British Psychological Society.

  9. Note on the coefficient of variations of neuronal spike trains.

    PubMed

    Lengler, Johannes; Steger, Angelika

    2017-08-01

    It is known that many neurons in the brain show spike trains with a coefficient of variation (CV) of the interspike times of approximately 1, thus resembling the properties of Poisson spike trains. Computational studies have been able to reproduce this phenomenon. However, the underlying models were too complex to be examined analytically. In this paper, we offer a simple model that shows the same effect but is accessible to an analytic treatment. The model is a random walk model with a reflecting barrier; we give explicit formulas for the CV in the regime of excess inhibition. We also analyze the effect of probabilistic synapses in our model and show that it resembles previous findings that were obtained by simulation.
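
    A minimal simulation in the spirit of this model makes the reported effect concrete. The sketch below (Python; the barrier position, threshold and excitation probability are our assumptions, not the paper's parameters) drives a reflecting random walk to a firing threshold and computes the CV of the interspike intervals; in the excess-inhibition regime (excitation probability below 0.5) the CV comes out close to 1.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def isi_cv(p_exc=0.45, threshold=10, n_spikes=2000):
        """CV of interspike intervals for a reflecting-barrier random walk.

        The membrane variable takes +1 steps (excitation) with probability
        p_exc and -1 steps (inhibition) otherwise, reflects at 0, and fires
        on reaching `threshold`, after which it resets to 0.
        """
        isis = []
        v = t = 0
        while len(isis) < n_spikes:
            v = max(v + (1 if rng.random() < p_exc else -1), 0)  # reflecting barrier
            t += 1
            if v >= threshold:                                   # spike: record and reset
                isis.append(t)
                v = t = 0
        isis = np.asarray(isis, dtype=float)
        return isis.std() / isis.mean()

    print(isi_cv())  # close to 1 in the excess-inhibition regime
    ```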

  10. Bayesian Estimation of Random Coefficient Dynamic Factor Models

    ERIC Educational Resources Information Center

    Song, Hairong; Ferrer, Emilio

    2012-01-01

    Dynamic factor models (DFMs) have typically been applied to multivariate time series data collected from a single unit of study, such as a single individual or dyad. The goal of DFMs application is to capture dynamics of multivariate systems. When multiple units are available, however, DFMs are not suited to capture variations in dynamics across…

  11. Evolution of the concentration PDF in random environments modeled by global random walk

    NASA Astrophysics Data System (ADS)

    Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter

    2013-04-01

    The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and speeds up the computation by orders of magnitude. The approach is illustrated for the transport of passive scalars in heterogeneous aquifers, with hydraulic conductivity modeled as a random field.
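
    The core computational idea is easy to illustrate in one dimension. The sketch below (Python; a toy diffusion on a lattice with assumed parameters, not the authors' full physical-plus-composition-space PDF solver) spreads all particles sitting at a lattice site in a single binomial draw, so the cost per step scales with the number of occupied sites rather than with the number of particles.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L, T, N = 401, 400, 10**6            # lattice sites, time steps, particles
    n = np.zeros(L, dtype=np.int64)
    n[L // 2] = N                        # all particles start at the centre site

    for _ in range(T):
        new = np.zeros_like(n)
        for i in np.nonzero(n)[0]:
            right = rng.binomial(n[i], 0.5)      # move the whole group in one draw
            new[min(i + 1, L - 1)] += right      # boundary guard, practically never hit here
            new[max(i - 1, 0)] += n[i] - right
        n = new

    pdf = n / N                          # PDF estimate at the lattice sites
    print(pdf[L // 2 - 2 : L // 2 + 3])
    ```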

  12. Representation of limb kinematics in Purkinje cell simple spike discharge is conserved across multiple tasks

    PubMed Central

    Hewitt, Angela L.; Popa, Laurentiu S.; Pasalar, Siavash; Hendrix, Claudia M.

    2011-01-01

    Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of Radj2), followed by position (28 ± 24% of Radj2) and speed (11 ± 19% of Radj2). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower Radj2 values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics. PMID:21795616

  13. Adaptive threshold shearlet transform for surface microseismic data denoising

    NASA Astrophysics Data System (ADS)

    Tang, Na; Zhao, Xian; Li, Yue; Zhu, Dan

    2018-06-01

    Random noise suppression plays an important role in microseismic data processing. Microseismic data are often corrupted by strong random noise, which directly hampers the identification and location of microseismic events. The shearlet transform is a new multiscale transform that can effectively process low-magnitude microseismic data. In the shearlet domain, because valid signals and random noise have different distributions, shearlet coefficients can be shrunk by thresholding; the choice of threshold is therefore vital to suppressing random noise. Conventional threshold denoising algorithms usually apply the same threshold to all coefficients, which leads either to inefficient noise suppression or to loss of valid signal. To solve these problems, we propose the adaptive threshold shearlet transform (ATST) for surface microseismic data denoising. In the new algorithm, we first calculate a fundamental threshold for each directional subband. Within each subband, an adjustment factor is obtained from each coefficient and its neighboring coefficients, adaptively regulating the fundamental threshold for each shearlet coefficient. Finally, the adjusted threshold is applied to the corresponding coefficient. Denoising experiments on synthetic records and field data illustrate that the proposed method performs better in suppressing random noise and preserving valid signal than the conventional shearlet denoising method.
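
    The general structure of such a scheme is easy to sketch. The Python fragment below is illustrative only (the paper's specific adjustment-factor formula is not reproduced): it shrinks the coefficients of one directional subband with a threshold that is relaxed where the neighbourhood energy suggests signal and kept high where it suggests noise.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def adaptive_soft_threshold(band, sigma, k=3.0, win=3):
        """Soft-threshold one subband with a locally adapted threshold.

        band  : 2-D array of directional (e.g. shearlet) coefficients
        sigma : noise standard deviation estimated for this subband
        k     : multiplier giving the fundamental threshold k * sigma
        win   : neighbourhood window used for the adjustment factor
        """
        base = k * sigma                                   # fundamental threshold
        local_rms = np.sqrt(uniform_filter(band ** 2, size=win))
        factor = sigma / (sigma + local_rms)               # in (0, 1]: smaller where signal is strong
        thr = base * factor                                # per-coefficient adjusted threshold
        return np.sign(band) * np.maximum(np.abs(band) - thr, 0.0)
    ```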

  14. Microwave scattering and emission from a half-space anisotropic random medium

    NASA Astrophysics Data System (ADS)

    Mudaliar, Saba; Lee, Jay Kyoon

    1990-12-01

    This paper is a sequel to an earlier paper (Lee and Mudaliar, 1988) in which the backscattering coefficients of a half-space anisotropic random medium were obtained. Here the bistatic scattering coefficients are calculated by solving the modified radiative transfer equations under a first-order approximation. The effects of multiple scattering on the results are observed. Emissivities are calculated and compared with those obtained using the Born approximation (single scattering). Several interesting properties of the model are illustrated with numerical examples. Finally, as an application, the theory is used to interpret passive remote sensing data of multiyear sea ice in the microwave frequency range. Close agreement between theoretical predictions and the measured data is found.

  15. On S.N. Bernstein's derivation of Mendel's Law and 'rediscovery' of the Hardy-Weinberg distribution.

    PubMed

    Stark, Alan; Seneta, Eugene

    2012-04-01

    Around 1923 the soon-to-be famous Soviet mathematician and probabilist Sergei N. Bernstein started to construct an axiomatic foundation of a theory of heredity. He began from the premise of stationarity (constancy of type proportions) from the first generation of offspring. This led him to derive the Mendelian coefficients of heredity. It appears that he had no direct influence on the subsequent development of population genetics. A basic assumption of Bernstein was that parents coupled randomly to produce offspring. This paper shows that a simple model of non-random mating, which nevertheless embodies a feature of the Hardy-Weinberg Law, can produce Mendelian coefficients of heredity while maintaining the population distribution. How W. Johannsen's monograph influenced Bernstein is discussed.

  16. A Small and Slim Coaxial Probe for Single Rice Grain Moisture Sensing

    PubMed Central

    You, Kok Yeow; Mun, Hou Kit; You, Li Ling; Salleh, Jamaliah; Abbas, Zulkifly

    2013-01-01

    A moisture detection method for single rice grains using a slim and small open-ended coaxial probe is presented. The coaxial probe is suitable for the nondestructive measurement of moisture values in rice grains ranging from 9.5% to 26%. Empirical polynomial models are developed to predict the gravimetric moisture content of rice based on reflection coefficients measured with a vector network analyzer. The relationship between the reflection coefficient and relative permittivity was also established using a regression method and expressed as a polynomial model, whose coefficients were obtained by fitting data from a Finite Element-based simulation. The designed single-grain sample holder and experimental set-up are also described. The measurement of single rice grains in this study is more precise than conventional measurements of bulk rice grains, as the random air gaps present in bulk grains are excluded. PMID:23493127

  17. Oscillations and chaos in neural networks: an exactly solvable model.

    PubMed Central

    Wang, L P; Pichler, E E; Ross, J

    1990-01-01

    We consider a randomly diluted higher-order network with noise, consisting of McCulloch-Pitts neurons that interact by Hebbian-type connections. For this model, exact dynamical equations are derived and solved for both parallel and random sequential updating algorithms. For parallel dynamics, we find a rich spectrum of different behaviors including static retrieving and oscillatory and chaotic phenomena in different parts of the parameter space. The bifurcation parameters include first- and second-order neuronal interaction coefficients and a rescaled noise level, which represents the combined effects of the random synaptic dilution, interference between stored patterns, and additional background noise. We show that a marked difference in terms of the occurrence of oscillations or chaos exists between neural networks with parallel and random sequential dynamics. PMID:2251287

  18. Influence of inhomogeneous surface heat capacity on the estimation of radiative response coefficients in a two-zone energy balance model

    NASA Astrophysics Data System (ADS)

    Park, Jungmin; Choi, Yong-Sang

    2018-04-01

    Observationally constrained values of the global radiative response coefficient are pivotal to assess the reliability of modeled climate feedbacks. A widely used approach is to measure transient global radiative imbalance related to surface temperature changes. However, in this approach, a potential error in the estimate of radiative response coefficients may arise from surface inhomogeneity in the climate system. We examined this issue theoretically using a simple two-zone energy balance model. Here, we quantified the potential error by subtracting the prescribed radiative response coefficient from those calculated within the two-zone framework. Each zone was characterized by a different magnitude of the radiative response coefficient and the surface heat capacity, and the dynamical heat transport in the atmosphere between the zones was parameterized as a linear function of the temperature difference between the zones. The model system was then forced by randomly generated, monthly varying forcing mimicking time-varying forcing as in observations. The repeated simulations showed that inhomogeneous surface heat capacity causes considerable miscalculation (down to -1.4 W m-2 K-1, equivalent to 31.3% of the prescribed value) of the global radiative response coefficient. The dynamical heat transport reduced this miscalculation driven by inhomogeneity of surface heat capacity. Therefore, the estimation of radiative response coefficients using the surface temperature-radiation relation is appropriate for homogeneous surface areas least affected by the exterior.
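
    A toy version of this experiment is easy to reproduce. The sketch below (Python; all numerical values are assumptions for illustration, not the paper's configuration) integrates a two-zone energy balance model with random monthly forcing and then regresses global-mean radiative imbalance on global-mean temperature, so the recovered slope can be compared with the prescribed response coefficients.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    months = 2400
    dt = 1.0 / 12.0                       # years
    lam = np.array([-1.0, -2.0])          # prescribed radiative response, W m-2 K-1
    C = np.array([2.0, 20.0])             # inhomogeneous heat capacities, W yr m-2 K-1
    beta = 1.0                            # dynamical heat transport coefficient, W m-2 K-1

    T = np.zeros((months, 2))
    F = rng.normal(0.0, 1.0, size=(months, 2))       # random monthly forcing, W m-2
    for t in range(1, months):
        transport = beta * (T[t - 1, ::-1] - T[t - 1])   # flux toward the colder zone
        T[t] = T[t - 1] + dt * (F[t] + lam * T[t - 1] + transport) / C

    Tg = T.mean(axis=1)                   # global means (equal-area zones assumed)
    Ng = (F + lam * T).mean(axis=1)       # net radiative imbalance (transport cancels globally)
    lam_hat = np.polyfit(Tg, Ng, 1)[0]
    print(lam_hat, lam.mean())            # regression estimate vs prescribed mean
    ```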

  19. Electromechanical imitator of antilock braking modes of wheels with pneumatic tire and its application for the runways friction coefficient measurement

    NASA Astrophysics Data System (ADS)

    Putov, A. V.; Kopichev, M. M.; Ignatiev, K. V.; Putov, V. V.; Stotckaia, A. D.

    2017-01-01

    This paper discusses a technique implementing a new method of runway friction coefficient measurement, based on the proposed principle of controlling the braking of a measuring wheel so as to imitate antilock braking modes close to the real braking modes realized by aircraft anti-skid systems during landing. A model of the towed measuring device implementing this technique is also described. To increase the repeatability accuracy of the electromechanical braking imitation system, an adaptive sideslip (brake) control system is proposed. Based on the Burkhard model and additive random processes, several mathematical models were created that describe the friction coefficient distribution along the airstrip with different qualitative properties. Computer models of friction coefficient measurement were designed and, for the first time, the correlation between friction coefficient measurement results and the shape, intensity and cycle frequency of the measuring wheel's antilock braking modes was investigated. Sketch engineering documentation was prepared, and a prototype of the latest-generation measuring device is ready for use. The measuring device was tested on an autonomous electromechanical laboratory treadmill bench. The experiments confirmed the effectiveness of imitating antilock braking modes for solving the problem of runway friction coefficient measurement.

  20. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Optical sensors aboard Earth-orbiting satellites, such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), are calibrated on the assumption that the sensor's radiometric response in the Reflective Solar Bands (RSB), relating the aperture spectral radiance to the sensor Digital Number (DN) readout, is described by a quadratic polynomial. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
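
    As a small illustration of the weighting issue, the sketch below (Python; the response coefficients and noise model are invented for the example) fits a quadratic radiometric response by weighted least squares, weighting each point by the inverse of its count noise. The full Maximum Likelihood procedure described in the abstract goes further, propagating model error into the weights as well.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    radiance = np.linspace(0.05, 1.0, 40)        # attenuated aperture radiance (arbitrary units)
    c0, c1, c2 = 5.0, 800.0, -30.0               # assumed response coefficients
    dn_true = c0 + c1 * radiance + c2 * radiance ** 2
    sigma = np.sqrt(1.0 + 0.02 * dn_true)        # signal-dependent noise + digitization floor
    dn_obs = dn_true + rng.normal(0.0, sigma)

    # np.polyfit applies the weights to the residuals, so w = 1/sigma gives the
    # usual inverse-variance weighted least squares
    coef = np.polyfit(radiance, dn_obs, deg=2, w=1.0 / sigma)
    print(coef[::-1])                            # recovered c0, c1, c2
    ```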

  1. Higher-order clustering in networks

    NASA Astrophysics Data System (ADS)

    Yin, Hao; Benson, Austin R.; Leskovec, Jure

    2018-05-01

    A fundamental property of complex networks is the tendency for edges to cluster. The extent of the clustering is typically quantified by the clustering coefficient, which is the probability that a length-2 path is closed, i.e., induces a triangle in the network. However, higher-order cliques beyond triangles are crucial to understanding complex networks, and the clustering behavior with respect to such higher-order network structures is not well understood. Here we introduce higher-order clustering coefficients that measure the closure probability of higher-order network cliques and provide a more comprehensive view of how the edges of complex networks cluster. Our higher-order clustering coefficients are a natural generalization of the traditional clustering coefficient. We derive several properties about higher-order clustering coefficients and analyze them under common random graph models. Finally, we use higher-order clustering coefficients to gain new insights into the structure of real-world networks from several domains.
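
    A direct, if brute-force, implementation of this idea is straightforward for small graphs. The sketch below (Python with networkx; written from the definition given in the abstract, so treat it as an assumption rather than the authors' reference code) counts l-cliques with an adjacent edge and checks how often the outside node closes an (l+1)-clique; ell=2 recovers the ordinary global clustering coefficient.

    ```python
    import networkx as nx

    def higher_order_clustering(G, ell=2):
        """Fraction of (ell-clique, adjacent edge) wedges closed into (ell+1)-cliques."""
        cliques = [c for c in nx.enumerate_all_cliques(G) if len(c) == ell]
        wedges = closed = 0
        for c in cliques:
            cset = set(c)
            for u in c:
                for x in G[u]:
                    if x in cset:
                        continue
                    wedges += 1
                    # x is adjacent to u already; closure needs adjacency to the rest
                    if all(x in G[w] for w in cset if w != u):
                        closed += 1
        return closed / wedges if wedges else float("nan")

    G = nx.erdos_renyi_graph(300, 0.05, seed=1)
    print(higher_order_clustering(G, 2), nx.transitivity(G))  # these two should agree
    print(higher_order_clustering(G, 3))
    ```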

  2. Estimation of real-time runway surface contamination using flight data recorder parameters

    NASA Astrophysics Data System (ADS)

    Curry, Donovan

    Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise the longitudinal, lateral and normal forces due to landing are calculated, along with the individual deceleration components existent when an aircraft comes to a rest during ground roll. In order to validate this hypothesis, a six degree of freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in this research effort. With all needed parameters, a comparison and validation with simulated and estimated data, under different runway conditions, is performed. Finally, this report presents the results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses have been performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to give a reasonably accurate estimate when compared to the simulated friction coefficient. This remains true when white noise is introduced to the FDR and estimated parameters and when crosswind is introduced to the simulation. The linear analysis shows that the minimum frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that, with estimated parameters increased and decreased by up to 25% at random, high-priority parameters have to be accurate to within at least +/-5% to produce less than a 1% change in the average coefficient of friction. Non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously by up to +/-27%. At worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.

  3. Distributional behavior of diffusion coefficients obtained by single trajectories in annealed transit time model

    NASA Astrophysics Data System (ADS)

    Akimoto, Takuma; Yamamoto, Eiji

    2016-12-01

    Local diffusion coefficients in disordered systems such as spin glass systems and living cells are highly heterogeneous and may change over time. Such a time-dependent and spatially heterogeneous environment results in irreproducibility of single-particle-tracking measurements. Irreproducibility of time-averaged observables has been theoretically studied in the context of weak ergodicity breaking in stochastic processes. Here, we provide rigorous descriptions of equilibrium and non-equilibrium diffusion processes for the annealed transit time model, which is a heterogeneous diffusion model in living cells. We give analytical solutions for the mean square displacement (MSD) and the relative standard deviation of the time-averaged MSD for equilibrium and non-equilibrium situations. We find that the time-averaged MSD grows linearly with time and that the time-averaged diffusion coefficients are intrinsically random (irreproducible) even in the long-time measurements in non-equilibrium situations. Furthermore, the distribution of the time-averaged diffusion coefficients converges to a universal distribution in the sense that it does not depend on initial conditions. Our findings pave the way for a theoretical understanding of distributional behavior of the time-averaged diffusion coefficients in disordered systems.

  4. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    PubMed Central

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
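
    The special case mentioned here is easy to demonstrate numerically. In the sketch below (Python with statsmodels; the data-generating process is invented for the example), the working Poisson model with main terms only is deliberately misspecified, yet the treatment coefficient still recovers the marginal log rate ratio because treatment is randomized.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 20000
    x = rng.normal(size=n)                       # baseline covariate
    a = rng.integers(0, 2, size=n)               # randomized treatment assignment
    y = rng.poisson(np.exp(0.2 + 0.5 * a + 0.8 * np.tanh(x)))  # truth is not log-linear in x

    X = sm.add_constant(np.column_stack([a, x])) # misspecified main-terms working model
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print(fit.params[1])                         # close to the marginal log rate ratio 0.5
    ```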

  5. An Entropy-Based Measure of Dependence between Two Groups of Random Variables. Research Report. ETS RR-07-20

    ERIC Educational Resources Information Center

    Kong, Nan

    2007-01-01

    In multivariate statistics, the linear relationship among random variables has been fully explored in the past. This paper looks into the dependence of one group of random variables on another group of random variables using (conditional) entropy. A new measure, called the K-dependence coefficient or dependence coefficient, is defined using…

  6. Efficient sampling of complex network with modified random walk strategies

    NASA Astrophysics Data System (ADS)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies: choosing seed node (CSN) random walk and no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with all three random walk strategies. First, networks of small scale and simple structure are conducive to sampling. Second, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. Third, all the degree distributions of the subnets are slightly biased toward the high degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, salient characteristics such as the large clustering coefficient and the fluctuations of the degree distribution are reproduced well by these random walk strategies.
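
    For concreteness, a minimal version of the no-retracing strategy can be written in a few lines (Python with networkx; the fallback at dead ends and the subnet statistics shown are our assumptions, not the paper's exact protocol).

    ```python
    import random
    import networkx as nx

    random.seed(1)

    def nr_random_walk(G, seed_node, steps):
        """No-retracing random walk: never step straight back along the edge just used."""
        walk, prev, node = [seed_node], None, seed_node
        for _ in range(steps):
            nbrs = [v for v in G[node] if v != prev]
            if not nbrs:                      # dead end: retracing is the only way out
                nbrs = list(G[node])
            prev, node = node, random.choice(nbrs)
            walk.append(node)
        return walk

    G = nx.barabasi_albert_graph(1000, 3, seed=1)
    sub = G.subgraph(nr_random_walk(G, seed_node=0, steps=300))
    print(sub.number_of_nodes(), nx.average_clustering(sub))
    ```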

  7. A Numerical Simulation of Scattering from One-Dimensional Inhomogeneous Dielectric Random Surfaces

    NASA Technical Reports Server (NTRS)

    Sarabandi, Kamal; Oh, Yisok; Ulaby, Fawwaz T.

    1996-01-01

    In this paper, an efficient numerical solution for the scattering problem of inhomogeneous dielectric rough surfaces is presented. The inhomogeneous dielectric random surface represents a bare soil surface and is considered to be comprised of a large number of randomly positioned dielectric humps of different sizes, shapes, and dielectric constants above an impedance surface. Clods with nonuniform moisture content and rocks are modeled by inhomogeneous dielectric humps and the underlying smooth wet soil surface is modeled by an impedance surface. In this technique, an efficient numerical solution for the constituent dielectric humps over an impedance surface is obtained using Green's function derived by the exact image theory in conjunction with the method of moments. The scattered field from a sample of the rough surface is obtained by summing the scattered fields from all the individual humps of the surface coherently ignoring the effect of multiple scattering between the humps. The statistical behavior of the scattering coefficient sigma(sup 0) is obtained from the calculation of scattered fields of many different realizations of the surface. Numerical results are presented for several different roughnesses and dielectric constants of the random surfaces. The numerical technique is verified by comparing the numerical solution with the solution based on the small perturbation method and the physical optics model for homogeneous rough surfaces. This technique can be used to study the behavior of scattering coefficient and phase difference statistics of rough soil surfaces for which no analytical solution exists.

  8. Choosing the best index for the average score intraclass correlation coefficient.

    PubMed

    Shieh, Gwowen

    2016-09-01

    The intraclass correlation coefficient (ICC)(2) index from a one-way random effects model is widely used to describe the reliability of mean ratings in behavioral, educational, and psychological research. Despite its apparent utility, the essential property of ICC(2) as a point estimator of the average score intraclass correlation coefficient is seldom mentioned. This article considers several potential measures and compares their performance with ICC(2). Analytical derivations and numerical examinations are presented to assess the bias and mean square error of the alternative estimators. The results suggest that more advantageous indices can be recommended over ICC(2) for their theoretical implication and computational ease.
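
    For reference, the average-score index discussed here can be computed directly from the one-way random effects mean squares. The sketch below (Python; the simulated data and effect sizes are assumptions) uses the classical estimator (MSB - MSW)/MSB for the reliability of the mean of k ratings.

    ```python
    import numpy as np

    def icc_average_score(x):
        """Average-score ICC from a one-way random effects layout.

        x : (n_targets, k_raters) array; returns (MSB - MSW) / MSB
        """
        n, k = x.shape
        msb = k * np.var(x.mean(axis=1), ddof=1)            # between-target mean square
        msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
        return (msb - msw) / msb

    rng = np.random.default_rng(5)
    truth = rng.normal(0.0, 1.0, size=(30, 1))              # true target effects
    ratings = truth + rng.normal(0.0, 0.8, size=(30, 4))    # 4 raters, noisy scores
    print(icc_average_score(ratings))
    ```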

  9. Strategic Use of Random Subsample Replication and a Coefficient of Factor Replicability

    ERIC Educational Resources Information Center

    Katzenmeyer, William G.; Stenner, A. Jackson

    1975-01-01

    The problem of demonstrating replicability of factor structure across random variables is addressed. Procedures are outlined which combine the use of random subsample replication strategies with the correlations between factor score estimates across replicate pairs to generate a coefficient of replicability and confidence intervals associated with…

  10. Note on coefficient matrices from stochastic Galerkin methods for random diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Tao, E-mail: tzhou@lsec.cc.ac.c; Tang Tao, E-mail: ttang@hkbu.edu.h

    2010-11-01

    In a recent work by Xiu and Shen [D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, J. Comput. Phys. 228 (2009) 266-281], the Galerkin methods are used to solve stochastic diffusion equations in random media, where some properties for the coefficient matrix of the resulting system are provided. They also posed an open question on the properties of the coefficient matrix. In this work, we will provide some results related to the open question.

  11. Application of theoretical models to active and passive remote sensing of saline ice

    NASA Technical Reports Server (NTRS)

    Han, H. C.; Kong, J. A.; Shin, R. T.; Nghiem, S. V.; Kwok, R.

    1992-01-01

    The random medium model is used to interpret the polarimetric active and passive measurements of saline ice. The ice layer is described as a host ice medium embedded with randomly distributed inhomogeneities, and the underlying sea water is considered as a homogeneous half-space. The scatterers in the ice layer are modeled with an ellipsoidal correlation function. The orientation of the scatterers is vertically aligned and azimuthally random. The strong permittivity fluctuation theory is used to calculate the effective permittivity, and the distorted Born approximation is used to obtain the polarimetric scattering coefficients. Thermal emissions based on the reciprocity and energy conservation principles are calculated. The effects of the random roughness at the air-ice and ice-water interfaces are accounted for by adding the surface scattering to the volume scattering return incoherently. The theoretical model, which has been successfully applied to analyze the radar backscatter data of first-year sea ice, is used to interpret the measurements performed in the Cold Regions Research and Engineering Laboratory's CRRELEX program.

  12. Statistical methods for efficient design of community surveys of response to noise: Random coefficients regression models

    NASA Technical Reports Server (NTRS)

    Tomberlin, T. J.

    1985-01-01

    Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed here provide a basis for the sample design decisions in such studies. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
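
    A random coefficients regression of the kind described can be sketched with standard mixed model software. In the Python example below (statsmodels; the variable names, effect sizes and two-stage structure are illustrative assumptions), annoyance depends on noise level through an area-specific random intercept and random slope.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(11)
    areas, per_area = 20, 30
    rows = []
    for a in range(areas):
        b0 = rng.normal(0.0, 0.5)             # area-level random intercept
        b1 = rng.normal(0.08, 0.02)           # area-level random slope on noise level
        noise = rng.uniform(50, 75, per_area) # day-night noise level, dB
        annoy = b0 + b1 * noise + rng.normal(0.0, 0.8, per_area)
        rows += [(a, v, y) for v, y in zip(noise, annoy)]
    df = pd.DataFrame(rows, columns=["area", "noise", "annoyance"])

    # random intercept and slope by area: a two-stage (multi-level) design
    m = sm.MixedLM.from_formula("annoyance ~ noise", groups="area",
                                re_formula="~noise", data=df).fit()
    print(m.summary())
    ```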

  13. Wave-induced fluid flow in random porous media: Attenuation and dispersion of elastic waves

    NASA Astrophysics Data System (ADS)

    Müller, Tobias M.; Gurevich, Boris

    2005-05-01

    A detailed analysis of the relationship between elastic waves in inhomogeneous, porous media and the effect of wave-induced fluid flow is presented. Based on the results of the poroelastic first-order statistical smoothing approximation applied to Biot's equations of poroelasticity, a model for elastic wave attenuation and dispersion due to wave-induced fluid flow in 3-D randomly inhomogeneous poroelastic media is developed. Attenuation and dispersion depend on linear combinations of the spatial correlations of the fluctuating poroelastic parameters. The observed frequency dependence is typical for a relaxation phenomenon. Further, the analytic properties of attenuation and dispersion are analyzed. It is shown that the low-frequency asymptote of the attenuation coefficient of a plane compressional wave is proportional to the square of frequency. At high frequencies the attenuation coefficient becomes proportional to the square root of frequency. A comparison with the 1-D theory shows that attenuation is of the same order but slightly larger in 3-D random media. Several modeling choices of the approach, including the effect of cross correlations between fluid and solid phase properties, are demonstrated. The potential application of the results to real porous materials is discussed.

  14. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    NASA Astrophysics Data System (ADS)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and for refining false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented in the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized with an application-specific integrated circuit digital design flow in 180-nm CMOS technology as well as on a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
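
    The software counterpart of this pipeline fits in a page. The sketch below (Python/NumPy; a floating-point reference implementation for illustration, not the paper's fixed-point VLSI design) estimates a projective model with RANSAC: repeatedly fit a homography to four random correspondences, count inliers by reprojection error, and refit on the best consensus set.

    ```python
    import numpy as np

    def homography_dlt(src, dst):
        """Direct linear transform: homography from >= 4 point correspondences."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return Vt[-1].reshape(3, 3)          # null-space vector as 3x3 homography

    def ransac_homography(src, dst, iters=500, tol=3.0, seed=0):
        rng = np.random.default_rng(seed)
        best = np.zeros(len(src), dtype=bool)
        for _ in range(iters):
            idx = rng.choice(len(src), size=4, replace=False)
            H = homography_dlt(src[idx], dst[idx])
            p = np.c_[src, np.ones(len(src))] @ H.T
            err = np.linalg.norm(p[:, :2] / p[:, 2:3] - dst, axis=1)
            inliers = err < tol
            if inliers.sum() > best.sum():
                best = inliers
        return homography_dlt(src[best], dst[best]), best   # refit on consensus set

    rng = np.random.default_rng(2)
    src = rng.uniform(0, 100, size=(60, 2))
    H_true = np.array([[1.0, 0.02, 5.0], [-0.01, 1.0, -3.0], [1e-4, 2e-4, 1.0]])
    q = np.c_[src, np.ones(60)] @ H_true.T
    dst = q[:, :2] / q[:, 2:3]
    dst[:10] += rng.uniform(20, 50, size=(10, 2))   # 10 gross false matches
    H_est, inliers = ransac_homography(src, dst)
    print(inliers.sum(), H_est / H_est[2, 2])       # normalized estimate ~ H_true
    ```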

  15. Scattering from a random layer of leaves in the physical optics limit

    NASA Technical Reports Server (NTRS)

    Lang, R. H.; Seker, S. S.; Le Vine, D. M.

    1982-01-01

    Backscatter of electromagnetic radiation from a layer of vegetation over flat lossy ground has been studied in collaborative research at the George Washington University and the Goddard Space Flight Center. In this work the vegetation is composed of leaves which are modeled by a random collection of lossy dielectric disks. Backscattering coefficients for the vegetation layer have been calculated in the case of disks whose diameter is large compared to wavelength. These backscattering coefficients are obtained in terms of the scattering amplitude of an individual disk by employing the distorted Born procedure. The scattering amplitude for a disk which is large compared to wavelength is then found by physical optics techniques. Computed results are interpreted in terms of dominant reflected and transmitted contributions from the disks and ground.

  17. Measuring Developmental Students' Mathematics Anxiety

    ERIC Educational Resources Information Center

    Ding, Yanqing

    2016-01-01

    This study conducted an item-level analysis of mathematics anxiety and examined the dimensionality of mathematics anxiety in a sample of developmental mathematics students (N = 162) using the Multidimensional Random Coefficients Multinomial Logit Model (MRCMLM). The results indicate a moderately correlated factor structure of mathematics anxiety (r =…

  18. Fermi problem in disordered systems

    NASA Astrophysics Data System (ADS)

    Menezes, G.; Svaiter, N. F.; de Mello, H. R.; Zarro, C. A. D.

    2017-10-01

    We revisit the Fermi two-atom problem in the framework of disordered systems. In our model, we consider a two-qubit system linearly coupled with a quantum massless scalar field. We analyze the energy transfer between the qubits under different experimental perspectives. In addition, we assume that the coefficients of the Klein-Gordon equation are random functions of the spatial coordinates. The disordered medium is modeled by a centered, stationary, and Gaussian process. We demonstrate that the classical notion of causality emerges only in the wave zone in the presence of random fluctuations of the light cone. Possible repercussions are discussed.

  19. A Combined Cognitive Stimulation and Physical Exercise Programme (MINDVital) in Early Dementia: Differential Effects on Single- and Dual-Task Gait Performance.

    PubMed

    Tay, Laura; Lim, Wee Shiong; Chan, Mark; Ali, Noorhazlina; Chong, Mei Sian

    2016-01-01

    Gait disorders are common in early dementia, with particularly pronounced dual-task deficits, contributing to the increased fall risk and mobility decline associated with cognitive impairment. This study examines the effects of a combined cognitive stimulation and physical exercise programme (MINDVital) on gait performance under single- and dual-task conditions in older adults with mild dementia. Thirty-nine patients with early dementia participated in a multi-disciplinary rehabilitation programme comprising both physical exercise and cognitive stimulation. The programme was conducted in 8-week cycles with participants attending once weekly, and all participants completed 2 successive cycles. Cognitive, functional performance and behavioural symptoms were assessed at baseline and at the end of each 8-week cycle. Gait speed was examined under both single- (Timed Up and Go and 6-metre walk tests) and dual-task (animal category and serial counting) conditions. A random effects model was performed for the independent effect of MINDVital on the primary outcome variable of gait speed under dual-task conditions. The mean age of patients enrolled in the rehabilitation programme was 79 ± 6.2 years; 25 (64.1%) had a diagnosis of Alzheimer's dementia, and 26 (66.7%) were receiving a cognitive enhancer therapy. There was a significant improvement in cognitive performance [random effects coefficient (standard error) = 0.90 (0.31), p = 0.003] and gait speed under both dual-task situations [animal category: random effects coefficient = 0.04 (0.02), p = 0.039; serial counting: random effects coefficient = 0.05 (0.02), p = 0.013], with reduced dual-task cost for gait speed [serial counting: random effects coefficient = -4.05 (2.35), p = 0.086] following successive MINDVital cycles. No significant improvement in single-task gait speed was observed. Improved cognitive performance over time was a significant determinant of changes in dual-task gait speed [random effects coefficients = 0.01 (0.005), p = 0.048, and 0.02 (0.005), p = 0.003 for category fluency and counting backwards, respectively]. A combined physical and cognitive rehabilitation programme leads to significant improvements in dual-task walking in early dementia, which may be attributable to improvement in cognitive performance, as single-task gait performance remained stable. © 2016 S. Karger AG, Basel.

  20. Nonergodic property of the space-time coupled CTRW: Dependence on the long-tailed property and correlation

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Li, Baohe; Chen, Xiaosong

    2018-02-01

    The space-time coupled continuous time random walk model is a stochastic framework of anomalous diffusion with many applications in physics, geology and biology. In this manuscript the time averaged mean squared displacement and the nonergodic property of a space-time coupled continuous time random walk model are studied; the model is a prototype of the coupled continuous time random walk that has been presented and researched intensively with various methods. The results in the present manuscript show that the time averaged mean squared displacements increase linearly with lag time, which means that ergodicity breaking occurs. In addition, we find that the diffusion coefficient is intrinsically random, showing both aging and enhancement; the analysis indicates that whether aging or enhancement occurs is determined by the competition between the correlation exponent γ and the waiting time's long-tailed index α.

  1. Random walks on cubic lattices with bond disorder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ernst, M.H.; van Velthoven, P.F.J.

    1986-12-01

    The authors consider diffusive systems with static disorder, such as Lorentz gases, lattice percolation, ants in a labyrinth, termite problems, random resistor networks, etc. In the case of diluted randomness the authors can apply the methods of kinetic theory to obtain systematic expansions of dc and ac transport properties in powers of the impurity concentration c. The method is applied to a hopping model on a d-dimensional cubic lattice having two types of bonds with conductivity σ and σ₀ = 1, with concentrations c and 1-c, respectively. For the square lattice the authors explicitly calculate the diffusion coefficient D(c,σ) as a function of c, including terms to O(c²), for different ratios of the bond conductivity σ. The probability of return at long times is given by P₀(t) ≈ (4πD(c,σ)t)^(-d/2), which is determined by the diffusion coefficient of the disordered system.

  2. Solving large test-day models by iteration on data and preconditioned conjugate gradient.

    PubMed

    Lidauer, M; Strandén, I; Mäntysaari, E A; Pösö, J; Kettunen, A

    1999-12-01

    A preconditioned conjugate gradient method was implemented into an iteration-on-data program for the estimation of breeding values, and its convergence characteristics were studied. An algorithm was used as a reference in which one fixed effect was solved by the Gauss-Seidel method and the other effects were solved by a second-order Jacobi method. Implementation of the preconditioned conjugate gradient required storing four vectors (size equal to the number of unknowns in the mixed model equations) in random access memory and reading the data at each round of iteration. The preconditioner comprised diagonal blocks of the coefficient matrix. Comparison of the algorithms was based on solutions of mixed model equations obtained with a single-trait animal model and a single-trait random regression test-day model. Data sets for both models used milk yield records of primiparous Finnish dairy cows. The animal model data comprised 665,629 lactation milk yields, and the random regression test-day model data comprised 6,732,765 test-day milk yields. Both models included pedigree information on 1,099,622 animals. The animal model [random regression test-day model] required 122 [305] rounds of iteration to converge with the reference algorithm, but only 88 [149] with the preconditioned conjugate gradient. Solving the random regression test-day model with the preconditioned conjugate gradient required 237 megabytes of random access memory and took 14% of the computation time needed by the reference algorithm.
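
    The gain reported here can be illustrated on any sparse symmetric positive definite system. The sketch below (Python/SciPy; the test matrix is synthetic, and a plain diagonal stands in for the diagonal blocks used as the preconditioner in the paper) counts conjugate gradient iterations with and without preconditioning.

    ```python
    import numpy as np
    from scipy.sparse import diags, random as sprandom
    from scipy.sparse.linalg import cg, LinearOperator

    rng = np.random.default_rng(2)
    n = 5000
    B = sprandom(n, n, density=1e-3, random_state=2)
    A = (B @ B.T + diags(np.linspace(1.0, 1e3, n))).tocsr()   # SPD, widely spread diagonal
    b = rng.normal(size=n)

    # Jacobi preconditioner: apply the inverse of the diagonal of A
    M = LinearOperator((n, n), matvec=lambda x: x / A.diagonal())

    def iterations(**kw):
        count = []
        cg(A, b, callback=lambda xk: count.append(1), **kw)
        return len(count)

    print(iterations(), iterations(M=M))   # preconditioning should cut the count
    ```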

  3. A new cellular automata model of traffic flow with negative exponential weighted look-ahead potential

    NASA Astrophysics Data System (ADS)

    Ma, Xiao; Zheng, Wei-Fan; Jiang, Bao-Shan; Zhang, Ji-Ye

    2016-10-01

    With the development of traffic systems, issues such as traffic jams become more and more serious. An efficient traffic flow theory is needed to guide the overall control, organization and management of traffic systems. On the basis of the cellular automata model and the traffic flow model with look-ahead potential, a new cellular automata traffic flow model with negative exponential weighted look-ahead potential is presented in this paper. By introducing the negative exponential weighting coefficient into the look-ahead potential and endowing the potential of vehicles closer to the driver with a greater coefficient, the modeling process is more suitable for the driver's random decision-making process, which is based on the traffic environment that the driver is facing. The fundamental diagrams for different weighting parameters are obtained by numerical simulations, which show that the negative exponential weighting coefficient has a pronounced effect on high-density traffic flux. Complex high-density non-linear traffic behavior is also reproduced by the numerical simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11572264, 11172247, 11402214, and 61373009).
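
    To make the weighting idea concrete, here is a toy single-speed cellular automaton in Python. This is an illustrative sketch only: the paper's model is a full NaSch-type automaton, and the update rule and parameter values below are our assumptions. The hop probability of each car is driven by a look-ahead potential in which nearer cars carry exponentially larger weights.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    L_sites, n_cars, steps = 500, 100, 200
    road = np.zeros(L_sites, dtype=int)
    road[rng.choice(L_sites, n_cars, replace=False)] = 1

    def hop_probability(road, i, horizon=8, beta=1.5):
        """1 minus the look-ahead potential; nearer occupied cells weigh more."""
        ahead = road[(i + 1 + np.arange(horizon)) % len(road)]
        w = np.exp(-beta * np.arange(horizon))     # negative exponential weights
        return 1.0 - (w * ahead).sum() / w.sum()

    flux = 0
    for _ in range(steps):
        new = road.copy()
        for i in np.nonzero(road)[0]:
            j = (i + 1) % L_sites
            if new[j] == 0 and rng.random() < hop_probability(road, i):
                new[i], new[j] = 0, 1              # advance one cell on the ring road
                flux += 1
        road = new
    print("mean flux per step:", flux / steps)
    ```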

  4. Absorption and scattering coefficient dependence of laser-Doppler flowmetry models for large tissue volumes.

    PubMed

    Binzoni, T; Leung, T S; Rüfenacht, D; Delpy, D T

    2006-01-21

    Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware.

  5. Aircraft Airframe Cost Estimation Using a Random Coefficients Model

    DTIC Science & Technology

    1979-12-01

    approach will also be used here. 2 Model Formulation: Several different types of equations could be used for the basic form of the CER, such as linear …(5) Marcotte developed several CERs for fighter aircraft airframes using the log-linear model. A plot of the residuals from the CER for recurring…of the natural logarithm. Ordinary Least Squares: The ordinary least squares procedure starts with the equation for the general linear model. The

  6. Functional form and risk adjustment of hospital costs: Bayesian analysis of a Box-Cox random coefficients model.

    PubMed

    Hollenbeak, Christopher S

    2005-10-15

    While risk-adjusted outcomes are often used to compare the performance of hospitals and physicians, the most appropriate functional form for the risk adjustment process is not always obvious for continuous outcomes such as costs. Semi-log models are used most often to correct skewness in cost data, but there has been limited research to determine whether the log transformation is sufficient or whether another transformation is more appropriate. This study explores the most appropriate functional form for risk-adjusting the cost of coronary artery bypass graft (CABG) surgery. Data included patients undergoing CABG surgery at four hospitals in the Midwest and were fit to a Box-Cox model with random coefficients (BCRC) using Markov chain Monte Carlo methods. Marginal likelihoods and Bayes factors were computed to compare alternative model specifications. Rankings of hospital performance were created from the simulation output, and the rankings produced by Bayesian estimates were compared to rankings produced by standard models fit using classical methods. Results suggest that, for these data, the most appropriate functional form is not logarithmic but corresponds to a Box-Cox transformation of -1. Furthermore, Bayes factors overwhelmingly rejected the natural log transformation. However, the hospital ranking induced by the BCRC model was not different from the ranking produced by maximum likelihood estimates of either the linear or semi-log model. Copyright (c) 2005 John Wiley & Sons, Ltd.
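
    The functional-form question can be probed quickly on any skewed cost variable by profiling the Box-Cox log-likelihood over the transformation parameter. A small sketch follows (Python/SciPy; the simulated costs are illustrative, and for genuinely log-normal data the profile peaks near 0 rather than at the -1 found in the study).

    ```python
    import numpy as np
    from scipy import stats

    # skewed, cost-like data: log-normal with median 20,000
    cost = stats.lognorm(s=0.6, scale=20000).rvs(size=500, random_state=9)

    lams = np.linspace(-2.0, 1.0, 61)
    llf = np.array([stats.boxcox_llf(lam, cost) for lam in lams])
    print("best lambda:", lams[llf.argmax()])   # 0 = semi-log model, -1 = reciprocal model
    ```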

  7. Gene expression models for prediction of longitudinal dispersion coefficient in streams

    NASA Astrophysics Data System (ADS)

    Sattar, Ahmed M. A.; Gharabaghi, Bahram

    2015-05-01

    Longitudinal dispersion is the key hydrologic process that governs transport of pollutants in natural streams. It is critical for spill action centers to be able to predict the pollutant travel time and break-through curves accurately following accidental spills in urban streams. This study presents a novel gene expression model for longitudinal dispersion developed using 150 published data sets of geometric and hydraulic parameters in natural streams in the United States, Canada, Europe, and New Zealand. The training and testing of the model were accomplished using randomly-selected 67% (100 data sets) and 33% (50 data sets) of the data sets, respectively. Gene expression programming (GEP) is used to develop empirical relations between the longitudinal dispersion coefficient and various control variables, including the Froude number which reflects the effect of reach slope, aspect ratio, and the bed material roughness on the dispersion coefficient. Two GEP models have been developed, and the prediction uncertainties of the developed GEP models are quantified and compared with those of existing models, showing improved prediction accuracy in favor of GEP models. Finally, a parametric analysis is performed for further verification of the developed GEP models. The main reason for the higher accuracy of the GEP models compared to the existing regression models is that exponents of the key variables (aspect ratio and bed material roughness) are not constants but a function of the Froude number. The proposed relations are both simple and accurate and can be effectively used to predict the longitudinal dispersion coefficients in natural streams.

  8. Multidimensional Computerized Adaptive Testing for Indonesia Junior High School Biology

    ERIC Educational Resources Information Center

    Kuo, Bor-Chen; Daud, Muslem; Yang, Chih-Wei

    2015-01-01

    This paper describes a curriculum-based multidimensional computerized adaptive test that was developed for Indonesian junior high school Biology. In adherence to the different Biology dimensions of the Indonesian curriculum, 300 items were constructed and then administered to 2238 students. A multidimensional random coefficients multinomial logit model was…

  9. Microscopic Interpretation and Generalization of the Bloch-Torrey Equation for Diffusion Magnetic Resonance

    PubMed Central

    Seroussi, Inbar; Grebenkov, Denis S.; Pasternak, Ofer; Sochen, Nir

    2017-01-01

    In order to bridge microscopic molecular motion with the macroscopic diffusion MR signal in complex structures, we propose a general stochastic model for molecular motion in a magnetic field. The Fokker-Planck equation of this model governs the probability density function describing the diffusion-magnetization propagator. From the propagator we derive a generalized version of the Bloch-Torrey equation and the relation to the random phase approach. This derivation does not require assumptions such as a spatially constant diffusion coefficient, or ad-hoc selection of a propagator. In particular, the boundary conditions that implicitly incorporate the microstructure into the diffusion MR signal can now be included explicitly through a spatially varying diffusion coefficient. While our generalization reduces to the conventional Bloch-Torrey equation for piecewise constant diffusion coefficients, it also predicts scenarios in which an additional term to the equation is required to fully describe the MR signal. PMID:28242566
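
    A minimal numerical sketch of the kind of equation the paper generalizes: one-dimensional transverse magnetization evolving under a spatially varying diffusion coefficient and a linear gradient, dm/dt = d/dx(D dm/dx) - i*gamma*g*x*m. The grid, gradient strength, and D(x) profile are illustrative choices, not values from the paper.

    ```python
    import numpy as np

    n, L = 200, 100e-6                       # grid points, domain size (m)
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    D = np.where(x < L / 2, 2.0e-9, 0.5e-9)  # piecewise-constant diffusivity (m^2/s)
    gamma_g = 2.675e8 * 0.04                 # gyromagnetic ratio * gradient (rad/(m s))

    m = np.ones(n, dtype=complex)            # transverse magnetization
    dt = 0.2 * dx**2 / D.max()               # explicit-scheme stability limit
    for _ in range(2000):
        flux = 0.5 * (D[1:] + D[:-1]) * np.diff(m) / dx  # D * dm/dx at cell faces
        div = np.zeros(n, dtype=complex)
        div[1:-1] = np.diff(flux) / dx                   # interior div(D grad m)
        m += dt * (div - 1j * gamma_g * x * m)           # diffusion + gradient phase

    print(f"normalized MR signal: {abs(m.mean()):.4f}")
    ```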

  10. Do the methods used to analyse missing data really matter? An examination of data from an observational study of Intermediate Care patients.

    PubMed

    Kaambwa, Billingsley; Bryan, Stirling; Billingham, Lucinda

    2012-06-27

    Missing data is a common statistical problem in healthcare datasets from populations of older people. Some argue that arbitrarily assuming the mechanism responsible for the missingness, and therefore the method for dealing with it, is not the best option, but is this always true? This paper explores what happens when extra information that suggests that a particular mechanism is responsible for missing data is disregarded and methods for dealing with the missing data are chosen arbitrarily. Regression models based on 2,533 intermediate care (IC) patients from the largest evaluation of IC done and published in the UK to date were used to explain variation in costs, EQ-5D and Barthel index. Three methods for dealing with missingness were utilised, each assuming a different mechanism as being responsible for the missing data: complete case analysis (assuming missing completely at random, MCAR), multiple imputation (assuming missing at random, MAR) and a Heckman selection model (assuming missing not at random, MNAR). Differences in results were gauged by examining the signs of coefficients as well as the sizes of both coefficients and associated standard errors. Extra information strongly suggested that missing cost data were MCAR. The results show that the MCAR- and MAR-based methods yielded similar results, with the sizes of most coefficients and standard errors differing by less than 3.4%, while those based on MNAR methods were statistically different (up to 730% bigger). Significant variables in all regression models also had the same direction of influence on costs. All three mechanisms of missingness were shown to be potential causes of the missing EQ-5D and Barthel data. The method chosen to deal with missing data did not seem to have any significant effect on the results for these data, as all led to broadly similar conclusions, with sizes of coefficients and standard errors differing by less than 54% and 322%, respectively. Arbitrary selection of methods to deal with missing data should be avoided. Using extra information gathered during the data collection exercise about the cause of missingness to guide this selection would be more appropriate.
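
    A toy contrast between two of the three strategies discussed (complete-case analysis vs. an imputation-based analysis standing in for multiple imputation); the data and the MCAR mechanism are synthetic, and no Heckman selection model is sketched here.

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 1000
    X = rng.normal(size=(n, 2))
    y = 2.0 + 1.5 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n)

    X_miss = X.copy()
    X_miss[rng.random(n) < 0.3, 1] = np.nan       # 30% missing completely at random

    # Complete-case analysis (the MCAR-appropriate choice)
    cc = ~np.isnan(X_miss[:, 1])
    b_cc = LinearRegression().fit(X_miss[cc], y[cc]).coef_

    # Single chained-equations imputation including the outcome (an MI stand-in)
    imp = IterativeImputer(random_state=0).fit_transform(np.column_stack([X_miss, y]))
    b_mi = LinearRegression().fit(imp[:, :2], y).coef_

    print("complete-case coefficients:", np.round(b_cc, 3))
    print("imputation coefficients:   ", np.round(b_mi, 3))
    ```

    Under MCAR both routes should recover similar coefficients, mirroring the small differences the study reports for the cost data.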

  11. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Crevillén-García, D.; Power, H.

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
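
    A two-level sketch of the telescoping multilevel estimator, E[P_1] ≈ mean of coarse samples plus mean of fine-minus-coarse corrections, on a toy quadrature "solver"; the groundwater travel-time model of the paper is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def solve(k, steps):
        # left-endpoint quadrature of exp(k) = 1 + int_0^1 k*exp(k*t) dt;
        # finer `steps` plays the role of a more accurate, more expensive level
        t = np.linspace(0.0, 1.0, steps + 1)[:-1]
        return 1.0 + np.sum(k * np.exp(k * t)) / steps

    def level_samples(n, coarse, fine):
        ks = rng.normal(size=n)              # random input (log-conductivity proxy)
        f = np.array([solve(k, fine) for k in ks])
        c = np.array([solve(k, coarse) for k in ks]) if coarse else 0.0
        return f - c

    y0 = level_samples(4000, 0, 8)           # level 0: coarse solver alone
    y1 = level_samples(400, 8, 64)           # level 1: fine-minus-coarse correction
    print(f"MLMC estimate {y0.mean() + y1.mean():.4f} vs exact E[e^K] = {np.exp(0.5):.4f}")
    ```

    The correction term has small variance, which is why far fewer fine-level samples are needed; that is the cost saving the multilevel comparison in the paper quantifies.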

  14. Representation of limb kinematics in Purkinje cell simple spike discharge is conserved across multiple tasks.

    PubMed

    Hewitt, Angela L; Popa, Laurentiu S; Pasalar, Siavash; Hendrix, Claudia M; Ebner, Timothy J

    2011-11-01

    Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or whether kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of the adjusted R²), followed by position (28 ± 24%) and speed (11 ± 19%). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower adjusted R² values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics.
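
    A sketch of the lag-scan regression described (regress firing on velocity shifted by a range of leads/lags τ and keep the τ that maximizes R²); the signals, bin width, and lag range are synthetic stand-ins for the recorded data.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    dt = 0.01                                     # 10-ms bins
    v = np.convolve(rng.normal(size=3000), np.ones(50) / 50, mode="same")  # smooth velocity
    lead = 10                                     # firing leads kinematics by 100 ms
    rate = 50.0 + 20.0 * np.roll(v, -lead) + rng.normal(scale=1.0, size=v.size)

    best_lag, best_r2 = 0, -np.inf
    for lag in range(-25, 26):                    # scan tau from -250 ms to +250 ms
        vs = np.roll(v, lag)[:, None]
        r2 = LinearRegression().fit(vs, rate).score(vs, rate)
        if r2 > best_r2:
            best_lag, best_r2 = lag, r2

    print(f"best tau = {best_lag * dt * 1000:+.0f} ms (negative = firing leads), "
          f"R^2 = {best_r2:.3f}")
    ```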

  15. Prediction models for clustered data: comparison of a random intercept and standard regression model

    PubMed Central

    2013-01-01

    Background When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which interest centers on predictor effects at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models with either standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results The model developed with random effect analysis showed better discrimination than the standard approach if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the model developed with the standard approach, while calibration measures adapted to the clustered data structure showed good calibration for the prediction model with random intercept. Conclusion The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436
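
    A toy version of the paper's central contrast: a pooled logistic model vs. one whose predictions use cluster effects. Cluster dummy variables stand in for a true random intercept here (a fixed-effects approximation, not the authors' model), and the c-index is computed in-sample.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)
    n, k = 2000, 19                                  # patients, clusters
    cluster = rng.integers(0, k, size=n)
    u = rng.normal(scale=1.0, size=k)                # cluster (anesthesiologist) effects
    x = rng.normal(size=n)
    p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x + u[cluster])))
    y = rng.random(n) < p

    X_pooled = x[:, None]
    X_clust = np.column_stack([x, np.eye(k)[cluster]])   # covariate + cluster dummies

    m1 = LogisticRegression(max_iter=1000).fit(X_pooled, y)
    m2 = LogisticRegression(C=10.0, max_iter=1000).fit(X_clust, y)
    auc1 = roc_auc_score(y, m1.predict_proba(X_pooled)[:, 1])
    auc2 = roc_auc_score(y, m2.predict_proba(X_clust)[:, 1])
    print(f"c-index pooled: {auc1:.3f}, with cluster effects: {auc2:.3f}")
    ```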

  17. Effects of the coupling strength of a voltage probe on the conductance coefficients in a three-lead microstructure

    NASA Astrophysics Data System (ADS)

    Iida, S.

    1991-03-01

    Using statistical scattering theory, we calculate the average and the variance of the conductance coefficients at zero temperature for a small disordered metallic wire composed of three arms. Each arm is coupled at the end to a perfectly conducting lead. The disorder is modeled by a microscopic random Hamiltonian belonging to the Gaussian orthogonal ensemble. As the coupling strength of the third arm (voltage probe) is increased, the variance of the conductance coefficient of the main track changes from the universal value of the two-lead geometry to that of the three-lead geometry. The variance of the resistance coefficient is strongly affected by the coupling strength of the arm whose resistance is being measured and has a relatively weak dependence on those of the other two arms.
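
    A sketch of sampling the disorder model named here, a Gaussian orthogonal ensemble (GOE) Hamiltonian: symmetrize an i.i.d. Gaussian matrix so the diagonal variance is twice the off-diagonal variance; the dimension is an arbitrary choice.

    ```python
    import numpy as np

    rng = np.random.default_rng(16)
    n = 500
    A = rng.normal(size=(n, n))
    H = (A + A.T) / np.sqrt(2)          # real symmetric, GOE-distributed
    eig = np.linalg.eigvalsh(H)
    print(f"semicircle edges ~ +/-{2 * np.sqrt(n):.1f}: "
          f"min {eig.min():.1f}, max {eig.max():.1f}")
    ```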

  18. Online technique for detecting state of onboard fiber optic gyroscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, Zhiyong; He, Kunpeng, E-mail: pengkhe@126.com; Pang, Shuwan

    2015-02-15

    Although the angle random walk (ARW) of a fiber optic gyroscope (FOG) has been well modeled and identified before being integrated into the high-accuracy attitude control system of a satellite, aging and unexpected failures can affect the performance of the FOG after launch, resulting in variation of the ARW coefficient. Therefore, the ARW coefficient can be regarded as an indicator of the “state of health” of the FOG in some sense. The Allan variance method can be used to estimate the ARW coefficient of a FOG; however, it requires a large amount of data to be stored, and the procedure of drawing slope lines for estimation is laborious. To overcome these barriers, a weighted state-space model that directly models the ARW was established for the FOG, yielding a nonlinear state-space model. Then, a neural extended-Kalman filter algorithm was implemented to estimate and track the variation of the ARW in real time. The experimental results show that the proposed approach is valid for detecting the state of the FOG. Moreover, the proposed technique avoids the need to store large amounts of data.
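
    For contrast with the paper's recursive filter, a sketch of the classical Allan-variance route to the ARW coefficient: for white rate noise, sigma(tau) = N/sqrt(tau), so N can be read off as sigma(tau)*sqrt(tau). The gyro data below are synthetic white noise with an assumed sample rate.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    fs = 100.0                                   # sample rate (Hz)
    arw_true = 0.01                              # rad/sqrt(s)
    rate = arw_true * np.sqrt(fs) * rng.normal(size=200_000)  # white rate noise

    def allan_dev(y, m):
        # non-overlapping Allan deviation for cluster size m
        nb = y.size // m
        means = y[: nb * m].reshape(nb, m).mean(axis=1)
        return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

    for m in (10, 100, 1000):
        tau = m / fs
        n_est = allan_dev(rate, m) * np.sqrt(tau)
        print(f"tau = {tau:6.2f} s  ARW estimate = {n_est:.4f} rad/sqrt(s)")
    ```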

  19. Application of theoretical models to active and passive remote sensing of saline ice

    NASA Technical Reports Server (NTRS)

    Han, H. C.; Kong, Jin Au; Shin, Robert T.; Nghiem, Son V.; Kwok, R.

    1992-01-01

    The random medium model is used to interpret the polarimetric active and passive measurements of saline ice. The ice layer is described as a host ice medium embedded with randomly distributed inhomogeneities, and the underlying sea water is considered as a homogeneous half-space. The scatterers in the ice layer are modeled with an ellipsoidal correlation function. The orientation of the scatterers is vertically aligned and azimuthally random. The strong permittivity fluctuation theory is employed to calculate the effective permittivity and the distorted Born approximation is used to obtain the polarimetric scattering coefficients. We also calculate the thermal emissions based on the reciprocity and energy conservation principles. The effects of the random roughness at the air-ice, and ice-water interfaces are accounted for by adding the surface scattering to the volume scattering return incoherently. The above theoretical model, which has been successfully applied to analyze the radar backscatter data of the first-year sea ice near Point Barrow, AK, is used to interpret the measurements performed in the CRRELEX program.

  20. An Experimental Investigation of Hydrodynamic Forces on Circular Cylinders in Sinusoidal and Random Oscillating Flow

    NASA Astrophysics Data System (ADS)

    Longoria, Raul Gilberto

    An experimental apparatus has been developed which can be used to generate a general time-dependent planar flow across a cylinder. A mass of water enclosed with no free surface within a square cross-section tank and two spring pre-loaded pistons is oscillated using a hydraulic actuator. A circular cylinder is suspended horizontally in the tank by two X-Y force transducers used to simultaneously measure the total in-line and transverse forces. Fluid motion is measured using a differential pressure transducer for instantaneous acceleration and an LVDT for displacement. This investigation provides measurement of forces on cylinders subjected to planar fluid flow with a time (and frequency) dependence which more accurately represents the random conditions encountered in a natural ocean environment. The use of the same apparatus for both sinusoidal and random experiments provides a quantified assessment of the applicability of sinusoidal planar oscillatory flow data in offshore structure design methods. The drag and inertia coefficients for a Morison equation representation of the in-line force are presented for both sinusoidal and random flow. Comparison of the sinusoidal results is favorable with those of previous investigations. The results from the random experiments illustrate the difference in the force mechanism by contrasting the force transfer coefficients for the in-line and transverse forces. It is found that application of sinusoidal results to random hydrodynamic in-line force prediction using the Morison equation wrongly weights the drag and inertia components, and the transverse force is overpredicted. The use of random planar oscillatory flow in the laboratory, contrasted with sinusoidal planar oscillatory flow, quantifies the accepted belief that force transfer coefficients from sinusoidal flow experiments are conservative for prediction of forces on cylindrical structures subjected to random sea waves. Further analysis of the data is conducted in the frequency domain to illustrate models used for predicting the power spectral density of the in-line force, including a nonlinear describing-function method. It is postulated that the large-scale vortex activity prominent in sinusoidal oscillatory flow is subdued in random flow conditions.
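
    The force model referenced, the Morison equation for in-line force per unit length, F = 0.5*rho*Cd*D*|u|*u + rho*Cm*(pi*D^2/4)*du/dt, is easy to state in code; Cd and Cm below are generic textbook-scale values, not the coefficients fitted in this work.

    ```python
    import numpy as np

    rho, D = 1000.0, 0.05          # water density (kg/m^3), cylinder diameter (m)
    Cd, Cm = 1.2, 2.0              # drag and inertia coefficients (assumed)

    t = np.linspace(0.0, 10.0, 2001)
    u = 0.5 * np.sin(2 * np.pi * 0.2 * t)          # sinusoidal planar flow (m/s)
    dudt = np.gradient(u, t)

    f_drag = 0.5 * rho * Cd * D * np.abs(u) * u
    f_inertia = rho * Cm * (np.pi * D**2 / 4) * dudt
    f_inline = f_drag + f_inertia
    print(f"peak in-line force: {np.abs(f_inline).max():.2f} N/m")
    ```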

  1. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams, and the system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low-temperature negative-entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
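
    A sketch of the joint disorder distribution described: drawing (J, h) pairs from a bivariate Gaussian with correlation coefficient ρ, which may be positive or negative; the unit variances are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    rho = -0.5                                   # correlation coefficient
    cov = np.array([[1.0, rho], [rho, 1.0]])
    J, h = rng.multivariate_normal([0.0, 0.0], cov, size=10_000).T
    print(f"sample correlation of (J, h): {np.corrcoef(J, h)[0, 1]:+.3f}")
    ```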

  2. Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance

    NASA Astrophysics Data System (ADS)

    Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra

    2017-06-01

    In the recent past, the risk and reliability centered maintenance (RRCM) framework was introduced with a shift in methodological focus from reliability and probabilities (expected values) to reliability, uncertainty, and risk. In this paper, the authors explain a novel methodology for quantifying risk and ranking critical items to prioritize maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the RPN values of the items of a system, and the maintenance significant precipitating factors (MSPF) of the items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient deduces risk influenced by uncertainty and sensitivity, besides other factors. The third risk coefficient, called the hazardous risk coefficient, covers anticipated hazards which may occur in the future; its risk is deduced from criteria of consequences on safety, environment, maintenance, and economics, with corresponding costs for the consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient; hence random-number simulation is used to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of the critical items are then estimated. The prioritized ranking of critical items produced by the developed mathematical model for risk assessment should be useful in optimizing financial losses and the timing of maintenance actions.

  3. Comparison of classification methods for voxel-based prediction of acute ischemic stroke outcome following intra-arterial intervention

    NASA Astrophysics Data System (ADS)

    Winder, Anthony J.; Siemonsen, Susanne; Flottmann, Fabian; Fiehler, Jens; Forkert, Nils D.

    2017-03-01

    Voxel-based tissue outcome prediction in acute ischemic stroke patients is highly relevant for both clinical routine and research. Previous research has shown that features extracted from baseline multi-parametric MRI datasets have a high predictive value and can be used for the training of classifiers, which can generate tissue outcome predictions for both intravenous and conservative treatments. However, with the recent advent and popularization of intra-arterial thrombectomy treatment, novel research specifically addressing the utility of predictive classifiers for thrombectomy intervention is necessary for a holistic understanding of current stroke treatment options. The aim of this work was to develop three clinically viable tissue outcome prediction models using approximate nearest-neighbor, generalized linear model, and random decision forest approaches and to evaluate the accuracy of predicting tissue outcome after intra-arterial treatment. Therefore, the three machine learning models were trained, evaluated, and compared using datasets of 42 acute ischemic stroke patients treated with intra-arterial thrombectomy. Classifier training utilized eight voxel-based features extracted from baseline MRI datasets and five global features. Evaluation of classifier-based predictions was performed via comparison to the known tissue outcome, which was determined in follow-up imaging, using the Dice coefficient and leave-one-patient-out cross-validation. The random decision forest prediction model led to the best tissue outcome predictions with a mean Dice coefficient of 0.37. The approximate nearest-neighbor and generalized linear model performed equally suboptimally with average Dice coefficients of 0.28 and 0.27, respectively, suggesting that both non-linearity and machine learning are desirable properties of a classifier well-suited to the intra-arterial tissue outcome prediction problem.
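
    The evaluation metric used here is the Dice coefficient, Dice = 2|A ∩ B| / (|A| + |B|) on binary masks; a minimal implementation:

    ```python
    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        # overlap of two binary masks; returns 1.0 when both masks are empty
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    pred = np.zeros((10, 10), dtype=bool)
    pred[2:6, 2:6] = True
    truth = np.zeros((10, 10), dtype=bool)
    truth[3:7, 3:7] = True
    print(f"Dice = {dice(pred, truth):.2f}")    # 2*9 / (16+16) = 0.56
    ```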

  4. Functional Mixed Effects Model for Small Area Estimation.

    PubMed

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability to handle high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area-level data, and fit a varying coefficient linear mixed effect model in which the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
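
    A minimal sketch of the modeling device described, a varying coefficient expanded in a B-spline basis and estimated by least squares; the knot placement, spline degree, and data-generating model are illustrative assumptions, and the mixed-effect and MSE machinery of the paper is not reproduced.

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    rng = np.random.default_rng(13)
    n = 500
    x = rng.random(n)                         # area-level covariate in [0, 1)
    z = rng.normal(size=n)
    beta = np.sin(2 * np.pi * x)              # true varying coefficient beta(x)
    y = beta * z + rng.normal(scale=0.3, size=n)

    k = 3                                     # cubic splines
    t = np.concatenate([[0.0] * (k + 1), np.linspace(0.2, 0.8, 4), [1.0] * (k + 1)])
    nb = len(t) - k - 1                       # number of basis functions (8 here)
    B = np.column_stack([BSpline(t, np.eye(nb)[i], k)(x) for i in range(nb)])

    coef, *_ = np.linalg.lstsq(B * z[:, None], y, rcond=None)
    print(f"max |beta_hat - beta| at sample points: {np.abs(B @ coef - beta).max():.3f}")
    ```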

  5. System level analysis and control of manufacturing process variation

    DOEpatents

    Hamada, Michael S.; Martz, Harry F.; Eleswarpu, Jay K.; Preissler, Michael J.

    2005-05-31

    A computer-implemented method is provided for determining the variability of a manufacturing system having a plurality of subsystems. Each subsystem of the plurality of subsystems is characterized by signal factors, noise factors, control factors, and an output response, all having mean and variance values. Response models are then fitted to each subsystem to determine the unknown coefficients that characterize the relationship between the signal, noise, and control factors and the corresponding output response, whose mean and variance values depend on those factors. The response models for each subsystem are coupled to model the output of the manufacturing system as a whole. The coefficients of the fitted response models are randomly varied to propagate variances through the plurality of subsystems, and values of the signal factors and control factors are found to optimize the output of the manufacturing system to meet a specified criterion.
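
    A toy rendering of the propagation step described: draw response-model coefficients from their fitted sampling distributions and push each draw through two coupled subsystem models to estimate the variance of the system output. The models, means, and covariances below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(15)

    # Coefficient draws for two fitted subsystem response models (toy values)
    b1 = rng.multivariate_normal([2.0, 0.5], 0.01 * np.eye(2), size=5000)
    b2 = rng.multivariate_normal([1.0, -0.3], 0.01 * np.eye(2), size=5000)

    signal, control = 1.5, 0.8                      # candidate factor settings
    y1 = b1[:, 0] * signal + b1[:, 1] * control     # subsystem 1 response
    y2 = b2[:, 0] * y1 + b2[:, 1] * control         # subsystem 2 fed by subsystem 1
    print(f"system output: mean {y2.mean():.3f}, variance {y2.var():.4f}")
    ```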

  6. Vehicular traffic noise prediction using soft computing approach.

    PubMed

    Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek

    2016-12-01

    A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely, Generalized Linear Model, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in the Patiala city in India. The input variables include the traffic volume per hour, percentage of heavy vehicles and average speed of vehicles. The performance of the four models is compared on the basis of three criteria: coefficient of determination, mean square error and accuracy. Ten-fold cross-validation is performed to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model with the field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
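
    A sketch of the best-performing pipeline described, a Random Forest mapping (traffic volume, % heavy vehicles, average speed) to Leq with 10-fold cross-validation; the data are synthetic, not the Patiala measurements.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n = 400
    volume = rng.uniform(100, 3000, n)           # vehicles/hour
    heavy = rng.uniform(0, 40, n)                # % heavy vehicles
    speed = rng.uniform(20, 70, n)               # km/h
    leq = 10 * np.log10(volume) + 0.2 * heavy + 0.1 * speed + rng.normal(0, 1.5, n)

    X = np.column_stack([volume, heavy, speed])
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    r2 = cross_val_score(rf, X, leq, cv=10, scoring="r2")
    print(f"10-fold CV R^2: {r2.mean():.3f} +/- {r2.std():.3f}")
    ```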

  7. Perturbed effects at radiation physics

    NASA Astrophysics Data System (ADS)

    Külahcı, Fatih; Şen, Zekâi

    2013-09-01

    Perturbation methodology is applied in order to assess the behavior of the linear attenuation coefficient, mass attenuation coefficient, and cross-section when random components are present in the basic variables, such as the radiation quantities frequently used in radiation physics and chemistry. Additionally, the layer attenuation coefficient (LAC) and perturbed LAC (PLAC) are proposed for different contact materials. Perturbation methodology provides the opportunity to obtain results with random deviations from the average behavior of each variable that enters the overall mathematical expression. The basic photon intensity variation expression, the inverse exponential power law (the Beer-Lambert law), is adopted for the exposition of the perturbation method. Perturbed results are presented not only in terms of the mean but also the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in the basic variables.
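
    A minimal numerical version of the idea: perturb the attenuation coefficient in I = I0*exp(-mu*x) with a small random component and report the mean, standard deviation, and correlation of the transmitted intensity; all values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    I0, x = 1.0, 2.0                           # incident intensity, thickness (cm)
    mu = rng.normal(0.5, 0.05, size=100_000)   # perturbed attenuation coefficient (1/cm)
    I = I0 * np.exp(-mu * x)                   # Beer-Lambert transmission
    print(f"mean I = {I.mean():.4f}, std = {I.std():.4f}, "
          f"corr(mu, I) = {np.corrcoef(mu, I)[0, 1]:+.3f}")
    ```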

  8. Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro; Abgrall, Remi

    2014-11-01

    Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and analysis of variance, PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD with the PDD coefficients computed by regression. During this adaptive procedure, the PDD representation contains only a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than that of the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.

  9. Influences of sampling size and pattern on the uncertainty of correlation estimation between soil water content and its influencing factors

    NASA Astrophysics Data System (ADS)

    Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua

    2017-12-01

    In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and the correlation coefficients between SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is global random sampling, and the other six are stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. When each sampling strategy was applied, sample sizes were gradually reduced, and each sampling size contained 3000 replicates. Under each sampling size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and correlation coefficients between SWC and soil/terrain properties were calculated to quantify the accuracy and uncertainty. The results showed that the uncertainty of the estimates decreased as the sampling size increased. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure estimated correlation coefficients with REs and CVs ≤10%. Among all sampling strategies, reducing the number of sites on the middle slope had the least influence on the estimation of the hillslope mean SWC and the correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure estimated correlation coefficients with REs and CVs ≤10%. This suggests that when designing SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. Findings of this study will be useful for optimal SWC sampling design.
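
    A sketch of the resampling experiment described: repeatedly draw n of N sites, estimate a correlation coefficient, and summarize accuracy by relative error (RE) and coefficient of variation (CV); the "field" below is synthetic, not the tea + bamboo hillslope data.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    N = 100
    swc = rng.normal(30.0, 4.0, N)                        # soil water content (%)
    clay = 0.6 * (swc - 30.0) + rng.normal(0.0, 3.0, N)   # a correlated soil property
    true_r = np.corrcoef(swc, clay)[0, 1]

    for n in (12, 30, 72):
        est = []
        for _ in range(3000):                             # 3000 replicates per size
            idx = rng.choice(N, size=n, replace=False)
            est.append(np.corrcoef(swc[idx], clay[idx])[0, 1])
        est = np.asarray(est)
        re = np.mean(np.abs(est - true_r)) / abs(true_r) * 100
        cv = est.std() / abs(est.mean()) * 100
        print(f"n = {n:2d}: RE = {re:5.1f}%, CV = {cv:5.1f}%")
    ```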

  10. Relaxation dynamics of maximally clustered networks

    NASA Astrophysics Data System (ADS)

    Klaise, Janis; Johnson, Samuel

    2018-01-01

    We study the relaxation dynamics of fully clustered networks (maximal number of triangles) to an unclustered state under two different edge dynamics—the double-edge swap, corresponding to degree-preserving randomization of the configuration model, and single edge replacement, corresponding to full randomization of the Erdős-Rényi random graph. We derive expressions for the time evolution of the degree distribution, edge multiplicity distribution and clustering coefficient. We show that under both dynamics networks undergo a continuous phase transition in which a giant connected component is formed. We calculate the position of the phase transition analytically using the Erdős-Rényi phenomenology.
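
    A sketch of the degree-preserving dynamic studied: start from a highly clustered graph and watch the clustering coefficient relax under repeated double-edge swaps (via networkx; the initial clique collection is a convenient stand-in for the paper's maximally clustered ensemble).

    ```python
    import networkx as nx

    G = nx.caveman_graph(20, 5)                  # 20 disjoint 5-cliques, clustering 1
    print(f"initial clustering: {nx.average_clustering(G):.3f}")
    for step in range(5):
        nx.double_edge_swap(G, nswap=100, max_tries=10_000)  # degree-preserving swaps
        print(f"after {100 * (step + 1):4d} swaps: {nx.average_clustering(G):.3f}")
    ```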

  11. Automatic Classification of Aerial Imagery for Urban Hydrological Applications

    NASA Astrophysics Data System (ADS)

    Paul, A.; Yang, C.; Breitkopf, U.; Liu, Y.; Wang, Z.; Rottensteiner, F.; Wallner, M.; Verworn, A.; Heipke, C.

    2018-04-01

    In this paper we investigate the potential of automatic supervised classification for urban hydrological applications. In particular, we contribute to runoff simulations using hydrodynamic urban drainage models. In order to assess whether the capacity of the sewers is sufficient to avoid surcharge within certain return periods, precipitation is transformed into runoff. The transformation of precipitation into runoff requires knowledge about the proportion of drainage-effective areas and their spatial distribution in the catchment area. Common simulation methods use the coefficient of imperviousness as an important parameter to estimate the overland flow, which subsequently contributes to the pipe flow. The coefficient of imperviousness is the percentage of area covered by impervious surfaces such as roofs or road surfaces. It is still common practice to assign the coefficient of imperviousness for each particular land parcel manually by visual interpretation of aerial images. Based on the classification results of this imagery, we contribute to an objective automatic determination of the coefficient of imperviousness. In this context we compare two classification techniques: Random Forests (RF) and Conditional Random Fields (CRF). Experiments performed on an urban test area show good results and confirm that the automated derivation of the coefficient of imperviousness, apart from being more objective and, thus, reproducible, delivers more accurate results than the interactive estimation. We achieve an overall accuracy of about 85 % for both classifiers. The root mean square error of the differences of the coefficient of imperviousness compared to the reference is 4.4 % for the CRF-based classification, and 3.8 % for the RF-based classification.

  12. Diffusion and mobility of atomic particles in a liquid

    NASA Astrophysics Data System (ADS)

    Smirnov, B. M.; Son, E. E.; Tereshonok, D. V.

    2017-11-01

    The diffusion coefficient of a test atom or molecule in a liquid is determined for the mechanism in which the displacement of the test molecule results from the vibrations and motion of the surrounding liquid molecules and of the test particle itself. This leads to a random change in the coordinate of the test molecule, which eventually results in the diffusive motion of the test particle in space. Two model parameters for the interaction of a particle with a liquid are used to find the activation energy of the diffusion process under consideration: the gas-kinetic cross section for scattering of test molecules in the parent gas and the Wigner-Seitz radius for test molecules. Within this approach, we have calculated the diffusion coefficients of atoms and molecules in water; based on experimental data, we have constructed the dependence of the activation energy for the diffusion of test molecules in water on the interaction parameter, as well as the temperature dependence of the diffusion coefficient of atoms or molecules in water within the models considered. The statistically averaged difference between the activation energies for the diffusion coefficients of different test molecules in water, calculated with each of the presented models, does not exceed 10% of the diffusion coefficient itself. We have considered the diffusion of clusters in water and present the dependence of the diffusion coefficient on cluster size. The accuracy of the presented formulas for the diffusion coefficient of atomic particles in water is estimated to be 50%.
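
    The activation-energy picture invoked here is Arrhenius-like, D(T) = D0 * exp(-Ea / (kB * T)); the prefactor and activation energy below are illustrative, not the values extracted for water in the paper.

    ```python
    import numpy as np

    kB = 1.380649e-23                     # Boltzmann constant (J/K)
    D0 = 1.0e-7                           # illustrative prefactor (m^2/s)
    Ea = 0.18 * 1.602e-19                 # illustrative activation energy (~0.18 eV)

    for T in (280.0, 300.0, 320.0):
        D = D0 * np.exp(-Ea / (kB * T))
        print(f"T = {T:.0f} K: D = {D:.2e} m^2/s")
    ```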

  13. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods.

    PubMed

    Kucza, Witold

    2013-07-25

    Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, have been applied to determine diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses that are available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
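
    A sketch of the random-walk (stochastic) variant described: walkers advect along the axis with the Poiseuille profile u(r) = 2U(1 - (r/R)^2) and take Gaussian transverse diffusion steps, reflecting at the wall; parameters are illustrative, and the genetic-algorithm/Levenberg-Marquardt fitting step is not included.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    U, R, Dm = 1e-3, 2.5e-4, 5e-10        # mean velocity, tube radius, diffusivity (SI)
    n, dt, steps = 5000, 1e-3, 2000

    # start walkers uniformly over the cross-section
    r0 = R * np.sqrt(rng.random(n))
    th = 2 * np.pi * rng.random(n)
    y, z = r0 * np.cos(th), r0 * np.sin(th)
    x = np.zeros(n)

    sig = np.sqrt(2 * Dm * dt)
    for _ in range(steps):
        x += 2 * U * (1 - (y**2 + z**2) / R**2) * dt   # axial Poiseuille advection
        y += sig * rng.normal(size=n)
        z += sig * rng.normal(size=n)
        r = np.hypot(y, z)
        out = r > R                                     # reflect walkers at the wall
        scale = (2 * R - r[out]) / r[out]
        y[out] *= scale
        z[out] *= scale

    print(f"mean displacement {x.mean()*1e3:.2f} mm, axial spread {x.std()*1e3:.3f} mm")
    ```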

  14. A distribution model for the aerial application of granular agricultural particles

    NASA Technical Reports Server (NTRS)

    Fernandes, S. T.; Ormsbee, A. I.

    1978-01-01

    A model is developed to predict the shape of the distribution of granular agricultural particles applied by aircraft. The particle is assumed to have a random size and shape, and the model includes the effects of air resistance, distributor geometry and aircraft wake. General requirements for maintaining similarity of the distribution in scale-model tests are derived, with attention to the problem of a nongeneral drag law. It is shown that if the mean and variance of the particle diameter and density are scaled according to the scaling laws governing the system, the shape of the distribution will be preserved. Distributions are calculated numerically and show the effects of random initial lateral position, particle size and drag coefficient. A listing of the computer code is included.

  15. Edge union of networks on the same vertex set

    NASA Astrophysics Data System (ADS)

    Loe, Chuan Wen; Jeldtoft Jensen, Henrik

    2013-06-01

    Random network generators such as Erdős-Rényi, Watts-Strogatz and Barabási-Albert models are used as models to study real-world networks. Let G1(V, E1) and G2(V, E2) be two such networks on the same vertex set V. This paper studies the degree distribution and clustering coefficient of the resultant networks, G(V, E1∪E2).
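
    A sketch of the construction studied: overlay two random graphs on the same vertex set and inspect the degree and clustering of G(V, E1 ∪ E2); the generator choices below are arbitrary examples.

    ```python
    import networkx as nx

    n = 1000
    G1 = nx.erdos_renyi_graph(n, 0.004, seed=1)
    G2 = nx.watts_strogatz_graph(n, 4, 0.1, seed=2)
    G = nx.compose(G1, G2)                      # same nodes, union of edge sets

    deg = [d for _, d in G.degree()]
    print(f"mean degree of the union: {sum(deg) / n:.2f}")
    print(f"clustering: G1 {nx.average_clustering(G1):.3f}, "
          f"G2 {nx.average_clustering(G2):.3f}, union {nx.average_clustering(G):.3f}")
    ```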

  16. Model-based VQ for image data archival, retrieval and distribution

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

    An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is a Laplacian distribution with mean lambda, computed from a sample of the input image. A Laplacian distribution with mean lambda is generated with a uniform random number generator. These random numbers are grouped into vectors, which are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean, lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
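
    A sketch of the codebook recipe described: draw Laplacian vectors, weight their DCT coefficients, and invert the transform; the weight vector here is a placeholder, not the paper's HVS weight matrix.

    ```python
    import numpy as np
    from scipy.fft import dct, idct

    rng = np.random.default_rng(14)
    lam = 4.0                                          # scale estimated from the image
    vectors = rng.laplace(scale=lam, size=(256, 16))   # 256 codewords of length 16

    w = 1.0 / (1.0 + np.arange(16))                    # placeholder perceptual weights
    codebook = idct(dct(vectors, axis=1, norm="ortho") * w, axis=1, norm="ortho")
    print(codebook.shape, f"codeword std: {codebook.std():.2f}")
    ```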

  17. Using Fit Indexes to Select a Covariance Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Liu, Siwei; Rovine, Michael J.; Molenaar, Peter C. M.

    2012-01-01

    This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error…

  18. Speech Enhancement Using Gaussian Scale Mixture Models

    PubMed Central

    Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.

    2011-01-01

    This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM, and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided higher signal-to-noise ratios (SNR), and those reconstructed from the estimated log-spectra produced lower word recognition error rates because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139

  19. Numerical approach of the injection molding process of fiber-reinforced composite with considering fiber orientation

    NASA Astrophysics Data System (ADS)

    Nguyen Thi, T. B.; Yokoyama, A.; Ota, K.; Kodama, K.; Yamashita, K.; Isogai, Y.; Furuichi, K.; Nonomura, C.

    2014-05-01

    One of the most important challenges in the injection molding of short-glass-fiber/thermoplastic composite parts is the ability to predict the fiber orientation, since it controls the mechanical and physical properties of the final parts. To predict the fiber orientation in concentrated suspensions, Folgar and Tucker added to the Jeffery equation a diffusive term, which introduces a phenomenological coefficient for modeling the randomizing effect of the mechanical interactions between the fibers. Their experiments indicated that this coefficient depends on the fiber volume fraction and aspect ratio. However, a definition of the fiber interaction coefficient, which is necessary for fiber orientation simulations, has not yet been established. Consequently, this study proposes a developed fiber interaction model into which a fiber dynamics simulation has been introduced in order to obtain a global fiber interaction coefficient. The coefficient is assumed to be a function of the fiber concentration, aspect ratio, and angular velocity. The proposed model was incorporated into the computer-aided engineering simulation package C-Mold. Short-glass-fiber/polyamide-6 composites were produced by injection molding with fiber weight concentrations of 30 wt.%, 50 wt.%, and 70 wt.%. The physical properties of these composites were examined, and their fiber orientation distributions were measured with micro-computed-tomography (μ-CT) equipment. The simulation results showed good agreement with the experimental results.

  20. Analysis of a Split-Plot Experimental Design Applied to a Low-Speed Wind Tunnel Investigation

    NASA Technical Reports Server (NTRS)

    Erickson, Gary E.

    2013-01-01

    A procedure to analyze a split-plot experimental design featuring two input factors, two levels of randomization, and two error structures in a low-speed wind tunnel investigation of a small-scale model of a fighter airplane configuration is described in this report. Standard commercially-available statistical software was used to analyze the test results obtained in a randomization-restricted environment often encountered in wind tunnel testing. The input factors were differential horizontal stabilizer incidence and the angle of attack. The response variables were the aerodynamic coefficients of lift, drag, and pitching moment. Using split-plot terminology, the whole plot, or difficult-to-change, factor was the differential horizontal stabilizer incidence, and the subplot, or easy-to-change, factor was the angle of attack. The whole plot and subplot factors were both tested at three levels. Degrees of freedom for the whole plot error were provided by replication in the form of three blocks, or replicates, which were intended to simulate three consecutive days of wind tunnel facility operation. The analysis was conducted in three stages, which yielded the estimated mean squares, multiple regression function coefficients, and corresponding tests of significance for all individual terms at the whole plot and subplot levels for the three aerodynamic response variables. The estimated regression functions included main effects and the two-factor interaction for the lift coefficient; main effects, the two-factor interaction, and quadratic effects for the drag coefficient; and only main effects for the pitching moment coefficient.

  1. The estimation of selection coefficients in Afrikaners: Huntington disease, porphyria variegata, and lipoid proteinosis.

    PubMed Central

    Stine, O C; Smith, K D

    1990-01-01

    The effects of mutation, migration, random drift, and selection on the change in frequency of the alleles associated with Huntington disease, porphyria variegata, and lipoid proteinosis have been assessed in the Afrikaner population of South Africa. Although admixture cannot be completely discounted, it was possible to exclude migration and new mutation as major sources of changes in the frequency of these alleles by limiting analyses to pedigrees descendant from founding families. Calculations which overestimated the possible effect of random drift demonstrated that drift did not account for the observed changes in gene frequencies. Therefore these changes must have been caused by natural selection, and a coefficient of selection was estimated for each trait. For the rare, dominant, deleterious allele associated with Huntington disease, the coefficient of selection was estimated to be .34, indicating that this allele has a selective disadvantage, contrary to some recent studies. For the presumed dominant and probably deleterious allele associated with porphyria variegata, the coefficient of selection lies between .07 and .02. The coefficient of selection for the rare, clinically recessive allele associated with lipoid proteinosis was estimated to be .07. Calculations based on a model system indicate that the observed decrease in allele frequency cannot be explained solely on the basis of selection against the homozygote. Thus, this may be an example of a pleiotropic gene which has a dominant effect in terms of selection even though its known clinical effect is recessive. PMID:2137963
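
    A worked version of the deterministic selection recurrence underlying such estimates: for a dominant deleterious allele with carrier fitness 1 - s, the allele frequency q updates as q' = q(1 - s) / (1 - s q(2 - q)); with s = 0.34 (the estimate reported here for the Huntington disease allele), a rare allele declines by roughly a third per generation.

    ```python
    # Deterministic decline of a rare dominant deleterious allele; the starting
    # frequency q = 0.01 is an illustrative value, s = 0.34 is quoted above.
    q, s = 0.01, 0.34
    for gen in range(0, 21, 5):
        print(f"generation {gen:2d}: q = {q:.6f}")
        for _ in range(5):
            q = q * (1 - s) / (1 - s * q * (2 - q))
    ```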

  3. Robust spectral-domain optical coherence tomography speckle model and its cross-correlation coefficient analysis

    PubMed Central

    Liu, Xuan; Ramella-Roman, Jessica C.; Huang, Yong; Guo, Yuan; Kang, Jin U.

    2013-01-01

    In this study, we proposed a generic speckle simulation for the optical coherence tomography (OCT) signal, constructed by convolving the point spread function (PSF) of the OCT system with a numerically synthesized random sample field. We validated our model and used the simulation method to study the statistical properties of the cross-correlation coefficients (XCC) between A-scans, which our group has recently applied to transverse motion analysis. The simulation results show that oversampling is essential for accurate motion tracking; exponential decay of the OCT signal leads to an underestimate of motion, which can be corrected; and lateral heterogeneity of the sample leads to an overestimate of motion for the few pixels corresponding to structural boundaries. PMID:23456001
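
    A sketch of the simulation recipe described: convolve an axial PSF with a synthesized complex random scatterer field, take magnitudes to form speckled A-scans, and compute XCC between neighbors. The PSF width and sampling are illustrative; because the columns here are statistically independent, the XCC is near zero, whereas oversampled A-scans sharing scatterers would correlate strongly.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    nz, nx = 512, 64
    field = rng.normal(size=(nz, nx)) + 1j * rng.normal(size=(nz, nx))  # scatterers

    z = np.arange(-16, 17)
    psf = np.exp(-z**2 / (2 * 4.0**2))          # Gaussian axial PSF (assumed width)

    speckle = np.empty((nz, nx))
    for j in range(nx):                         # one synthetic A-scan per column
        speckle[:, j] = np.abs(np.convolve(field[:, j], psf, mode="same"))

    xcc = [np.corrcoef(speckle[:, j], speckle[:, j + 1])[0, 1] for j in range(nx - 1)]
    print(f"mean XCC between independent A-scans: {np.mean(xcc):+.3f}")
    ```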

  4. Influence of Dissipative Particle Dynamics parameters and wall models on planar micro-channel flows

    NASA Astrophysics Data System (ADS)

    Wang, Yuyi; She, Jiangwei; Zhou, Zhe-Wei; microflow Group Team

    2017-11-01

    Dissipative Particle Dynamics (DPD) is a very effective approach to simulating mesoscale hydrodynamics. The influences of solid boundaries and of the DPD parameters are typically very strong in DPD simulations. The present work studies a micro-channel Poiseuille flow. Taking a neutron scattering experiment and a molecular dynamics simulation result as benchmarks, the DPD results for the density distribution and velocity profile are systematically studied. The influences of different levels of coarse-graining, the number densities of wall and fluid, the conservative force coefficients, the random and dissipative force coefficients, different wall models, and reflective boundary conditions are discussed. Some mechanisms behind these influences are discussed, and the artifacts in the simulation are identified against the benchmarks. Chinese Natural Science Foundation (A020405).

  5. Detonation initiation in a model of explosive: Comparative atomistic and hydrodynamics simulations

    NASA Astrophysics Data System (ADS)

    Murzov, S. A.; Sergeev, O. V.; Dyachkov, S. A.; Egorova, M. S.; Parshikov, A. N.; Zhakhovsky, V. V.

    2016-11-01

    Here we extend consistent simulations to reactive materials using the example of the AB model explosive. The kinetic model of the chemical reactions observed in a molecular dynamics (MD) simulation of a self-sustained detonation wave can be used in hydrodynamic simulation of detonation initiation. Kinetic coefficients are obtained by minimizing the difference between the species profiles calculated from the kinetic model and those observed in MD simulations of isochoric thermal decomposition, with the help of the downhill simplex method combined with a random walk in the multidimensional space of the kinetic model's fitting parameters.
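
    A sketch of the fitting step named (downhill simplex minimization of the model-data misfit), on a toy first-order decomposition profile rather than the AB-model species curves.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(12)
    t = np.linspace(0.0, 5.0, 50)
    k_true = 1.3
    data = np.exp(-k_true * t) + rng.normal(scale=0.02, size=t.size)  # noisy [A](t)

    def misfit(p):
        # sum of squared differences between the kinetic model and the "MD" profile
        return np.sum((np.exp(-p[0] * t) - data) ** 2)

    res = minimize(misfit, x0=[0.5], method="Nelder-Mead")
    print(f"fitted rate constant k = {res.x[0]:.3f} (true {k_true})")
    ```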

  6. Robust, Adaptive Functional Regression in Functional Mixed Model Framework.

    PubMed

    Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S

    2011-09-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.

  7. Robust, Adaptive Functional Regression in Functional Mixed Model Framework

    PubMed Central

    Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.

    2012-01-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets. PMID:22308015

  8. Drag Coefficient Variability and Thermospheric Models

    NASA Astrophysics Data System (ADS)

    Moe, Kenneth

    Satellite drag coefficients depend upon a variety of factors: the shape of the satellite, its altitude, the eccentricity of its orbit, the temperature and mean molecular mass of the ambient atmosphere, and the time in the sunspot cycle. At altitudes where the mean free path of the atmospheric molecules is large compared to the dimensions of the satellite, the drag coefficients can be determined from the theory of free-molecule flow. The dependence on altitude is caused by the concentration of atomic oxygen, which plays an important role through its ability to adsorb on the satellite surface and thereby affect the energy loss of molecules striking the surface. The eccentricity of the orbit determines the satellite velocity at perigee, and therefore the energy of the incident molecules relative to the energy of adsorption of oxygen atoms on the surface. The temperature of the ambient atmosphere determines the extent to which the random thermal motion of the molecules influences the momentum transfer to the satellite. The time in the sunspot cycle affects the ambient temperature as well as the concentration of atomic oxygen at a particular altitude. Tables and graphs will be used to illustrate the variability of drag coefficients. Before there were any measurements of gas-surface interactions in orbit, Izakov and Cook independently made an excellent estimate that the drag coefficient of satellites of compact shape would be 2.2. That numerical value, independent of altitude, was used by Jacchia to construct his model from the early measurements of satellite drag. Consequently, there is an altitude-dependent bias in the model. From the sparse orbital experiments that have been done, we know that the molecules which strike satellite surfaces rebound in a diffuse angular distribution with an energy loss given by the energy accommodation coefficient. As more evidence accumulates on the energy loss, more realistic drag coefficients are being calculated. These improved drag coefficients help evaluate the biases in present models. Moreover, they make possible the derivation of accurate densities from accelerometer measurements.

  9. Time-delayed feedback control of diffusion in random walkers.

    PubMed

    Ando, Hiroyasu; Takehara, Kohta; Kobayashi, Miki U

    2017-07-01

    Time delay in general leads to instability in some systems, while specific feedback with delay can control fluctuated motion in nonlinear deterministic systems to a stable state. In this paper, we consider a stochastic process, i.e., a random walk, and observe its diffusion phenomenon with time-delayed feedback. As a result, the diffusion coefficient decreases with increasing delay time. We analytically illustrate this suppression of diffusion by using stochastic delay differential equations and justify the feasibility of this suppression by applying time-delayed feedback to a molecular dynamics model.
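
    The suppression of diffusion by delayed feedback is easy to reproduce numerically. The sketch below assumes one simple feedback form, a restoring term -gain*(x_t - x_{t-tau}) added to a +/-1 random walk (the paper's scheme may differ in detail), and estimates the diffusion coefficient from the long-time mean squared displacement.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def diffusion_coefficient(tau, gain=0.5, steps=5000, walkers=400):
        """Estimate D from the mean squared displacement of +/-1 random
        walkers subject to the assumed feedback -gain*(x_t - x_{t-tau})."""
        x = np.zeros((walkers, steps + 1))
        for t in range(steps):
            noise = rng.choice([-1.0, 1.0], size=walkers)
            delayed = x[:, t - tau] if t >= tau else 0.0
            x[:, t + 1] = x[:, t] + noise - gain * (x[:, t] - delayed)
        return np.mean(x[:, -1] ** 2) / (2.0 * steps)   # <x^2> ~ 2 D t in 1-D

    for tau in (1, 5, 10, 20):
        print(f"tau = {tau:3d}   D = {diffusion_coefficient(tau):.3f}")
    ```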

  10. Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; L'Esperance, A.

    2017-01-01

    A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.

  11. Testing a single regression coefficient in high dimensional linear models

    PubMed Central

    Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling

    2017-01-01

    In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668
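
    A minimal sketch of the screening-then-OLS idea follows. It is an illustration under simplifying assumptions (a fixed number of screened controls, homoscedastic errors), not the authors' exact CPS estimator: keep the predictors most correlated with the target covariate as controls, fit a low-dimensional OLS, and read off the z-statistic for the target coefficient.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 100, 500                      # p >> n: classical t-test unavailable
    X = rng.normal(size=(n, p))
    X[:, 1] += 0.7 * X[:, 0]             # predictor 1 correlates with the target
    beta = np.zeros(p)
    beta[0], beta[1] = 1.0, 0.5
    y = X @ beta + rng.normal(size=n)

    target, d = 0, 5
    # Screening step: retain the d predictors most correlated with X[:, target].
    others = np.array([j for j in range(p) if j != target])
    corr = np.abs([np.corrcoef(X[:, target], X[:, j])[0, 1] for j in others])
    controls = others[np.argsort(corr)[-d:]]

    # Low-dimensional OLS of y on the target covariate plus screened controls.
    Z = np.column_stack([np.ones(n), X[:, target], X[:, controls]])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ coef
    sigma2 = resid @ resid / (n - Z.shape[1])
    cov = sigma2 * np.linalg.inv(Z.T @ Z)
    z_stat = coef[1] / np.sqrt(cov[1, 1])
    print(f"estimate = {coef[1]:.3f}, z = {z_stat:.2f}")
    ```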

  12. Testing a single regression coefficient in high dimensional linear models.

    PubMed

    Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling

    2016-11-01

    In linear regression models with high dimensional data, the classical z -test (or t -test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z -test to assess the significance of each covariate. Based on the p -value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively.

  13. A Portuguese value set for the SF-6D.

    PubMed

    Ferreira, Lara N; Ferreira, Pedro L; Pereira, Luis N; Brazier, John; Rowen, Donna

    2010-08-01

    The SF-6D is a preference-based measure of health derived from the SF-36 that can be used for cost-effectiveness analysis using cost-per-quality-adjusted-life-year analysis. This study seeks to estimate a system of weights for the SF-6D for Portugal and to compare the results with the UK system weights. A sample of 55 health states defined by the SF-6D has been valued by a representative random sample of the Portuguese population, stratified by sex and age (n = 140), using the Standard Gamble (SG). Several models are estimated at both the individual and aggregate levels for predicting health-state valuations. Models with main effects, with interaction effects, and with the constant forced to unity are presented. Random effects (RE) models are estimated using generalized least squares (GLS) regressions. Generalized estimating equations (GEE) are used to estimate RE models with the constant forced to unity. Estimations at the individual level were performed using 630 health-state valuations. Alternative functional forms are considered to account for the skewed distribution of health-state valuations. The models are analyzed in terms of their coefficients, overall fit, and ability to predict the SG values. The RE models estimated using GLS and through GEE produce significant coefficients, which are robust across model specifications. However, there are concerns regarding some inconsistent estimates, and so parsimonious consistent models were estimated. There is evidence of underprediction in some states assigned to poor health. The results are consistent with the UK results. The models estimated provide preference-based quality-of-life weights for the Portuguese population when health status data have been collected using the SF-36. Although the sample was randomly drawn, the findings should be treated with caution given the small sample size, even though the models have been estimated at the individual level.

  14. Backward deletion to minimize prediction errors in models from factorial experiments with zero to six center points

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1980-01-01

    Population model coefficients were chosen to simulate a saturated 2 to the fourth power fixed effects experiment having an unfavorable distribution of relative values. Using random number studies, deletion strategies were compared that were based on the F distribution, on an order statistics distribution of Cochran's, and on a combination of the two. Results of the comparisons and a recommended strategy are given.

  15. Ensemble Solute Transport in 2-D Operator-Stable Random Fields

    NASA Astrophysics Data System (ADS)

    Monnig, N. D.; Benson, D. A.

    2006-12-01

    The heterogeneous velocity field that exists at many scales in an aquifer will typically cause a dissolved solute plume to grow at a rate faster than Fick's Law predicts. Some statistical model must be adopted to account for the aquifer structure that engenders the velocity heterogeneity. A fractional Brownian motion (fBm) model has been shown to create the long-range correlation that can produce continually faster-than-Fickian plume growth. Previous fBm models have assumed isotropic scaling (defined here by a scalar Hurst coefficient). Motivated by field measurements of aquifer hydraulic conductivity, recent techniques were developed to construct random fields with anisotropic scaling with a self-similarity parameter that is defined by a matrix. The growth of ensemble plumes is analyzed for transport through 2-D "operator-stable" fBm hydraulic conductivity (K) fields. Both the longitudinal and transverse Hurst coefficients are important to both plume growth rates and the timing and duration of breakthrough. Smaller Hurst coefficients in the transverse direction lead to more "continuity" or stratification in the direction of transport. The result is continually faster-than-Fickian growth rates, highly non-Gaussian ensemble plumes, and a longer tail early in the breakthrough curve. Contrary to some analytic stochastic theories for monofractal K fields, the plume growth rate never exceeds Mercado's [1967] purely stratified aquifer growth rate of plume apparent dispersivity proportional to mean distance. Apparent super-Mercado growth must be the result of other factors, such as larger plumes corresponding to either a larger initial plume size or greater variance of the ln(K) field.

  16. Transport of Charged Particles in Turbulent Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Parashar, T.; Subedi, P.; Sonsrettee, W.; Blasi, P.; Ruffolo, D. J.; Matthaeus, W. H.; Montgomery, D.; Chuychai, P.; Dmitruk, P.; Wan, M.; Chhiber, R.

    2017-12-01

    Magnetic fields permeate the Universe. They are found in planets, stars, galaxies, and the intergalactic medium. The magnetic fields found in these astrophysical systems are usually chaotic, disordered, and turbulent. The investigation of the transport of cosmic rays in magnetic turbulence is a subject of considerable interest. One of the important aspects of cosmic ray transport is to understand their diffusive behavior and to calculate the diffusion coefficient in the presence of these turbulent fields. Research has most frequently concentrated on determining the diffusion coefficient in the presence of a mean magnetic field. Here, we will particularly focus on calculating diffusion coefficients of charged particles and magnetic field lines in a fully three-dimensional isotropic turbulent magnetic field with no mean field, which may be pertinent to many astrophysical situations. For charged particles in isotropic turbulence we identify different ranges of particle energy depending upon the ratio of the Larmor radius of the charged particle to the characteristic outer length scale of the turbulence. Different theoretical models are proposed to calculate the diffusion coefficient, each applicable to a distinct range of particle energies. The theoretical ideas are tested against results of detailed numerical experiments using Monte-Carlo simulations of particle propagation in stochastic magnetic fields. We also discuss two different methods of generating random magnetic fields to study charged particle propagation using numerical simulation. One method is the usual way of generating random fields with a specified power law in wavenumber space, using Gaussian random variables. Turbulence, however, is non-Gaussian, with variability that comes in bursts called intermittency. We therefore devise a way to generate synthetic intermittent fields which have many properties of realistic turbulence. Possible applications of such synthetically generated intermittent fields are discussed.
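
    The first field-generation method mentioned above (a specified power law in wavenumber with Gaussian random variables) is straightforward to sketch. The example below builds one component of a 2-D synthetic field as a proxy; the spectral index of -11/3 and the grid size are illustrative choices, and a full 3-D, divergence-free implementation would require extra bookkeeping.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def gaussian_random_field(n=256, spectral_index=-11.0 / 3.0):
        """One component of a synthetic field with an isotropic power-law
        spectrum P(k) ~ k**spectral_index and Gaussian random phases."""
        k = np.fft.fftfreq(n)
        kx, ky = np.meshgrid(k, k, indexing="ij")
        kmag = np.sqrt(kx**2 + ky**2)
        kmag[0, 0] = np.inf                      # suppress the mean (k = 0) mode
        amplitude = kmag ** (spectral_index / 2.0)
        phases = np.exp(2j * np.pi * rng.random((n, n)))
        field = np.fft.ifft2(amplitude * phases).real
        return field / field.std()               # normalize to unit variance

    b = gaussian_random_field()
    print(b.shape, b.std())
    ```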

  17. Resonance energy transfer process in nanogap-based dual-color random lasing

    NASA Astrophysics Data System (ADS)

    Shi, Xiaoyu; Tong, Junhua; Liu, Dahe; Wang, Zhaona

    2017-04-01

    The resonance energy transfer (RET) process between Rhodamine 6G and oxazine in nanogap-based random systems is systematically studied by revealing the variations and fluctuations of RET coefficients with pump power density. Three working regions (stable fluorescence, dynamic laser, and stable laser) are thus demonstrated in the dual-color random systems. The stable RET coefficients in the fluorescence and lasing regions are generally different and greatly dependent on the donor concentration and the donor-acceptor ratio. These results may provide a way to reveal the regularities of the energy distribution in the random system and to design tunable multi-color coherent random lasers for colorful imaging.

  18. Using Multidimensional Rasch Analysis to Validate the Chinese Version of the Motivated Strategies for Learning Questionnaire (MSLQ-CV)

    ERIC Educational Resources Information Center

    Lee, John Chi-Kin; Zhang, Zhonghua; Yin, Hongbiao

    2010-01-01

    This article used the multidimensional random coefficients multinomial logit model to examine the construct validity and detect the substantial differential item functioning (DIF) of the Chinese version of motivated strategies for learning questionnaire (MSLQ-CV). A total of 1,354 Hong Kong junior high school students were administered the…

  19. Extrapolating intensified forest inventory data to the surrounding landscape using landsat

    Treesearch

    Evan B. Brooks; John W. Coulston; Valerie A. Thomas; Randolph H. Wynne

    2015-01-01

    In 2011, a collection of spatially intensified plots was established on three of the Experimental Forests and Ranges (EFRs) sites with the intent of facilitating FIA program objectives for regional extrapolation. Characteristic coefficients from harmonic regression (HR) analysis of associated Landsat stacks are used as inputs into a conditional random forests model to...

  20. Reliability of Total Test Scores When Considered as Ordinal Measurements

    ERIC Educational Resources Information Center

    Biswas, Ajoy Kumar

    2006-01-01

    This article studies the ordinal reliability of (total) test scores. This study is based on a classical-type linear model of observed score (X), true score (T), and random error (E). Based on the idea of Kendall's tau-a coefficient, a measure of ordinal reliability for small-examinee populations is developed. This measure is extended to large…

  1. Quantitative broadband ultrasonic backscatter - An approach to nondestructive evaluation in acoustically inhomogeneous materials

    NASA Technical Reports Server (NTRS)

    O'Donnell, M.; Miller, J. G.

    1981-01-01

    The use of a broadband backscatter technique to obtain the frequency dependence of the longitudinal-wave ultrasonic backscatter coefficient from a collection of scatterers in a solid is investigated. Measurements of the backscatter coefficient were obtained over a range of the product of ultrasonic wave vector magnitude and glass sphere radius between 0.1 and 3.0, from model systems consisting of dilute suspensions of randomly distributed crown glass spheres in hardened polyester resin. The results of these measurements were in good agreement with theoretical prediction. Consequently, broadband measurements of the ultrasonic backscatter coefficient may represent a useful approach toward characterizing the physical properties of scatterers in intrinsically inhomogeneous materials such as composites, metals, and ceramics, and may represent an approach toward nondestructive evaluation of these materials.

  2. Polarimetric scattering from layered media with multiple species of scatterers

    NASA Technical Reports Server (NTRS)

    Nghiem, S. V.; Kwok, R.; Yueh, S. H.; Kong, J. A.; Hsu, C. C.; Tassoudji, M. A.; Shin, R. T.

    1995-01-01

    Geophysical media are usually heterogeneous and contain multiple species of scatterers. In this paper a model is presented to calculate effective permittivities and polarimetric backscattering coefficients of multispecies-layered media. The same physical description is consistently used in the derivation of both permittivities and scattering coefficients. The strong permittivity fluctuation theory is extended to account for the multiple species of scatterers with a general ellipsoidal shape whose orientations are randomly distributed. Under the distorted Born approximation, polarimetric scattering coefficients are obtained. These calculations are applicable to the special cases of spheroidal and spherical scatterers. The model is used to study effects of scatterer shapes and multispecies mixtures on polarimetric signatures of heterogeneous media. The multispecies model accounts for moisture content in scattering media such as snowpack on an ice sheet. The results indicate a high sensitivity of backscatter to moisture, with a stronger dependence for drier snow, and show that ice grain size is important to the backscatter. For frost-covered saline ice, model results for bare ice are compared with measured data at C band, and the frost flower formation is then simulated with a layer of fanlike ice crystals, including brine infiltration, over a rough interface. The results with the frost cover suggest a significant increase in scattering coefficients and a polarimetric signature closer to isotropic characteristics compared to the thin saline ice case.

  3. Disordered quivers and cold horizons

    DOE PAGES

    Anninos, Dionysios; Anous, Tarek; Denef, Frederik

    2016-12-15

    We analyze the low temperature structure of a supersymmetric quiver quantum mechanics with randomized superpotential coefficients, treating them as quenched disorder. These theories describe features of the low energy dynamics of wrapped branes, which in large number backreact into extremal black holes. We show that the low temperature theory, in the limit of a large number of bifundamentals, exhibits a time reparametrization symmetry as well as a specific heat linear in the temperature. Both these features resemble the behavior of black hole horizons in the zero temperature limit. We demonstrate similarities between the low temperature physics of the random quiver model and a theory of large N free fermions with random masses.

  4. Deterministic diffusion in flower-shaped billiards.

    PubMed

    Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre

    2002-08-01

    We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter-dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model, on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.

  5. Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos

    PubMed Central

    Santonja, F.; Chen-Charpentier, B.

    2012-01-01

    Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models treat the parameters as deterministic variables. In practice, however, the transmission parameters exhibit large variability and cannot be determined exactly, so it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we apply the approach to an obesity epidemic model. PMID:22927889
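
    The moment-recovery idea is easy to demonstrate on a toy problem. The sketch below applies a non-intrusive variant of polynomial chaos (spectral projection via Gauss-Hermite quadrature, rather than the intrusive auxiliary ODE system described above) to dx/dt = -kx with a Gaussian random rate k, for which the exact moments are known.

    ```python
    import math
    import numpy as np
    from numpy.polynomial import hermite_e as He

    # Toy problem: dx/dt = -k x, x(0) = 1, with k = mu + sig*xi, xi ~ N(0, 1).
    # The solution x(t; xi) = exp(-(mu + sig*xi) t) has known exact moments.
    mu, sig, t = 1.0, 0.3, 2.0
    order = 6

    nodes, weights = He.hermegauss(order + 1)    # weight function exp(-xi^2/2)
    weights = weights / np.sqrt(2.0 * np.pi)     # renormalize to the N(0,1) pdf
    x_nodes = np.exp(-(mu + sig * nodes) * t)

    # Spectral projection onto probabilists' Hermite polynomials He_n:
    # c_n = E[x(xi) He_n(xi)] / n!, since E[He_n(xi)^2] = n!.
    coeffs = [np.sum(weights * x_nodes * He.hermeval(nodes, [0.0] * n + [1.0]))
              / math.factorial(n) for n in range(order + 1)]

    mean_pc = coeffs[0]
    var_pc = sum(math.factorial(n) * coeffs[n] ** 2 for n in range(1, order + 1))

    mean_exact = np.exp(-mu * t + 0.5 * (sig * t) ** 2)
    var_exact = np.exp(-2 * mu * t + 2 * (sig * t) ** 2) - mean_exact ** 2
    print(f"mean: PC {mean_pc:.6f} vs exact {mean_exact:.6f}")
    print(f"var : PC {var_pc:.6f} vs exact {var_exact:.6f}")
    ```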

  6. Isolating the cow-specific part of residual energy intake in lactating dairy cows using random regressions.

    PubMed

    Fischer, A; Friggens, N C; Berry, D P; Faverdin, P

    2018-07-01

    The ability to properly assess and accurately phenotype true differences in feed efficiency among dairy cows is key to the development of breeding programs for improving feed efficiency. The variability among individuals in feed efficiency is commonly characterised by the residual intake approach. Residual feed intake is represented by the residuals of a linear regression of intake on the corresponding quantities of the biological functions that consume (or release) energy. However, the residuals include model fitting and measurement errors as well as any variability in cow efficiency. The objective of this study was to isolate the individual animal variability in feed efficiency from the residual component. Two separate models were fitted. In one, the standard residual energy intake (REI) was calculated as the residual of a multiple linear regression of lactation-average net energy intake (NEI) on lactation-average milk energy output, average metabolic BW, and lactation loss and gain of body condition score. In the other, a linear mixed model was used to simultaneously fit fixed linear regressions and random cow-specific levels on the biological traits and intercept, using fortnightly repeated measures for the variables. This method split the predicted NEI into two parts: one quantifying the population mean intercept and coefficients, and one quantifying cow-specific deviations in the intercept and coefficients. The cow-specific part of predicted NEI was assumed to isolate true differences in feed efficiency among cows. NEI and associated energy expenditure phenotypes were available for the first 17 fortnights of lactation from 119 Holstein cows, all fed a constant energy-rich diet. Mixed models fitting cow-specific intercepts and coefficients to different combinations of the aforementioned energy expenditure traits, calculated on a fortnightly basis, were compared. The variance of REI estimated with the lactation-average model represented only 8% of the variance of measured NEI. Among all compared mixed models, the variance of the cow-specific part of predicted NEI represented between 53% and 59% of the variance of REI estimated from the lactation-average model, or between 4% and 5% of the variance of measured NEI. The remaining 41% to 47% of the variance of REI estimated with the lactation-average model may therefore reflect model fitting errors or measurement errors. In conclusion, the use of a mixed model framework with cow-specific random regressions seems to be a promising method to isolate the cow-specific component of REI in dairy cows.
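
    Random regressions of this kind are routine to fit with standard mixed-model software. The sketch below is a generic illustration on simulated data (the variable names and effect sizes are invented, not taken from the study): a population-level regression of intake on milk energy output plus cow-specific random deviations in intercept and slope, whose predictions form the cow-specific part of intake.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)

    # Simulated stand-in: 40 cows x 17 fortnights, intake driven by milk
    # energy with cow-specific intercepts and slopes (the efficiency signal).
    cows, times = 40, 17
    cow = np.repeat(np.arange(cows), times)
    milk = rng.normal(100, 10, size=cows * times)
    u0 = rng.normal(0, 3, size=cows)[cow]          # random intercepts
    u1 = rng.normal(0, 0.05, size=cows)[cow]       # random slopes
    nei = 50 + 0.8 * milk + u0 + u1 * milk + rng.normal(0, 2, size=cows * times)
    df = pd.DataFrame({"nei": nei, "milk": milk, "cow": cow})

    # Random regression: population-level line plus cow-specific deviations
    # in intercept and slope, which isolate the cow-specific part of intake.
    model = smf.mixedlm("nei ~ milk", df, groups=df["cow"], re_formula="~milk")
    fit = model.fit()
    print(fit.summary())
    print(fit.random_effects[0])   # cow 0's deviations in intercept and slope
    ```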

  7. Individualizing drug dosage with longitudinal data.

    PubMed

    Zhu, Xiaolu; Qu, Annie

    2016-10-30

    We propose a two-step procedure to personalize drug dosage over time under the framework of a log-linear mixed-effect model. We model patients' heterogeneity using subject-specific random effects, which are treated as the realizations of an unspecified stochastic process. We extend the conditional quadratic inference function to estimate both fixed-effect coefficients and individual random effects on a longitudinal training data sample in the first step and propose an adaptive procedure to estimate new patients' random effects and provide dosage recommendations for new patients in the second step. An advantage of our approach is that we do not impose any distribution assumption on estimating random effects. Moreover, the new approach can accommodate more general time-varying covariates corresponding to random effects. We show in theory and numerical studies that the proposed method is more efficient compared with existing approaches, especially when covariates are time varying. In addition, a real data example of a clozapine study confirms that our two-step procedure leads to more accurate drug dosage recommendations. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Residential self-selection bias in the estimation of built environment effects on physical activity between adolescence and young adulthood.

    PubMed

    Boone-Heinonen, Janne; Guilkey, David K; Evenson, Kelly R; Gordon-Larsen, Penny

    2010-10-04

    Built environment research is dominated by cross-sectional designs, which are particularly vulnerable to residential self-selection bias resulting from health-related attitudes, neighborhood preferences, or other unmeasured characteristics related to both neighborhood choice and health-related outcomes. We used cohort data from the National Longitudinal Study of Adolescent Health (United States; Wave I, 1994-95; Wave III, 2001-02; n = 12,701) and a time-varying geographic information system. Longitudinal relationships between moderate to vigorous physical activity (MVPA) bouts and built and socioeconomic environment measures (landcover diversity, pay and public physical activity facilities per 10,000 population, street connectivity, median household income, and crime rate) from adolescence to young adulthood were estimated using random effects models (biased by unmeasured confounders) and fixed effects models (within-person estimator, which adjusts for unmeasured confounders that are stable over time). Random effects models yielded null associations except for negative crime-MVPA associations [coefficient (95% CI): -0.056 (-0.083, -0.029) in males, -0.061 (-0.090, -0.033) in females]. After controlling for measured and time invariant unmeasured characteristics using within-person estimators, MVPA was higher with greater physical activity pay facilities in males [coefficient (95% CI): 0.024 (0.006, 0.042)], and lower with higher crime rates in males [coefficient (95% CI): -0.107 (-0.140, -0.075)] and females [coefficient (95% CI): -0.046 (-0.083, -0.009)]. Other associations were null or in the counter-intuitive direction. Comparison of within-person estimates to estimates unadjusted for unmeasured characteristics suggest that residential self-selection can bias associations toward the null, as opposed to its typical characterization as a positive confounder. Differential environment-MVPA associations by residential relocation suggest that studies examining changes following residential relocation may be vulnerable to selection bias. The authors discuss complexities of adjusting for residential self-selection and residential relocation, particularly during the adolescent to young adult transition.
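
    The contrast between the two estimators drawn above can be reproduced in a few lines. The sketch below simulates an unmeasured, time-stable preference that drives both facility exposure and activity; pooled OLS absorbs the confounding, while the within-person (fixed effects) estimator removes it by demeaning each person's observations. All numbers are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_people, n_waves = 2000, 2

    # A time-invariant preference raises both facility access and activity.
    pref = rng.normal(size=n_people)
    person = np.repeat(np.arange(n_people), n_waves)
    facilities = 0.8 * pref[person] + rng.normal(size=person.size)
    true_effect = 0.3
    mvpa = (true_effect * facilities + 1.0 * pref[person]
            + rng.normal(size=person.size))

    # Pooled OLS (ignores the unmeasured preference): biased upward.
    b_pooled = np.cov(facilities, mvpa)[0, 1] / np.var(facilities, ddof=1)

    # Within-person estimator: demean by person, removing stable confounders.
    def demean(v):
        means = np.bincount(person, v) / n_waves
        return v - means[person]

    fd, md = demean(facilities), demean(mvpa)
    b_within = (fd @ md) / (fd @ fd)
    print(f"pooled: {b_pooled:.3f}  within: {b_within:.3f}  truth: {true_effect}")
    ```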

  9. Random Walk Simulation of the MRI Apparent Diffusion Coefficient in a Geometrical Model of the Acinar Tree

    PubMed Central

    Pérez-Sánchez, José M.; Rodríguez, Ignacio; Ruiz-Cabello, Jesús

    2009-01-01

    Apparent diffusion coefficient (ADC) measurement in the lung using gas magnetic resonance imaging is a promising technique with potential for reflecting changes in lung microstructure. Despite some recent impressive human applications, full interpretation of ADC measures remains an elusive goal, due to a lack of detailed knowledge about the structure dependency of ADC. In an attempt to fill this gap we have performed random walk simulations in a three-dimensional geometrical model of the lung acinus, the distal alveolated sections of the lung tree accounting for ∼90% of the total lung volume. Simulations were carried out adjusting model parameters after published morphological data for the rat peripheral airway system, which predict an ADC behavior as microstructure changes with lung inflation in partial agreement with measured ADCs at different airway pressures. The approach used to relate experimental ADCs to lung microstructural changes does not make any assumption about the cause of the changes, so it could be applied to other scenarios such as chronic obstructive pulmonary disease, lung development, etc. The work presented here predicts numerically for the first time ADC values measured in the lung from independent morphological measures of lung microstructure taken at different inflation stages during the breath cycle. PMID:19619480
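
    The core numerical idea, Monte Carlo random walks whose restricted displacements yield an apparent diffusion coefficient below the free value, can be sketched without the acinar geometry. The example below uses a simple reflecting box as a stand-in for an alveolar compartment; the box size, step count, and free diffusivity are arbitrary illustrative values.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def apparent_D(box=None, D0=1.0, dt=1e-3, steps=2000, walkers=5000):
        """Monte Carlo estimate of the apparent diffusion coefficient in 3-D,
        with optional reflecting walls spaced `box` apart."""
        sigma = np.sqrt(2.0 * D0 * dt)
        pos = np.zeros((walkers, 3))
        for _ in range(steps):
            pos += sigma * rng.normal(size=pos.shape)
            if box is not None:                      # reflecting boundaries
                pos = np.where(pos > box / 2, box - pos, pos)
                pos = np.where(pos < -box / 2, -box - pos, pos)
        msd = np.mean(np.sum(pos**2, axis=1))
        return msd / (6.0 * steps * dt)              # <r^2> = 6 D t in 3-D

    print("free diffusion      :", apparent_D())
    print("confined (ADC < D0) :", apparent_D(box=0.5))
    ```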

  10. Development of an Uncertainty Model for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walter, Joel A.; Lawrence, William R.; Elder, David W.; Treece, Michael D.

    2010-01-01

    This paper introduces an uncertainty model being developed for the National Transonic Facility (NTF). The model uses a Monte Carlo technique to propagate standard uncertainties of measured values through the NTF data reduction equations to calculate the combined uncertainties of the key aerodynamic force and moment coefficients and freestream properties. The uncertainty propagation approach to assessing data variability is compared with ongoing data quality assessment activities at the NTF, notably check standard testing using statistical process control (SPC) techniques. It is shown that the two approaches are complementary and both are necessary tools for data quality assessment and improvement activities. The SPC approach is the final arbiter of variability in a facility. Its result encompasses variation due to people, processes, test equipment, and test article. The uncertainty propagation approach is limited mainly to the data reduction process. However, it is useful because it helps to assess the causes of variability seen in the data and consequently provides a basis for improvement. For example, it is shown that Mach number random uncertainty is dominated by static pressure variation over most of the dynamic pressure range tested. However, the random uncertainty in the drag coefficient is generally dominated by axial and normal force uncertainty with much less contribution from freestream conditions.
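
    A Monte Carlo propagation of the kind described can be illustrated with a stripped-down data reduction equation. The sketch below propagates assumed standard uncertainties in axial force and dynamic pressure into a drag coefficient CD = F/(qS); the nominal values and uncertainties are invented, and the real NTF equations involve many more inputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N = 100_000

    # Nominal measurements with assumed standard uncertainties (illustrative).
    axial = rng.normal(50.0, 0.10, N)        # axial force, N
    q = rng.normal(2000.0, 5.0, N)           # dynamic pressure, Pa
    S = 0.05                                 # reference area, m^2 (exact)

    cd = axial / (q * S)                     # simplified data reduction equation
    print(f"CD = {cd.mean():.4f} +/- {cd.std(ddof=1):.5f}")

    # Contribution breakdown: vary one input at a time, others at nominal.
    cd_q_only = 50.0 / (rng.normal(2000.0, 5.0, N) * S)
    cd_f_only = rng.normal(50.0, 0.10, N) / (2000.0 * S)
    print(f"from q alone: {cd_q_only.std(ddof=1):.5f}  "
          f"from force alone: {cd_f_only.std(ddof=1):.5f}")
    ```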

  11. Smooth Scalar-on-Image Regression via Spatial Bayesian Variable Selection

    PubMed Central

    Goldsmith, Jeff; Huang, Lei; Crainiceanu, Ciprian M.

    2013-01-01

    We develop scalar-on-image regression models when images are registered multidimensional manifolds. We propose a fast and scalable Bayes inferential procedure to estimate the image coefficient. The central idea is the combination of an Ising prior distribution, which controls a latent binary indicator map, and an intrinsic Gaussian Markov random field, which controls the smoothness of the nonzero coefficients. The model is fit using a single-site Gibbs sampler, which allows fitting within minutes for hundreds of subjects with predictor images containing thousands of locations. The code is simple and is provided in less than one page in the Appendix. We apply this method to a neuroimaging study where cognitive outcomes are regressed on measures of white matter microstructure at every voxel of the corpus callosum for hundreds of subjects. PMID:24729670

  12. Random Interchange of Magnetic Connectivity

    NASA Astrophysics Data System (ADS)

    Matthaeus, W. H.; Ruffolo, D. J.; Servidio, S.; Wan, M.; Rappazzo, A. F.

    2015-12-01

    Magnetic connectivity, the connection between two points along a magnetic field line, has a stochastic character associated with field lines random walking in space due to magnetic fluctuations, but connectivity can also change in time due to dynamical activity [1]. For fluctuations transverse to a strong mean field, this connectivity change can be caused by stochastic interchange due to component reconnection. The process may be understood approximately by formulating a diffusion-like Fokker-Planck coefficient [2] that is asymptotically related to the standard field line random walk. Quantitative estimates are provided for transverse magnetic field models and anisotropic models such as reduced magnetohydrodynamics. In heliospheric applications, these estimates may be useful for understanding mixing between open and closed field line regions near coronal hole boundaries, and large latitude excursions of connectivity associated with turbulence. [1] A. F. Rappazzo, W. H. Matthaeus, D. Ruffolo, S. Servidio & M. Velli, ApJL, 758, L14 (2012) [2] D. Ruffolo & W. Matthaeus, ApJ, 806, 233 (2015)

  13. Numerical approach of the injection molding process of fiber-reinforced composite with considering fiber orientation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen Thi, T. B.; Yokoyama, A.; Ota, K.

    2014-05-15

    One of the most important challenges in the injection molding process of short-glass fiber/thermoplastic composite parts is being able to predict the fiber orientation, since it controls the mechanical and physical properties of the final parts. Folgar and Tucker added to the Jeffery equation a diffusive term, which introduces a phenomenological coefficient for modeling the randomizing effect of the mechanical interactions between fibers, to predict the fiber orientation in concentrated suspensions. Their experiments indicated that this coefficient depends on the fiber volume fraction and aspect ratio. However, a definition of the fiber interaction coefficient, which is essential for fiber orientation simulations, has still not been established. Consequently, this study proposed an extended fiber interaction model that incorporates a fiber dynamics simulation in order to obtain a global fiber interaction coefficient. The model assumes that the coefficient is a function of the fiber concentration, aspect ratio, and angular velocity. The proposed model was incorporated into the computer-aided engineering simulation package C-Mold. Short-glass fiber/polyamide-6 composites were produced by injection molding with fiber weight concentrations of 30 wt.%, 50 wt.%, and 70 wt.%. The physical properties of these composites were examined, and their fiber orientation distributions were measured by micro-computed tomography (μ-CT). The simulation results showed good agreement with the experimental results.

  14. A New Metric for Quantifying Performance Impairment on the Psychomotor Vigilance Test

    DTIC Science & Technology

    2012-01-01

    ...used the coefficient of determination (R2) and the P-values based on Bartels's test of randomness of the residual error to quantify the goodness-of-fit ... we used the goodness-of-fit between each metric and the corresponding individualized two-process model output (Rajaraman et al., 2008, 2009) to assess ... individualized two-process model fits for each of the 12 subjects using the five metrics. The P-values are for Bartels's ...

  15. An enhanced export coefficient based optimization model for supporting agricultural nonpoint source pollution mitigation under uncertainty.

    PubMed

    Rong, Qiangqiang; Cai, Yanpeng; Chen, Bing; Yue, Wencong; Yin, Xin'an; Tan, Qian

    2017-02-15

    In this research, an export coefficient based dual inexact two-stage stochastic credibility constrained programming (ECDITSCCP) model was developed by integrating an improved export coefficient model (ECM), interval linear programming (ILP), fuzzy credibility constrained programming (FCCP), and a fuzzy expected value equation within a general two-stage programming (TSP) framework. The proposed ECDITSCCP model can effectively address multiple uncertainties expressed as random variables, fuzzy numbers, and pure and dual intervals. Also, the model provides a direct linkage between pre-regulated management policies and the associated economic implications. Moreover, solutions under multiple credibility levels can be obtained, providing potential decision alternatives for decision makers. The proposed model was then applied to identify optimal land use structures for agricultural NPS pollution mitigation in a representative upstream subcatchment of the Miyun Reservoir watershed in north China. Optimal solutions of the model were successfully obtained, indicating the land use patterns and nutrient discharge schemes that maximize agricultural system benefit under a limited discharge permit. The numerous results under multiple credibility levels could also provide policy makers with several options, helping to strike an appropriate balance between system benefits and pollution mitigation. The developed ECDITSCCP model can effectively address uncertain information in agricultural systems and shows great applicability to land use adjustment for agricultural NPS pollution mitigation. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Integrative neural networks model for prediction of sediment rating curve parameters for ungauged basins

    NASA Astrophysics Data System (ADS)

    Atieh, M.; Mehltretter, S. L.; Gharabaghi, B.; Rudra, R.

    2015-12-01

    One of the most uncertain modeling tasks in hydrology is the prediction of sediment load and concentration statistics for ungauged streams. This study presents integrated artificial neural network (ANN) models for the prediction of sediment rating curve parameters (rating curve coefficient α and rating curve exponent β) for ungauged basins. The ANN models integrate a comprehensive list of input parameters to improve the accuracy achieved; the inputs include soil, land use, topographic, climatic, and hydrometric data sets. The ANN models were trained on a randomly selected 2/3 of a dataset of 94 gauged streams in Ontario, Canada, and validated on the remaining 1/3. The developed models have high correlation coefficients of 0.92 and 0.86 for α and β, respectively. The ANN model for the rating coefficient α is directly proportional to the rainfall erosivity factor, soil erodibility factor, and apportionment entropy disorder index, whereas it is inversely proportional to vegetation cover and mean annual snowfall. The ANN model for the rating exponent β is directly proportional to mean annual precipitation, the apportionment entropy disorder index, main channel slope, and the standard deviation of daily discharge, and inversely proportional to the fraction of basin area covered by wetlands and swamps. Sediment rating curves are essential tools for the calculation of sediment load, the concentration-duration curve (CDC), and concentration-duration-frequency (CDF) analysis for more accurate assessment of water quality in ungauged basins.
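
    For context, a sediment rating curve is the power law C = αQ^β relating concentration to discharge, and at a gauged site its parameters are recovered by log-log least squares; these are the quantities the ANN models above predict from catchment characteristics for ungauged sites. A minimal sketch on synthetic data:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Synthetic gauged record: concentration C = alpha * Q**beta with noise.
    alpha_true, beta_true = 0.05, 1.4
    Q = rng.lognormal(mean=2.0, sigma=0.6, size=200)           # discharge
    C = alpha_true * Q**beta_true * rng.lognormal(0.0, 0.2, 200)

    # Log-log least squares recovers the rating curve parameters.
    A = np.column_stack([np.ones(Q.size), np.log(Q)])
    (log_alpha, beta), *_ = np.linalg.lstsq(A, np.log(C), rcond=None)
    print(f"alpha = {np.exp(log_alpha):.4f} (true {alpha_true}), "
          f"beta = {beta:.3f} (true {beta_true})")
    ```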

  17. A Model with Ellipsoidal Scatterers for Polarimetric Remote Sensing of Anisotropic Layered Media

    NASA Technical Reports Server (NTRS)

    Nghiem, S. V.; Kwok, R.; Kong, J. A.; Shin, R. T.

    1993-01-01

    This paper presents a model with ellipsoidal scatterers for applications to polarimetric remote sensing of anisotropic layered media at microwave frequencies. The physical configuration includes an isotropic layer covering an anisotropic layer above a homogeneous half space. The isotropic layer consists of randomly oriented spheroids. The anisotropic layer contains ellipsoidal scatterers with a preferential vertical alignment and random azimuthal orientations. Effective permittivities of the scattering media are calculated with the strong fluctuation theory extended to account for the nonspherical shapes and the scatterer orientation distributions. On the basis of the analytic wave theory, dyadic Green's functions for layered media are used to derive polarimetric backscattering coefficients under the distorted Born approximation. The ellipsoidal shape of the scatterers gives rise to nonzero cross-polarized returns from the untilted anisotropic medium in the first-order approximation. Effects of rough interfaces are estimated by an incoherent addition method. Theoretical results and experimental data are matched at 9 GHz for thick first-year sea ice with a bare surface and with a snow cover at Point Barrow, Alaska. The model is then used to study the sensitivity of polarimetric backscattering coefficients with respect to correlation lengths representing the geometry of brine inclusions. Polarimetric signatures of bare and snow-covered sea ice are also simulated based on the model to investigate effects of different scattering mechanisms.

  18. Application of randomly oriented spheroids for retrieval of dust particle parameters from multiwavelength lidar measurements

    NASA Astrophysics Data System (ADS)

    Veselovskii, I.; Dubovik, O.; Kolgotin, A.; Lapyonok, T.; di Girolamo, P.; Summa, D.; Whiteman, D. N.; Mishchenko, M.; Tanré, D.

    2010-11-01

    Multiwavelength (MW) Raman lidars have demonstrated their potential to profile particle parameters; however, until now, the physical models used in retrieval algorithms for processing MW lidar data have been predominantly based on the Mie theory. This approach is applicable to the modeling of light scattering by spherically symmetric particles only and does not adequately reproduce the scattering by generally nonspherical desert dust particles. Here we present an algorithm based on a model of randomly oriented spheroids for the inversion of multiwavelength lidar data. The aerosols are modeled as a mixture of two components: one composed only of spherical particles and the second composed of nonspherical particles. The nonspherical component is an ensemble of randomly oriented spheroids with a size-independent shape distribution. This approach has been integrated into an algorithm retrieving aerosol properties from observations with a Raman lidar based on a tripled Nd:YAG laser. Such a lidar provides three backscattering coefficients, two extinction coefficients, and the particle depolarization ratio at a single or multiple wavelengths. Simulations were performed for a bimodal particle size distribution typical of desert dust. The uncertainty of the retrieved particle surface, volume concentration, and effective radius for 10% measurement errors is estimated to be below 30%. We show that if the effect of particle nonsphericity is not accounted for, the errors in the retrieved aerosol parameters increase notably. The algorithm was tested with experimental data from a Saharan dust outbreak episode measured with the BASIL multiwavelength Raman lidar in August 2007. The vertical profiles of particle parameters as well as the particle size distributions at different heights were retrieved. The algorithm was shown to provide reasonable results, consistent with the available independent information about the observed aerosol event.

  19. A Mathematical Model for Storage and Recall of Images using Targeted Synchronization of Coupled Maps.

    PubMed

    Palaniyandi, P; Rangarajan, Govindan

    2017-08-21

    We propose a mathematical model for storage and recall of images using coupled maps. We start by theoretically investigating targeted synchronization in coupled map systems, wherein only a desired (partial) subset of the maps is made to synchronize. A simple method is introduced to specify coupling coefficients such that targeted synchronization is ensured. The principle of this method is extended to the storage and recall of images using coupled Rulkov maps. The process of adjusting coupling coefficients between Rulkov maps (often used to model neurons) for the purpose of storing a desired image mimics the process of adjusting synaptic strengths between neurons to store memories. Our method uses both synchronization and synaptic weight modification, as the human brain is thought to do. The stored image can be recalled by providing an initial random pattern to the dynamical system. The storage and recall of the standard image of Lena is explicitly demonstrated.
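
    The notion of targeted (partial) synchronization can be illustrated with a much simpler system than coupled Rulkov maps. In the sketch below, assumed purely for illustration, only a chosen subset of chaotic logistic maps is diffusively coupled to a common drive; with coupling strength eps > 0.5 the conditional Lyapunov exponent ln(1 - eps) + ln 2 is negative, so exactly that subset synchronizes to the drive.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    f = lambda x: 4.0 * x * (1.0 - x)        # fully chaotic logistic map

    n_maps, steps, eps = 10, 2000, 0.6
    target = {2, 5, 7}                        # subset chosen to synchronize
    x = rng.random(n_maps)
    drive = rng.random()

    for _ in range(steps):
        drive = f(drive)                      # drive advances autonomously
        fx = f(x)
        for i in range(n_maps):
            # Nonzero coupling coefficient only for maps in the target set.
            x[i] = (1 - eps) * fx[i] + eps * drive if i in target else fx[i]

    err = np.abs(x - drive)
    print("sync error per map:", np.round(err, 4))   # ~0 only inside the subset
    ```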

  20. Relatedness in spatially structured populations with empty sites: An approach based on spatial moment equations.

    PubMed

    Lion, Sébastien

    2009-09-07

    Taking into account the interplay between spatial ecological dynamics and selection is a major challenge in evolutionary ecology. Although inclusive fitness theory has proven to be a very useful tool to unravel the interactions between spatial genetic structuring and selection, applications of the theory usually rely on simplifying demographic assumptions. In this paper, I attempt to bridge the gap between spatial demographic models and kin selection models by providing a method to compute approximations for relatedness coefficients in a spatial model with empty sites. Using spatial moment equations, I provide an approximation of nearest-neighbour relatedness on random regular networks, and show that this approximation performs much better than the ordinary pair approximation. I discuss the connection between the relatedness coefficients I define and those used in population genetics, and sketch some potential extensions of the theory.

  1. A study of sound absorption by street canyon boundaries and asphalt rubber concrete pavement

    NASA Astrophysics Data System (ADS)

    Drysdale, Graeme Robert

    A sound field model, based on a classical diffusion equation, is extended to account for sound absorption in a diffusion parameter used to model sound energy in a narrow street canyon. The model accounts for a single sound absorption coefficient, separate accommodation coefficients, and a combination of separate absorption and accommodation coefficients from parallel canyon walls. The new expressions are compared to the original formula through numerical simulations to reveal the effect of absorption on sound diffusion. The newly established analytical formulae demonstrate satisfactory agreement with their predecessor under perfect reflection. Likewise, the influence of the extended diffusion parameter on normalized sound pressure levels in a narrow street canyon is in agreement with experimental data. The diffusion parameters are used to model sound energy density in a street canyon as a function of the sound absorption coefficient of the street canyon walls. The acoustic and material properties of conventional and asphalt rubber concrete (ARC) pavement are also studied to assess how the crumb rubber content influences sound absorption in street canyons. The porosity and absolute permeability of compacted specimens of asphalt rubber concrete are measured and compared to their normal and random incidence sound absorption coefficients as a function of crumb rubber content in the modified binder. Nonlinear trends are found between the sound absorption coefficients, porosity, and absolute permeability of the compacted specimens and the percentage of crumb rubber in the modified binders. The cross-sectional areas of the air voids on the surfaces of the compacted specimens are measured using digital image processing techniques, and a linear relationship is obtained between the average void area and crumb rubber content. The measured material properties are used to construct an empirical formula relating the average porosity, normal incidence noise reduction coefficients, and percentage of crumb rubber in the modified binder of the compacted specimens.

  2. Establishment of a Physical Model for Solute Diffusion in Hydrogel: Understanding the Diffusion of Proteins in Poly(sulfobetaine methacrylate) Hydrogel.

    PubMed

    Zhou, Yuhang; Li, Junjie; Zhang, Ying; Dong, Dianyu; Zhang, Ershuai; Ji, Feng; Qin, Zhihui; Yang, Jun; Yao, Fanglian

    2017-02-02

    Prediction of the diffusion coefficient of solute, especially bioactive molecules, in hydrogel is significant in the biomedical field. Considering the randomness of solute movement in a hydrogel network, a physical diffusion RMP-1 model based on obstruction theory was established in this study. The physical properties of the solute and the polymer chain and their interactions were introduced into this model. Furthermore, models RMP-2 and RMP-3 were established to understand and predict the diffusion behaviors of proteins in hydrogel. In addition, zwitterionic poly(sulfobetaine methacrylate) (PSBMA) hydrogels with wide range and fine adjustable mesh sizes were prepared and used as efficient experimental platforms for model validation. The Flory characteristic ratios, Flory-Huggins parameter, mesh size, and polymer chain radii of PSBMA hydrogels were determined. The diffusion coefficients of the proteins (bovine serum albumin, immunoglobulin G, and lysozyme) in PSBMA hydrogels were studied by the fluorescence recovery after photobleaching technique. The measured diffusion coefficients were compared with the predictions of obstruction models, and it was found that our model presented an excellent predictive ability. Furthermore, the assessment of our model revealed that protein diffusion in PSBMA hydrogel would be affected by the physical properties of the protein and the PSBMA network. It was also confirmed that the diffusion behaviors of protein in zwitterionic hydrogels can be adjusted by changing the cross-linking density of the hydrogel and the ionic strength of the swelling medium. Our model is expected to possess accurate predictive ability for the diffusion coefficient of solute in hydrogel, which will be widely used in the biomedical field.

  3. Harmonic regression of Landsat time series for modeling attributes from national forest inventory data

    NASA Astrophysics Data System (ADS)

    Wilson, Barry T.; Knight, Joseph F.; McRoberts, Ronald E.

    2018-03-01

    Imagery from the Landsat Program has been used frequently as a source of auxiliary data for modeling land cover, as well as a variety of attributes associated with tree cover. With ready access to all scenes in the archive since 2008 due to the USGS Landsat Data Policy, new approaches to deriving such auxiliary data from dense Landsat time series are required. Several methods have previously been developed for use with finer temporal resolution imagery (e.g. AVHRR and MODIS), including image compositing and harmonic regression using Fourier series. This manuscript presents a study using Minnesota, USA, during the years 2009-2013 as the study area and timeframe. The study examined the relative predictive power of land cover models, in particular those related to tree cover, using predictor variables based solely on composite imagery versus those using estimated harmonic regression coefficients. The study used two common non-parametric modeling approaches (k-nearest neighbors and random forests) for fitting classification and regression models of multiple attributes measured on USFS Forest Inventory and Analysis plots, using all available Landsat imagery for the study area and timeframe. The estimated Fourier coefficients developed by harmonic regression of tasseled cap transformation time series data were shown to be correlated with land cover, including tree cover. Regression models using estimated Fourier coefficients as predictor variables showed a two- to threefold increase in explained variance for a small set of continuous response variables, relative to comparable models using monthly image composites. Similarly, the overall accuracies of classification models using the estimated Fourier coefficients were approximately 10-20 percentage points higher than those of the models using the image composites, with corresponding individual class accuracies between six and 45 percentage points higher.
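
    Harmonic regression of this kind reduces to ordinary least squares on sine and cosine basis functions. The sketch below fits a first-order annual harmonic to an irregularly sampled, synthetic "Landsat-like" series and recovers mean, amplitude, and phase, the kind of coefficients used as predictor variables above.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Irregularly sampled series over five years: annual cycle plus noise.
    doy = np.sort(rng.choice(np.arange(1, 366 * 5), size=120, replace=False))
    signal = 0.4 + 0.25 * np.cos(2 * np.pi * doy / 365.25 - 1.0)
    y = signal + 0.03 * rng.normal(size=doy.size)

    # First-order harmonic regression: y ~ c0 + a1 cos(w t) + b1 sin(w t).
    w = 2 * np.pi / 365.25
    X = np.column_stack([np.ones_like(doy, dtype=float),
                         np.cos(w * doy), np.sin(w * doy)])
    (c0, a1, b1), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude, phase = np.hypot(a1, b1), np.arctan2(b1, a1)
    print(f"mean={c0:.3f} amplitude={amplitude:.3f} phase={phase:.3f} rad")
    ```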

  4. Random matrix theory analysis of cross-correlations in the US stock market: Evidence from Pearson’s correlation coefficient and detrended cross-correlation coefficient

    NASA Astrophysics Data System (ADS)

    Wang, Gang-Jin; Xie, Chi; Chen, Shou; Yang, Jiao-Jiao; Yang, Ming-Yan

    2013-09-01

    In this study, we first build two empirical cross-correlation matrices for the US stock market by two different methods, namely the Pearson's correlation coefficient and the detrended cross-correlation coefficient (DCCA coefficient). Then, combining the two matrices with the method of random matrix theory (RMT), we investigate the statistical properties of cross-correlations in the US stock market. We choose the daily closing prices of 462 constituent stocks of the S&P 500 index as the research objects and select the sample data from January 3, 2005 to August 31, 2012. In the empirical analysis, we examine the statistical properties of the cross-correlation coefficients, the distribution of eigenvalues, the distribution of eigenvector components, and the inverse participation ratio. From the two methods, we find new results on the cross-correlations in the US stock market that differ from the conclusions reached by previous studies. The empirical cross-correlation matrices constructed by the DCCA coefficient show several interesting properties at different time scales in the US stock market, which are useful for risk management and optimal portfolio selection, especially for diversifying an asset portfolio. Finding the theoretical eigenvalue distribution of a completely random matrix R for the DCCA coefficient remains an interesting and meaningful open problem, because it does not obey the Marčenko-Pastur distribution.

  5. Measuring multivariate association and beyond

    PubMed Central

    Josse, Julie; Holmes, Susan

    2017-01-01

    Simple correlation coefficients between two variables have been generalized to measure association between two matrices in many ways. Coefficients such as the RV coefficient, the distance covariance (dCov) coefficient and kernel based coefficients are being used by different research communities. Scientists use these coefficients to test whether two random vectors are linked. Once it has been ascertained that there is such association through testing, then a next step, often ignored, is to explore and uncover the association's underlying patterns. This article provides a survey of various measures of dependence between random vectors and tests of independence and emphasizes the connections and differences between the various approaches. After providing definitions of the coefficients and associated tests, we present the recent improvements that enhance their statistical properties and ease of interpretation. We summarize multi-table approaches and describe scenarios where the indices can provide useful summaries of heterogeneous multi-block data. We illustrate these different strategies on several examples of real data and suggest directions for future research. PMID:29081877
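
    As one concrete instance of these matrix-association measures, the sketch below computes the RV coefficient between two data matrices observed on the same samples; the formula is standard, but the data and helper name here are illustrative.

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between an n x p matrix X and an n x q matrix Y
    (same n rows): tr(Sx Sy) / sqrt(tr(Sx Sx) tr(Sy Sy)), Sx = Xc Xc'."""
    Xc = X - X.mean(axis=0)            # column-center both matrices
    Yc = Y - Y.mean(axis=0)
    Sx = Xc @ Xc.T                     # n x n configuration matrices
    Sy = Yc @ Yc.T
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
Y = X @ rng.normal(size=(4, 3)) + 0.1 * rng.normal(size=(50, 3))  # linked to X
print(rv_coefficient(X, Y))           # near 1 for strongly linked matrices
```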

  6. Hidden Markov random field model and Broyden-Fletcher-Goldfarb-Shanno algorithm for brain image segmentation

    NASA Astrophysics Data System (ADS)

    Guerrout, EL-Hachemi; Ait-Aoudia, Samy; Michelucci, Dominique; Mahiou, Ramdane

    2018-05-01

    Many routine medical examinations produce images of patients suffering from various pathologies. With the huge number of medical images, manual analysis and interpretation have become a tedious task. Thus, automatic image segmentation has become essential for diagnosis assistance. Segmentation consists of dividing the image into homogeneous and significant regions. We focus on hidden Markov random fields, referred to as HMRF, to model the segmentation problem. This modelling leads to a classical function minimisation problem. The Broyden-Fletcher-Goldfarb-Shanno algorithm, referred to as BFGS, is one of the most powerful methods for solving unconstrained optimisation problems. In this paper, we investigate the combination of HMRF and the BFGS algorithm to perform the segmentation operation. The proposed method shows very good segmentation results compared with well-known approaches. The tests are conducted on brain magnetic resonance image databases (BrainWeb and IBSR) widely used to objectively compare the results obtained. The well-known Dice coefficient (DC) was used as the similarity metric. The experimental results show that, in many cases, our proposed method approaches perfect segmentation with a Dice coefficient above 0.9. Moreover, it generally outperforms other methods in the tests conducted.
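
    For reference, the Dice coefficient used as the similarity metric here is simple to compute; the following minimal sketch (toy masks, not the paper's evaluation code) scores one tissue class of a predicted label image against the ground truth.

```python
import numpy as np

def dice_coefficient(seg, truth, label):
    """DC = 2|A ∩ B| / (|A| + |B|) for one class in two label images."""
    a = (seg == label)
    b = (truth == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

seg   = np.array([[1, 1, 0], [2, 2, 0]])      # toy predicted labels
truth = np.array([[1, 0, 0], [2, 2, 0]])      # toy ground-truth labels
print(dice_coefficient(seg, truth, label=1))  # 2*1/(2+1) ≈ 0.67
```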

  7. Scattering from randomly oriented circular discs with application to vegetation

    NASA Technical Reports Server (NTRS)

    Karam, M. A.; Fung, A. K.

    1984-01-01

    A vegetation layer is modeled by a collection of randomly oriented circular discs over a half space. The backscattering coefficient from such a half space is computed using radiative transfer theory. It is shown that significantly different results are obtained from this theory as compared with some earlier investigations using the same modeling approach but with restricted disc orientations. In particular, the backscattered cross-polarized returns cannot have a fast-increasing angular trend, which is inconsistent with measurements. By setting the appropriate angle of orientation to zero, the theory reduces to previously published results. Comparisons are shown with measurements taken from milo, corn and wheat, and good agreement is obtained for both polarized and cross-polarized returns.

  8. Scattering from randomly oriented circular discs with application to vegetation

    NASA Technical Reports Server (NTRS)

    Karam, M. A.; Fung, A. K.

    1983-01-01

    A vegetation layer is modeled by a collection of randomly oriented circular discs over a half space. The backscattering coefficient from such a half space is computed using radiative transfer theory. It is shown that significantly different results are obtained from this theory as compared with some earlier investigations using the same modeling approach but with restricted disc orientations. In particular, the backscattered cross-polarized returns cannot have a fast-increasing angular trend, which is inconsistent with measurements. By setting the appropriate angle of orientation to zero, the theory reduces to previously published results. Comparisons are shown with measurements taken from milo, corn and wheat, and good agreement is obtained for both polarized and cross-polarized returns.

  9. Collision mechanics and the structure of planetary ring edges

    NASA Technical Reports Server (NTRS)

    Spaute, Dominique; Greenberg, Richard

    1987-01-01

    This numerical simulation of collisional evolution, for a hypothetical ring whose parameters are modeled after those of Saturn's rings, examines changes in radial structure near the ring edges. When random motion is in equilibrium, the rings tend to spread in order to conserve angular momentum while energy is dissipated in collisions. As long as random motion is damped, ring edges may contract rather than spread, producing a concentration of material at the ring edges. For isotropic scattering, damping dominates for velocity coefficients of restitution up to 0.83.

  10. Superdiffusive Dispersals Impart the Geometry of Underlying Random Walks

    NASA Astrophysics Data System (ADS)

    Zaburdaev, V.; Fouxon, I.; Denisov, S.; Barkai, E.

    2016-12-01

    It is now recognized that a variety of real-life phenomena, ranging from the diffusion of cold atoms to the motion of humans, exhibit dispersal faster than normal diffusion. The Lévy walk is a model that excels at describing such superdiffusive behaviors, albeit in one dimension. Here we show that, in contrast to standard random walks, the microscopic geometry of planar superdiffusive Lévy walks is imprinted in the asymptotic distribution of the walkers. The geometry of the underlying walk can be inferred from trajectories of the walkers by calculating the analogue of the Pearson coefficient.
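
    A minimal simulation sketch (my own illustrative parameterization, not the paper's) of a planar Lévy walk: flight durations are drawn from a heavy-tailed Pareto law, and the walker moves at constant speed in a uniformly random direction between turns.

```python
import numpy as np

rng = np.random.default_rng(2)

def levy_walk_2d(n_flights, alpha=1.5, speed=1.0):
    """Planar Lévy walk: each flight lasts a Pareto(alpha)-distributed time;
    the walker moves at constant speed in an isotropic random direction."""
    pos = np.zeros(2)
    trajectory = [pos.copy()]
    for _ in range(n_flights):
        tau = 1.0 + rng.pareto(alpha)          # heavy-tailed flight duration
        theta = rng.uniform(0.0, 2.0 * np.pi)  # uniform turning angle
        pos = pos + speed * tau * np.array([np.cos(theta), np.sin(theta)])
        trajectory.append(pos.copy())
    return np.array(trajectory)

traj = levy_walk_2d(1000)
print(traj[-1])   # the few longest flights dominate the total displacement
```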

  11. A dynamic subgrid-scale parameterization of the effective wall stress in atmospheric boundary layer flows over multiscale, fractal-like surfaces

    NASA Astrophysics Data System (ADS)

    Anderson, William; Meneveau, Charles

    2010-05-01

    A dynamic subgrid-scale (SGS) parameterization for hydrodynamic surface roughness is developed for large-eddy simulation (LES) of atmospheric boundary layer (ABL) flow over multiscale, fractal-like surfaces. The model consists of two parts. First, a baseline model represents surface roughness at horizontal length-scales that can be resolved in the LES. This model takes the form of a force using a prescribed drag coefficient. This approach is tested in LES of flow over cubes, wavy surfaces, and ellipsoidal roughness elements, for which detailed experimental data are available. Second, a dynamic roughness model is built, accounting for SGS surface details of finer resolution than the LES grid width. The SGS boundary condition is based on the logarithmic law of the wall, where the unresolved roughness of the surface is modeled as the product of the local root-mean-square (RMS) of the unresolved surface height and an unknown dimensionless model coefficient. This coefficient is evaluated dynamically by comparing the plane-average hydrodynamic drag at two resolutions (grid- and test-filter scale, Germano et al., 1991). The new model is tested on surfaces generated through superposition of random-phase Fourier modes with prescribed, power-law surface-height spectra. The results show that the method yields convergent results and correct trends. Limitations and further challenges are highlighted. Supported by the US National Science Foundation (EAR-0609690).
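
    A rough one-dimensional analogue (an assumption-laden sketch, not the LES code) of the test surfaces: superpose random-phase Fourier modes with a power-law height spectrum and compute the RMS height, the quantity the dynamic model multiplies by the unknown coefficient.

```python
import numpy as np

rng = np.random.default_rng(3)

def powerlaw_surface(n=1024, beta=2.0):
    """h(x) = sum_k k^(-beta/2) cos(k x + phi_k), with random phases phi_k,
    so the height spectrum falls off as ~ k^(-beta)."""
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    h = np.zeros(n)
    for k in range(1, n // 2):
        phi = rng.uniform(0.0, 2.0 * np.pi)
        h += k ** (-beta / 2.0) * np.cos(k * x + phi)
    return x, h

x, h = powerlaw_surface()
print("RMS surface height:", np.sqrt(np.mean((h - h.mean()) ** 2)))
```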

  12. Pendulum Mass Affects the Measurement of Articular Friction Coefficient

    PubMed Central

    Akelman, Matthew R.; Teeple, Erin; Machan, Jason T.; Crisco, Joseph J.; Jay, Gregory D.; Fleming, Braden C.

    2012-01-01

    Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton's equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines whether articular pendulum energy loss is indeed mass independent, and compares Stanton's model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n = 4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton's equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than those of the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent. PMID:23122223
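
    To illustrate the model comparison, here is a hedged toy fit (synthetic amplitudes, assumed functional forms): a Stanton-style linear amplitude decay versus an exponential decay with a viscous damping coefficient, compared by sum of squared errors.

```python
import numpy as np

rng = np.random.default_rng(4)
cycles = np.arange(20, dtype=float)
amp = 10.0 * np.exp(-0.08 * cycles) + rng.normal(0.0, 0.05, cycles.size)

A = np.column_stack([np.ones_like(cycles), cycles])

# Linear model: amp = a0 + c * cycle (Coulomb-friction-like, constant loss)
(lin_a0, lin_c), *_ = np.linalg.lstsq(A, amp, rcond=None)
lin_fit = lin_a0 + lin_c * cycles

# Exponential model: log(amp) = log(a0) - b * cycle (viscous damping)
(log_a0, neg_b), *_ = np.linalg.lstsq(A, np.log(amp), rcond=None)
exp_fit = np.exp(log_a0 + neg_b * cycles)

print("linear SSE:     ", np.sum((amp - lin_fit) ** 2))
print("exponential SSE:", np.sum((amp - exp_fit) ** 2))   # lower here
```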

  13. Pendulum mass affects the measurement of articular friction coefficient.

    PubMed

    Akelman, Matthew R; Teeple, Erin; Machan, Jason T; Crisco, Joseph J; Jay, Gregory D; Fleming, Braden C

    2013-02-01

    Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton's equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines whether articular pendulum energy loss is indeed mass independent, and compares Stanton's model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n=4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton's equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than those of the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Free variable selection QSPR study to predict 19F chemical shifts of some fluorinated organic compounds using Random Forest and RBF-PLS methods

    NASA Astrophysics Data System (ADS)

    Goudarzi, Nasser

    2016-04-01

    In this work, two new and powerful chemometrics methods are applied for the modeling and prediction of the 19F chemical shift values of some fluorinated organic compounds. The radial basis function-partial least squares (RBF-PLS) and random forest (RF) methods are employed to construct the models to predict the 19F chemical shifts. In this study, no separate variable selection method was used, since the RF method can serve as both a variable selection and a modeling technique. Effects of the important parameters affecting the RF prediction power, such as the number of trees (nt) and the number of randomly selected variables used to split each node (m), were investigated. The root-mean-square errors of prediction (RMSEP) for the training set and the prediction set for the RBF-PLS and RF models were 44.70, 23.86, 29.77, and 23.69, respectively. Also, the correlation coefficients of the prediction set for the RBF-PLS and RF models were 0.8684 and 0.9313, respectively. The results obtained reveal that the RF model can be used as a powerful chemometrics tool for quantitative structure-property relationship (QSPR) studies.
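
    A minimal sketch of the RF setup named in the abstract, using scikit-learn on synthetic descriptors (the study's actual descriptor matrix is not available here): n_estimators plays the role of nt and max_features the role of m, and the built-in feature importances supply the implicit variable selection.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 30))    # hypothetical molecular descriptors
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.5, 200)  # shift proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=500,   # nt: number of trees
                           max_features=10,    # m: variables tried per split
                           random_state=0)
rf.fit(X_tr, y_tr)
rmsep = np.sqrt(np.mean((y_te - rf.predict(X_te)) ** 2))
print("RMSEP:", rmsep)
print("most important descriptors:", np.argsort(rf.feature_importances_)[-3:])
```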

  15. Regularity of beating of small clusters of embryonic chick ventricular heart-cells: experiment vs. stochastic single-channel population model

    NASA Astrophysics Data System (ADS)

    Krogh-Madsen, Trine; Kold Taylor, Louise; Skriver, Anne D.; Schaffer, Peter; Guevara, Michael R.

    2017-09-01

    The transmembrane potential is recorded from small isopotential clusters of 2-4 embryonic chick ventricular cells spontaneously generating action potentials. We analyze the cycle-to-cycle fluctuations in the time between successive action potentials (the interbeat interval or IBI). We also convert an existing model of electrical activity in the cluster, which is formulated as a Hodgkin-Huxley-like deterministic system of nonlinear ordinary differential equations describing five individual ionic currents, into a stochastic model consisting of a population of ~20,000 independently and randomly gating ionic channels, with the randomness being set by a real physical stochastic process (radio static). This stochastic model, implemented using the Clay-DeFelice algorithm, reproduces the fluctuations seen experimentally: e.g., the coefficient of variation (standard deviation/mean) of IBI is 4.3% in the model vs. the 3.9% average value of the 17 clusters studied. The model also replicates all but one of several other quantitative measures of the experimental results, including the power spectrum and correlation integral of the voltage, as well as the histogram, Poincaré plot, serial correlation coefficients, power spectrum, detrended fluctuation analysis, approximate entropy, and sample entropy of IBI. The channel noise from one particular ionic current (IKs), which has channel kinetics that are relatively slow compared to those of the other currents, makes the major contribution to the fluctuations in IBI. Reproducing the experimental coefficient of variation of IBI by adding a Gaussian white noise-current to the deterministic model requires an unrealistically high noise-current amplitude. Indeed, a major implication of the modelling results is that, given the wide range of time-scales over which the various species of channels open and close, only a cell-specific stochastic model, formulated to account for the widely different frequency content of the channel noise produced by the opening and closing of the different channel types, can precisely reproduce the membrane-noise effects seen in a particular electrophysiological preparation.

  16. Active motion assisted by correlated stochastic torques.

    PubMed

    Weber, Christian; Radtke, Paul K; Schimansky-Geier, Lutz; Hänggi, Peter

    2011-07-01

    The stochastic dynamics of an active particle moving at constant speed and driven by a fluctuating overall torque is investigated. The random torque forces are expressed by a stochastic differential equation for the angular dynamics of the particle, which determines the orientation of motion. In addition to a constant torque, the particle is subject to random torques, which are modeled as an Ornstein-Uhlenbeck process with a given correlation time τc. These nonvanishing correlations cause a persistence of the particles' trajectories and a change of the effective spatial diffusion coefficient. We discuss the mean square displacement as a function of the correlation time and the noise intensity and detect a nonmonotonic dependence of the effective diffusion coefficient on both correlation time and noise strength. Maximal diffusion is obtained when the correlated angular noise straightens the curved trajectories caused by the constant torque, leaving them interrupted only by small pirouettes.
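
    A hedged simulation sketch of this setup (my own parameterization): Euler-Maruyama integration of a constant-speed particle whose heading carries a constant torque plus an Ornstein-Uhlenbeck torque, with the effective diffusion coefficient read off from the long-time mean square displacement.

```python
import numpy as np

rng = np.random.default_rng(6)
v, omega, tau_c, sigma2 = 1.0, 0.5, 2.0, 0.5  # speed, torque, corr. time, OU variance
dt, n_steps, n_particles = 0.01, 50_000, 100

phi = np.zeros(n_particles)                   # heading angles
eta = np.zeros(n_particles)                   # OU torque noise
xy = np.zeros((n_particles, 2))
for _ in range(n_steps):
    # OU update: d(eta) = -(eta / tau_c) dt + sqrt(2 sigma2 / tau_c) dW
    eta += (-eta / tau_c * dt
            + np.sqrt(2 * sigma2 / tau_c * dt) * rng.normal(size=n_particles))
    phi += (omega + eta) * dt
    xy[:, 0] += v * np.cos(phi) * dt
    xy[:, 1] += v * np.sin(phi) * dt

msd = np.mean(np.sum(xy ** 2, axis=1))
print("D_eff ≈", msd / (4 * n_steps * dt))    # 2-D: MSD ~ 4 D t at long times
```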

  17. A new interpretation of the Keller-Segel model based on multiphase modelling.

    PubMed

    Byrne, Helen M; Owen, Markus R

    2004-12-01

    In this paper an alternative derivation and interpretation are presented of the classical Keller-Segel model of cell migration due to random motion and chemotaxis. A multiphase modelling approach is used to describe how a population of cells moves through a fluid containing a diffusible chemical to which the cells are attracted. The cells and fluid are viewed as distinct components of a two-phase mixture. The principles of mass and momentum balance are applied to each phase, and appropriate constitutive laws are imposed to close the resulting equations. A key assumption here is that the stress in the cell phase is influenced by the concentration of the diffusible chemical. By restricting attention to one-dimensional Cartesian geometry, we show how the model reduces to a pair of nonlinear coupled partial differential equations for the cell density and the chemical concentration. These equations may be written in the form of the Patlak-Keller-Segel model, naturally including density-dependent nonlinearities in the cell motility coefficients. There is a direct relationship between the random motility and chemotaxis coefficients, both depending in an inter-related manner on the chemical concentration. We suggest that this may explain why many chemicals appear to stimulate both chemotactic and chemokinetic responses in cell populations. After specialising our model to describe slime mold, we show how the functional form of the chemical potential that drives cell locomotion influences the ability of the system to generate spatial patterns. The paper concludes with a summary of the key results and a discussion of avenues for future research.

  18. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    PubMed

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is probably because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that ensures there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. In this regard, peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.

  19. Random walk numerical simulation for hopping transport at finite carrier concentrations: diffusion coefficient and transport energy concept.

    PubMed

    Gonzalez-Vazquez, J P; Anta, Juan A; Bisquert, Juan

    2009-11-28

    The random walk numerical simulation (RWNS) method is used to compute diffusion coefficients for hopping transport in a fully disordered medium at finite carrier concentrations. We use Miller-Abrahams jumping rates and an exponential distribution of energies to compute the hopping times in the random walk simulation. The computed diffusion coefficient shows an exponential dependence on the Fermi level and Arrhenius behavior with respect to temperature. This result indicates that there is a well-defined transport level implicit in the system dynamics. To establish the origin of this transport level we construct histograms to monitor the energies of the most visited sites. In addition, we construct "corrected" histograms in which backward moves are removed. Since these moves do not contribute to transport, these histograms provide a better estimation of the effective transport level energy. The analysis of this concept in connection with the Fermi-level dependence of the diffusion coefficient and the regime of interest for the functioning of dye-sensitised solar cells is thoroughly discussed.
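
    A drastically simplified, single-particle sketch of the ingredients named here (the paper works at finite carrier concentration, which this toy omits): kinetic Monte Carlo hopping on a 1-D chain with an exponential density of states and Miller-Abrahams rates, with D estimated from the mean square displacement.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sites, kT, E0, nu0 = 2000, 0.025, 0.1, 1.0    # eV; nu0 = attempt frequency
E = -rng.exponential(E0, n_sites)               # exponential density of states

def ma_rate(Ei, Ej):
    """Miller-Abrahams rate; the spatial factor exp(-2r/a) is absorbed into
    nu0 for nearest-neighbor hops on a chain."""
    dE = Ej - Ei
    return nu0 * np.exp(-dE / kT) if dE > 0 else nu0

pos, site, t = 0, n_sites // 2, 0.0
for _ in range(100_000):
    r_right = ma_rate(E[site], E[(site + 1) % n_sites])
    r_left = ma_rate(E[site], E[(site - 1) % n_sites])
    total = r_right + r_left
    t += rng.exponential(1.0 / total)           # KMC waiting time
    step = 1 if rng.random() < r_right / total else -1
    site = (site + step) % n_sites
    pos += step

print("D ≈", pos ** 2 / (2.0 * t))              # 1-D: MSD ~ 2 D t
```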

  20. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used that performs multiple MCMC runs with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
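
    A conceptual sketch of the thermodynamic (power-posterior) estimator on a toy Gaussian model, under my own choice of ladder and sampler: a Metropolis chain targets p(θ)L(θ)^β at each heating coefficient β, and log m(y) = ∫₀¹ E_β[log L(θ)] dβ is approximated by the trapezoid rule.

```python
import numpy as np

rng = np.random.default_rng(8)
y = rng.normal(1.0, 1.0, 20)        # toy data: unit-variance normal, mean 1

def log_like(theta):
    return -0.5 * np.sum((y - theta) ** 2) - 0.5 * y.size * np.log(2 * np.pi)

def log_prior(theta):
    return -0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi)   # N(0, 1) prior

def mean_loglike_at(beta, n_iter=20_000):
    """Metropolis chain targeting p(theta) * L(theta)^beta."""
    theta, trace = 0.0, []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, 0.5)
        log_a = (log_prior(prop) + beta * log_like(prop)
                 - log_prior(theta) - beta * log_like(theta))
        if np.log(rng.random()) < log_a:
            theta = prop
        trace.append(log_like(theta))
    return np.mean(trace[n_iter // 2:])          # discard burn-in half

betas = np.linspace(0.0, 1.0, 11)                # heating-coefficient ladder
vals = [mean_loglike_at(b) for b in betas]
log_ml = sum(0.5 * (vals[i] + vals[i + 1]) * (betas[i + 1] - betas[i])
             for i in range(len(betas) - 1))     # trapezoid rule over beta
print("log marginal likelihood ≈", log_ml)
```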

  1. Estimation of influential points in any data set from coefficient of determination and its leave-one-out cross-validated counterpart.

    PubMed

    Tóth, Gergely; Bodai, Zsolt; Héberger, Károly

    2013-10-01

    The coefficient of determination (R²) and its leave-one-out cross-validated analogue (denoted by Q² or R²cv) are the most frequently published values used to characterize the predictive performance of models. In this article we use R² and Q² in a reversed aspect, to determine uncommon points, i.e. influential points, in any data set. The term (1 - Q²)/(1 - R²) corresponds to the ratio of the predictive residual sum of squares and the residual sum of squares. This ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F-test on the (1 - Q²)/(1 - R²) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q² and R² values and warns model builders to verify the training set, to perform influence analysis, or even to change to robust modeling.
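
    The ratio is cheap to compute; here is a hedged sketch for an ordinary least-squares model (synthetic data, with one influential point planted), returning R², leave-one-out Q², and (1 - Q²)/(1 - R²) = PRESS/RSS.

```python
import numpy as np

rng = np.random.default_rng(9)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0.0, 0.5, 30)
y[0] += 5.0                                     # plant one influential point

def press_rss_ratio(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    press = 0.0
    for i in range(len(y)):                     # leave-one-out refits
        mask = np.arange(len(y)) != i
        b_i, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        press += (y[i] - X[i] @ b_i) ** 2
    tss = np.sum((y - y.mean()) ** 2)
    r2, q2 = 1.0 - rss / tss, 1.0 - press / tss
    return r2, q2, (1.0 - q2) / (1.0 - r2)      # last value = PRESS/RSS

print(press_rss_ratio(X, y))      # a ratio well above 1 flags influential points
```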

  2. The influence of statistical properties of Fourier coefficients on random Gaussian surfaces.

    PubMed

    de Castro, C P; Luković, M; Andrade, R F S; Herrmann, H J

    2017-05-16

    Many examples of natural systems can be described by random Gaussian surfaces. Much can be learned by analyzing the Fourier expansion of the surfaces, from which it is possible to determine the corresponding Hurst exponent and consequently establish the presence of scale invariance. We show that this symmetry is not affected by the distribution of the modulus of the Fourier coefficients. Furthermore, we investigate the role of the Fourier phases of random surfaces. In particular, we show how the surface is affected by a non-uniform distribution of phases.

  3. Urn models for response-adaptive randomized designs: a simulation study based on a non-adaptive randomized trial.

    PubMed

    Ghiglietti, Andrea; Scarale, Maria Giovanna; Miceli, Rosalba; Ieva, Francesca; Mariani, Luigi; Gavazzi, Cecilia; Paganoni, Anna Maria; Edefonti, Valeria

    2018-03-22

    Recently, response-adaptive designs have been proposed in randomized clinical trials to achieve ethical and/or cost advantages by using sequential accrual information collected during the trial to dynamically update the probabilities of treatment assignments. In this context, urn models, in which the probability of assigning patients to treatments is interpreted as the proportion of balls of different colors available in a virtual urn, have been used as response-adaptive randomization rules. We propose the use of Randomly Reinforced Urn (RRU) models in a simulation study based on a published randomized clinical trial on the efficacy of home enteral nutrition in cancer patients after major gastrointestinal surgery. We compare results from the RRU design with those previously published for the non-adaptive approach. We also provide code written in R to implement the RRU design in practice. In detail, we simulate 10,000 trials based on the RRU model in three set-ups with different total sample sizes. We report information on the number of patients allocated to the inferior treatment and on the empirical power of the t-test for the treatment coefficient in the ANOVA model. We carry out a sensitivity analysis to assess the effect of different urn compositions. For each sample size, in approximately 75% of the simulation runs, the number of patients allocated to the inferior treatment by the RRU design is lower, as compared to the non-adaptive design. The empirical power of the t-test for the treatment effect is similar in the two designs.
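
    The authors provide R code; as a language-neutral illustration, here is a rough Python sketch (reinforcement scheme assumed, not taken from the paper) of a two-color reinforced urn: the drawn color decides the arm, and a positive response adds balls of that color, skewing future allocations toward the better arm.

```python
import numpy as np

rng = np.random.default_rng(10)

def rru_trial(n_patients, p_success=(0.6, 0.4), urn=(1.0, 1.0), reward=1.0):
    """Simulate one two-armed trial under a randomly reinforced urn."""
    urn = list(urn)
    allocations = [0, 0]
    for _ in range(n_patients):
        arm = 0 if rng.random() < urn[0] / (urn[0] + urn[1]) else 1
        allocations[arm] += 1
        if rng.random() < p_success[arm]:   # positive response observed
            urn[arm] += reward              # reinforce the drawn color
    return allocations

print(rru_trial(200))   # typically allocates more patients to the better arm 0
```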

  4. Sample size requirements for the design of reliability studies: precision consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies that have not previously been discussed in the literature.

  5. Cluster structure in the correlation coefficient matrix can be characterized by abnormal eigenvalues

    NASA Astrophysics Data System (ADS)

    Nie, Chun-Xiao

    2018-02-01

    In a large number of previous studies, researchers found that some of the eigenvalues of a financial correlation matrix are greater than the values predicted by random matrix theory (RMT). Here, we call these eigenvalues abnormal eigenvalues. In order to reveal the hidden meaning of these abnormal eigenvalues, we study a toy model with cluster structure and find that these eigenvalues are related to the cluster structure of the correlation coefficient matrix. In this paper, model-based experiments show that in most cases, the number of abnormal eigenvalues of the correlation matrix is equal to the number of clusters. In addition, empirical studies show that the sum of the abnormal eigenvalues is related to the clarity of the cluster structure and is negatively correlated with the correlation dimension.
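
    The claim is easy to probe numerically. In the hedged sketch below (my own toy parameterization), variables in the same cluster share a common factor, and the eigenvalues of the sample correlation matrix are compared against the Marčenko-Pastur upper edge λ₊ = (1 + √(N/T))² expected for purely random data.

```python
import numpy as np

rng = np.random.default_rng(11)
N, T, n_clusters, rho = 60, 500, 3, 0.4

# Each variable = shared cluster factor + idiosyncratic noise.
labels = np.repeat(np.arange(n_clusters), N // n_clusters)
factors = rng.normal(size=(T, n_clusters))
data = (np.sqrt(rho) * factors[:, labels]
        + np.sqrt(1.0 - rho) * rng.normal(size=(T, N)))

eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))
lam_plus = (1.0 + np.sqrt(N / T)) ** 2          # Marčenko-Pastur upper edge
print("abnormal eigenvalues:", int(np.sum(eigvals > lam_plus)))  # typically 3
```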

  6. Broken Ergodicity in Ideal, Homogeneous, Incompressible Turbulence

    NASA Technical Reports Server (NTRS)

    Morin, Lee; Shebalin, John; Fu, Terry; Nguyen, Phu; Shum, Victor

    2010-01-01

    We discuss the statistical mechanics of numerical models of ideal homogeneous, incompressible turbulence and their relevance for dissipative fluids and magnetofluids. These numerical models are based on Fourier series, and the relevant statistical theory predicts that the Fourier coefficients of fluid velocity and magnetic fields (if present) are zero-mean random variables. However, numerical simulations clearly show that certain coefficients have a non-zero mean value that can be very large compared to the associated standard deviation. We explain this phenomenon in terms of 'broken ergodicity', which is defined to occur when dynamical behavior does not match ensemble predictions on very long time-scales. We review the theoretical basis of broken ergodicity, apply it to 2-D and 3-D fluid and magnetohydrodynamic simulations of homogeneous turbulence, and show new results from simulations using GPU (graphical processing unit) computers.

  7. Multivariate generalized hidden Markov regression models with random covariates: Physical exercise in an elderly population.

    PubMed

    Punzo, Antonio; Ingrassia, Salvatore; Maruotti, Antonello

    2018-04-22

    A time-varying latent variable model is proposed to jointly analyze multivariate mixed-support longitudinal data. The proposal can be viewed as an extension of hidden Markov regression models with fixed covariates (HMRMFCs), which are the state of the art for modelling longitudinal data, with a special focus on the underlying clustering structure. HMRMFCs are inadequate for applications in which a clustering structure can be identified in the distribution of the covariates, as the clustering is independent of the covariates distribution. Here, hidden Markov regression models with random covariates are introduced by explicitly specifying state-specific distributions for the covariates, with the aim of improving the recovery of the clusters in the data with respect to the fixed-covariates paradigm. The class of hidden Markov regression models with random covariates is defined with a focus on the exponential family, in a generalized linear model framework. Model identifiability conditions are sketched, an expectation-maximization algorithm is outlined for parameter estimation, and various implementation and operational issues are discussed. Properties of the estimators of the regression coefficients, as well as of the hidden path parameters, are evaluated through simulation experiments and compared with those of HMRMFCs. The method is applied to physical activity data. Copyright © 2018 John Wiley & Sons, Ltd.

  8. Defining Dogma: Quantifying Crystalloid Hemodilution in a Prospective Randomized Control Trial with Blood Donation as a Model for Hemorrhage.

    PubMed

    Ross, Samuel Wade; Christmas, A Britton; Fischer, Peter E; Holway, Haley; Seymour, Rachel; Huntington, Ciara R; Heniford, B Todd; Sing, Ronald F

    2018-06-04

    The concept of hemodilution after blood loss and crystalloid infusion is a surgical maxim that remains unproven in humans. We sought to quantify the effect of hemodilution after crystalloid administration in voluntary blood donors as a model for acute hemorrhage. A prospective, randomized control trial was conducted in conjunction with community blood drives. Donors were randomized to receive no IV fluid (noIVF), two liters of normal saline (NS), or two liters of lactated Ringer's (LR) after blood donation. Blood samples were taken before donation of 500 mL of blood, immediately after donation, and following IV fluid administration. Hemoglobin (Hgb) was measured at each time point. Hgb values between time points were compared between groups using standard statistical tests and the Bonferroni correction for multiple comparisons; statistical significance was set at p ≤ 0.0167. Of the 165 patients consented, 157 completed the study. Average pre-donation Hgb was 14.3 g/dL. There was no difference in the mean Hgb levels after blood donation between the three groups (p > 0.05). Compared to the control group, there was a significant drop in Hgb in the crystalloid-infused groups from the post-donation level to post-resuscitation (13.2 vs 12.1 vs 12.2 g/dL, p < 0.0001). A formula was created to predict hemoglobin levels from a given estimated blood loss (EBL) and volume replacement (VR): hemodilution Hgb = pre-donation Hgb - hemorrhage Hgb drop - equilibration Hgb drop - resuscitation Hgb drop = pre-donation Hgb - (EBL/TBV)*l - (EBL/TBV)*h - (VR/TBV)*r, where TBV is total blood volume, l = 5.111 g/dL is the blood loss coefficient, h = 6.722 g/dL is the equilibration coefficient, and r = 2.617 g/dL is the resuscitation coefficient. This study proves the concept of hemodilution and derives a mathematical relationship between blood loss and resuscitation. These data may help to estimate the response of hemoglobin levels to blood loss and fluid resuscitation in clinical practice. Copyright © 2018. Published by Elsevier Inc.
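
    The reported formula is straightforward to apply; the sketch below transcribes it directly (coefficients as published; the function and variable names are mine), and reproduces the study's post-resuscitation value for the average donor.

```python
def predicted_hgb(pre_hgb, ebl, vr, tbv, l=5.111, h=6.722, r=2.617):
    """Hemodilution Hgb (g/dL) after estimated blood loss ebl and volume
    replacement vr, with total blood volume tbv (all in the same units).
    l, h, r are the blood loss, equilibration, and resuscitation
    coefficients reported in the study (g/dL)."""
    return pre_hgb - (ebl / tbv) * l - (ebl / tbv) * h - (vr / tbv) * r

# Average donor: 14.3 g/dL pre-donation, 500 mL donation, 2 L crystalloid,
# assuming a 5 L total blood volume.
print(round(predicted_hgb(14.3, ebl=500, vr=2000, tbv=5000), 1))  # ~12.1 g/dL
```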

  9. Location and Navigation with Ultra-Wideband Signals

    DTIC Science & Technology

    2012-06-07

    [Fragmented report excerpt; recoverable section headings: Coherent vs. Noncoherent Combination; Ranging with Multi-Band UWB Signals: Random Phase Rotation; MB-OFDM System Model.] Two approaches are adopted to combine the channel information from subbands: coherent combining and noncoherent combining. For coherent combining, estimates of the channel frequency response coefficients for all subbands are jointly used to estimate the time-domain channel with Eq. (33).

  10. Mixed models, linear dependency, and identification in age-period-cohort models.

    PubMed

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts, or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just-identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified, without introducing an additional constraint. I label this identification statistical model identification, show how it comes about in mixed models, and show why the choice of which effects are treated as fixed and which as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Modeled streamflow metrics on small, ungaged stream reaches in the Upper Colorado River Basin

    USGS Publications Warehouse

    Reynolds, Lindsay V.; Shafroth, Patrick B.

    2016-01-20

    Modeling streamflow is an important approach for understanding landscape-scale drivers of flow and estimating flows where there are no streamgage records. In this study, conducted by the U.S. Geological Survey in cooperation with Colorado State University, the objectives were to model streamflow metrics on small, ungaged streams in the Upper Colorado River Basin and to identify streams that are potentially threatened with becoming intermittent under drier climate conditions. The Upper Colorado River Basin is a region that is critical for water resources and is also projected to experience large future shifts toward a drier climate. A random forest modeling approach was used to model the relationship between streamflow metrics and environmental variables. Flow metrics were then projected to ungaged reaches in the Upper Colorado River Basin using environmental variables for each stream, represented as raster cells, in the basin. Last, the projected random forest models of minimum flow coefficient of variation and specific mean daily flow were used to highlight streams with a minimum flow coefficient of variation greater than 61.84 percent and a specific mean daily flow less than 0.096; these streams are suggested to be the most threatened with shifting to intermittent flow regimes under drier climate conditions. Map projection products can help scientists, land managers, and policymakers understand current hydrology in the Upper Colorado River Basin and make informed decisions regarding water resources. With knowledge of which streams are likely to undergo significant drying in the future, managers and scientists can plan for stream-dependent ecosystems and human water users.

  12. Spectral dependence of backscattering coefficient of mixed phase clouds over West Africa measured with two-wavelength Raman polarization lidar: Features attributed to ice-crystals corner reflection

    NASA Astrophysics Data System (ADS)

    Veselovskii, I.; Goloub, P.; Podvin, T.; Tanre, D.; Ansmann, A.; Korenskiy, M.; Borovoi, A.; Hu, Q.; Whiteman, D. N.

    2017-11-01

    Existing models predict that corner reflection (CR) of laser radiation by simple ice crystals of perfect shape, such as hexagonal columns or plates, can provide a significant contribution to ice cloud backscattering. However, in real clouds the CR effect may be suppressed by crystal deformation and surface roughness. In contrast to the extinction coefficient, which is spectrally independent, consideration of the diffraction associated with CR results in a spectral dependence of the backscattering coefficient. Thus, by measuring the spectral dependence of the cloud backscattering coefficient, the contribution of CR can be identified. The paper presents the results of profiling the backscattering coefficient (β) and particle depolarization ratio (δ) of ice and mixed-phase clouds over West Africa by means of a two-wavelength polarization Mie-Raman lidar operated at 355 nm and 532 nm during the SHADOW field campaign. The lidar observations were performed at a slant angle of 43 degrees off zenith, so that CR from both randomly oriented crystals and oriented plates could be analyzed. For most of the observations the cloud backscatter color ratio β355/β532 was close to 1.0, and no spectral features that might indicate the presence of CR from randomly oriented crystals were revealed. Still, in two measurement sessions we observed an increase of the backscatter color ratio to a value of nearly 1.3, simultaneously with a decrease of the spectral depolarization ratio δ355/δ532 from 1.0 to 0.8, inside layers containing precipitating ice crystals. We attribute these changes in optical properties to corner reflections by horizontally oriented ice plates.

  13. Random parameter models for accident prediction on two-lane undivided highways in India.

    PubMed

    Dinu, R R; Veeraragavan, A

    2011-02-01

    Generalized linear modeling (GLM), with the assumption of a Poisson or negative binomial error structure, has been widely employed in road accident modeling. A number of explanatory variables related to traffic, road geometry, and environment that contribute to accident occurrence have been identified, and accident prediction models have been proposed. The accident prediction models reported in the literature largely employ the fixed parameter modeling approach, where the magnitude of influence of an explanatory variable is considered to be fixed for any observation in the population. Similar models have been proposed for Indian highways too, which include additional variables representing traffic composition. Mixed traffic on Indian highways exhibits considerable internal variability, ranging from differences in vehicle types to variability in driver behavior. This could result in variability in the effect of explanatory variables on accidents across locations. Random parameter models, which can capture some of this variability, are expected to be more appropriate for the Indian situation. The present study is an attempt to employ random parameter modeling for accident prediction on two-lane undivided rural highways in India. Three years of accident history, from nearly 200 km of highway segments, is used to calibrate and validate the models. The results of the analysis suggest that the model coefficients for traffic volume; the proportions of cars, motorized two-wheelers, and trucks in traffic; driveway density; and horizontal and vertical curvature are randomly distributed across locations. The paper concludes with a discussion of the modeling results and the limitations of the present study. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. Broken Ergodicity in MHD Turbulence in a Spherical Domain

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.; Wang, Yifan

    2011-01-01

    Broken ergodicity (BE) occurs in Fourier-method numerical simulations of ideal, homogeneous, incompressible magnetohydrodynamic (MHD) turbulence. Although naive statistical theory predicts that the Fourier coefficients of fluid velocity and magnetic field are zero-mean random variables, numerical simulations clearly show that low-wave-number coefficients have non-zero mean values that can be very large compared to the associated standard deviation. In other words, large-scale coherent structure (i.e., broken ergodicity) in homogeneous MHD turbulence can spontaneously grow out of random initial conditions. Eigenanalysis of the modal covariance matrices in the probability density functions of ideal statistical theory leads to a theoretical explanation of the observed BE in homogeneous MHD turbulence. Since dissipation is minimal at the largest scales, BE is also relevant for resistive magnetofluids, as evidenced in numerical simulations. Here, we move beyond model magnetofluids confined by periodic boxes to examine BE in rotating magnetofluids in spherical domains, using spherical harmonic expansions along with suitable boundary conditions. We present theoretical results for 3-D and 2-D spherical models and also present computational results from dynamical simulations of 2-D MHD turbulence on a rotating spherical surface. MHD turbulence on a 2-D sphere is affected by Coriolis forces, while MHD turbulence on a 2-D plane is not, so that 2-D spherical models are a useful (and simpler) intermediate stage on the path to understanding the much more complex 3-D spherical case.

  15. Statistics of biospeckles with application to diagnostics of periodontitis

    NASA Astrophysics Data System (ADS)

    Starukhin, Pavel Y.; Kharish, Natalia A.; Sedykh, Alexey V.; Ulyanov, Sergey S.; Lepilin, Alexander V.; Tuchin, Valery V.

    1999-04-01

    Results of Monte Carlo simulations of Doppler shift are presented for a model of a random medium containing moving particles. Single-layered and two-layered configurations of the medium are considered. The Doppler shift of the frequency of laser light is investigated as a function of parameters such as the absorption coefficient, the scattering coefficient, and the thickness of the medium. The possibility of applying speckle interferometry for diagnostics in dentistry has been analyzed. The problem of standardizing the measurement procedure has been studied. The deviation of the output characteristics of a Doppler system for blood microcirculation measurements has been investigated. The dependence of the form of the Doppler spectrum on the number of speckles integrated by the aperture has been studied in in vivo experiments.

  16. MATIN: A Random Network Coding Based Framework for High Quality Peer-to-Peer Live Video Streaming

    PubMed Central

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is probably because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that ensures there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. In this regard, peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay. PMID:23940530

  17. Eruption patterns of the Chilean volcanoes Villarrica, Llaima, and Tupungatito

    NASA Astrophysics Data System (ADS)

    Muñoz, Miguel

    1983-09-01

    The historical eruption records of three Chilean volcanoes have been subjected to many statistical tests, and none has been found to differ significantly from random, or Poissonian, behaviour. The statistical analysis shows rough conformity with the descriptions determined from the eruption rate functions. It is possible that a constant eruption rate describes the activity of Villarrica; Llaima and Tupungatito present complex eruption rate patterns that appear, however, to have no statistical significance. Questions related to loading and extinction processes and to the existence of shallow secondary magma chambers to which magma is supplied from a deeper system are also addressed. The analysis and the computation of the serial correlation coefficients indicate that the three series may be regarded as stationary renewal processes. None of the test statistics indicates rejection of the Poisson hypothesis at a level less than 5%, but the coefficient of variation for the eruption series at Llaima is significantly different from the value expected for a Poisson process. Also, the estimates of the normalized spectrum of the counting process for the three series suggest a departure from the random model, but the deviations are not found to be significant at the 5% level. Kolmogorov-Smirnov and chi-squared test statistics, applied directly to ascertain the probability P with which the random Poisson model fits the data, indicate that there is significant agreement in the case of Villarrica (P=0.59) and Tupungatito (P=0.3). Even though the P-value for Llaima is a marginally significant 0.1 (which is equivalent to rejecting the Poisson model at the 90% confidence level), the series suggests that nonrandom features are possibly present in the eruptive activity of this volcano.

  18. The Delicate Analysis of Short-Term Load Forecasting

    NASA Astrophysics Data System (ADS)

    Song, Changwei; Zheng, Yuan

    2017-05-01

    This paper proposes a new method for short-term load forecasting based on the similar-day method, the correlation coefficient, and the Fast Fourier Transform (FFT), to achieve a precise analysis of load variation from three aspects (typical day, correlation coefficient, spectral analysis) and three dimensions (the time dimension, the industry dimension, and the main factors influencing the load characteristics, such as national policies, regional economy, holidays, and electricity consumption). First, the one-class SVM algorithm is adopted to select the typical day. Second, the correlation coefficient method is used to obtain the direction and strength of the linear relationship between two random variables, which can reflect the influence of customer macro policy and the scale of production on the electricity price. Third, a Fourier-transform residual error correction model is proposed to extract the nature of the load from the residual error. Finally, simulation results indicate the validity and engineering practicability of the proposed method.

  19. Linearity and sex-specificity of impact force prediction during a fall onto the outstretched hand using a single-damper-model.

    PubMed

    Kawalilak, C E; Lanovaz, J L; Johnston, J D; Kontulainen, S A

    2014-09-01

    To assess the linearity and sex-specificity of damping coefficients used in a single-damper model (SDM) when predicting impact forces during the worst-case falling scenario from fall heights up to 25 cm. Using 3-dimensional motion tracking and an integrated force plate, impact forces and impact velocities were assessed from 10 young adults (5 males; 5 females) falling from planted knees onto outstretched arms, from a random order of drop heights: 3, 5, 7, 10, 15, 20, and 25 cm. We assessed the linearity and sex-specificity of the relationship between impact forces and impact velocities across all fall heights using an analysis-of-variance linearity test and linear regression, respectively. Significance was accepted at P<0.05. The association between impact forces and impact velocities up to 25 cm was linear (P=0.02). Damping coefficients appeared sex-specific (males: 627 Ns/m, R²=0.70; females: 421 Ns/m, R²=0.81; sexes combined: 532 Ns/m, R²=0.61). A linear damping coefficient used in the SDM proved valid for predicting impact forces from fall heights up to 25 cm. Results suggested the use of sex-specific damping coefficients when estimating impact force using the SDM and calculating the factor of risk for wrist fractures.

  20. Anomalous Anderson localization

    NASA Astrophysics Data System (ADS)

    Deng, Wenji

    2000-04-01

    We propose a generalized Anderson model and study numerically the localization phenomena in one dimension. In our model, not all sites carry on-site random energy. The on-site energy εn of the nth site is assigned as follows: if n + P - 1 = 0 (mod P), where P is a positive integer, εn is randomly distributed between -W/2 and W/2; on the other lattice sites, the site energy is fixed, say εn = 0. The localization length ξ, defined through |t|² = e^(-2L/ξ), where t is the transmission coefficient, is calculated using the transfer matrix method. It is found that the single-electron states with wave vectors k = π/P, 2π/P, ..., (P-1)π/P are no longer localized as in the standard Anderson model. Compared with the smooth localization-length spectrum of the Anderson model, P-1 sharp peaks appear, periodically located at these P-1 values of the wave vector, in the localization-length spectrum of the generalized Anderson model with parameter P.
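
    A hedged sketch of the transfer-matrix calculation for this model (my own discretization; the tight-binding hopping is set to 1): the localization length is the inverse Lyapunov exponent of the product of 2x2 transfer matrices, with disorder placed only on every P-th site. For P = 4, the energy E = 2cos(π/2) = 0 corresponds to one of the special wave vectors and should show a strongly enhanced ξ.

```python
import numpy as np

rng = np.random.default_rng(12)

def localization_length(E, P=4, W=1.0, n_sites=100_000):
    """xi = 1/gamma from psi_{n+1} = (E - eps_n) psi_n - psi_{n-1}, with
    eps_n uniform on [-W/2, W/2] only when n = 0 (mod P), else eps_n = 0."""
    psi = np.array([1.0, 0.0])                  # (psi_n, psi_{n-1})
    log_growth = 0.0
    for n in range(n_sites):
        eps = rng.uniform(-W / 2, W / 2) if n % P == 0 else 0.0
        psi = np.array([(E - eps) * psi[0] - psi[1], psi[0]])
        norm = np.hypot(psi[0], psi[1])
        log_growth += np.log(norm)              # accumulate growth rate
        psi /= norm                             # renormalize against overflow
    return n_sites / log_growth                 # xi = 1 / Lyapunov exponent

# Special energy E = 0 (k = pi/2 = 2*pi/P) vs. a nearby generic energy:
print(localization_length(0.0), localization_length(0.3))
```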

  1. Resistance distribution in the hopping percolation model.

    PubMed

    Strelniker, Yakov M; Havlin, Shlomo; Berkovits, Richard; Frydman, Aviad

    2005-07-01

    We study the distribution function P(ρ) of the effective resistance ρ in two- and three-dimensional random resistor networks of linear size L in the hopping percolation model. In this model each bond has a conductivity taken from an exponential form σ ∝ exp(-κr), where κ is a measure of disorder and r is a random number, 0 ≤ r ≤ 1. We find that in both the usual strong-disorder regime L/κ^ν > 1 (not sensitive to removal of any single bond) and the extreme-disorder regime L/κ^ν < 1 (very sensitive to such a removal) the distribution depends only on L/κ^ν and can be well approximated by a log-normal function with dispersion b·κ^ν/L, where b is a coefficient which depends on the type of lattice, and ν is the correlation critical exponent.

  2. "L"-Bivariate and "L"-Multivariate Association Coefficients. Research Report. ETS RR-08-40

    ERIC Educational Resources Information Center

    Kong, Nan; Lewis, Charles

    2008-01-01

    Given a system of multiple random variables, a new measure called the "L"-multivariate association coefficient is defined using (conditional) entropy. Unlike traditional correlation measures, the L-multivariate association coefficient measures the multiassociations or multirelations among the multiple variables in the given system; that…

  3. Constraining the interior density profile of a Jovian planet from precision gravity field data

    NASA Astrophysics Data System (ADS)

    Movshovitz, Naor; Fortney, Jonathan J.; Helled, Ravit; Hubbard, William B.; Thorngren, Daniel; Mankovich, Chris; Wahl, Sean; Militzer, Burkhard; Durante, Daniele

    2017-10-01

    The external gravity field of a planetary body is determined by the distribution of mass in its interior. Therefore, a measurement of the external field, properly interpreted, tells us about the interior density profile, ρ(r), which in turn can be used to constrain the composition of the interior and thereby learn about the formation mechanism of the planet. Planetary gravity fields are usually described by the coefficients in an expansion of the gravitational potential. Recently, high-precision measurements of these coefficients for Jupiter and Saturn have been made by the radio science instruments on the Juno and Cassini spacecraft, respectively. The resulting coefficients come with associated uncertainties. While the task of matching a given density profile with a given set of gravity coefficients is relatively straightforward, the question of how best to account for the uncertainty is not. In essentially all prior work on matching models to gravity field data, inferences about planetary structure have rested on imperfect knowledge of the H/He equation of state and on the assumption of an adiabatic interior. Here we wish to vastly expand the phase space of such calculations. We present a framework for describing all the possible interior density structures of a Jovian planet, constrained only by a given set of gravity coefficients and their associated uncertainties. Our approach is statistical. We produce a random sample of ρ(a) curves drawn from the underlying (and unknown) probability distribution of all curves, where ρ is the density on an interior level surface with equatorial radius a. Since the resulting set of density curves is a random sample, that is, curves appear with frequency proportional to the likelihood of their being consistent with the measured gravity, we can compute probability distributions for any quantity that is a function of ρ, such as central pressure, oblateness, core mass and radius, etc. Our approach is also Bayesian, in that it can utilize any prior assumptions about the planet's interior, as necessary, without being overly constrained by them. We demonstrate this approach with a sample of Jupiter interior models based on recent Juno data and discuss prospects for Saturn.

  4. Entanglement dynamics in random media

    NASA Astrophysics Data System (ADS)

    Menezes, G.; Svaiter, N. F.; Zarro, C. A. D.

    2017-12-01

    We study how the entanglement dynamics between two-level atoms is impacted by random fluctuations of the light cone. In our model the two-atom system is envisaged as an open system coupled with an electromagnetic field in the vacuum state. We employ the quantum master equation in the Born-Markov approximation in order to describe the completely positive time evolution of the atomic system. We restrict our investigations to the situation in which the atoms are coupled individually to two spatially separated cavities, one of which displays the emergence of light-cone fluctuations. In such a disordered cavity, we assume that the coefficients of the Klein-Gordon equation are random functions of the spatial coordinates. The disordered medium is modeled by a centered, stationary, and Gaussian process. We demonstrate that disorder has the effect of slowing down the entanglement decay. We conjecture that in a strong-disorder environment the mean life of entangled states can be enhanced in such a way as to almost completely suppress quantum nonlocal decoherence.

  5. Local Geostatistical Models and Big Data in Hydrological and Ecological Applications

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios

    2015-04-01

    The advent of the big data era creates new opportunities for environmental and ecological modelling but also presents significant challenges. The availability of remote sensing images and low-cost wireless sensor networks means that spatiotemporal environmental data now cover larger spatial domains at higher spatial and temporal resolution for longer time windows. Handling such voluminous data presents several technical and scientific challenges. In particular, the geostatistical methods used to process spatiotemporal data need to overcome the dimensionality curse associated with the need to store and invert large covariance matrices. There are various mathematical approaches for addressing the dimensionality problem, including change of basis, dimensionality reduction, hierarchical schemes, and local approximations. We present a Stochastic Local Interaction (SLI) model that can be used to model local correlations in spatial data. SLI is a random field model suitable for data on discrete supports (i.e., regular lattices or irregular sampling grids). The degree of localization is determined by means of kernel functions and appropriate bandwidths. The strength of the correlations is determined by means of coefficients. In the "plain vanilla" version the parameter set involves scale and rigidity coefficients as well as a characteristic length; the latter, in connection with the rigidity coefficient, determines the correlation length of the random field. The SLI model is based on statistical field theory and extends previous research on Spartan spatial random fields [2,3] from continuum spaces to explicitly discrete supports. The SLI kernel functions employ adaptive bandwidths learned from the sampling spatial distribution [1]. The SLI precision matrix is expressed explicitly in terms of the model parameters and the kernel function. Hence, covariance matrix inversion is not necessary for parameter inference based on leave-one-out cross validation. This property helps to overcome a significant computational bottleneck of geostatistical models due to the poor scaling of matrix inversion [4,5]. We present applications to real and simulated data sets, including the Walker Lake data, and we investigate the SLI performance using various statistical cross validation measures. References: [1] T. Hofmann, B. Schölkopf, A. J. Smola, Annals of Statistics, 36, 1171-1220 (2008). [2] D. T. Hristopulos, SIAM Journal on Scientific Computing, 24(6): 2125-2162 (2003). [3] D. T. Hristopulos and S. N. Elogne, IEEE Transactions on Signal Processing, 57(9): 3475-3487 (2009). [4] G. Jona Lasinio, G. Mastrantonio, and A. Pollice, Statistical Methods and Applications, 22(1): 97-112 (2013). [5] Y. Sun, B. Li, and M. G. Genton, Geostatistics for large datasets. In: Advances and Challenges in Space-time Modelling of Natural Events, Lecture Notes in Statistics, pp. 55-77. Springer, Berlin-Heidelberg (2012).
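
    The claim that no covariance inversion is needed for leave-one-out cross validation holds for any Gaussian model specified through its precision matrix Q: the conditional mean of x_i given all other values is −(1/Q_ii) Σ_{j≠i} Q_ij x_j. A hedged sketch with a hypothetical tridiagonal "local interaction" precision matrix (not the actual SLI construction):

```python
# Sketch: leave-one-out predictions directly from a precision matrix Q.
# For a zero-mean Gaussian field, E[x_i | x_-i] = x_i - (Q x)_i / Q_ii,
# so no covariance inversion is needed for LOO cross validation.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical sparse "local interaction" precision matrix (tridiagonal).
Q = (np.diag(np.full(n, 2.2))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))

# The inversion below is only used to *simulate* one demo sample x ~ N(0, Q^-1);
# the LOO step itself never inverts anything.
x = np.linalg.cholesky(np.linalg.inv(Q)) @ rng.normal(size=n)

loo = x - (Q @ x) / np.diag(Q)               # LOO conditional means
print(f"LOO cross-validation RMSE: {np.sqrt(np.mean((x - loo) ** 2)):.3f}")
```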

  6. Systematic review using meta-analyses to estimate dose-response relationships between iodine intake and biomarkers of iodine status in different population groups.

    PubMed

    Ristić-Medić, Danijela; Dullemeijer, Carla; Tepsić, Jasna; Petrović-Oggiano, Gordana; Popović, Tamara; Arsić, Aleksandra; Glibetić, Marija; Souverein, Olga W; Collings, Rachel; Cavelaars, Adriënne; de Groot, Lisette; van't Veer, Pieter; Gurinović, Mirjana

    2014-03-01

    The objective of this systematic review was to identify studies investigating iodine intake and biomarkers of iodine status, to assess the data of the selected studies, and to estimate dose-response relationships using meta-analysis. All randomized controlled trials, prospective cohort studies, nested case-control studies, and cross-sectional studies that supplied or measured dietary iodine and measured iodine biomarkers were included. The overall pooled regression coefficient (β) and the standard error of β were calculated by random-effects meta-analysis on a double-log scale, using the calculated intake-status regression coefficient (β) for each individual study. The results of pooled randomized controlled trials indicated that the doubling of dietary iodine intake increased urinary iodine concentrations by 14% in children and adolescents, by 57% in adults and the elderly, and by 81% in pregnant women. The dose-response relationship between iodine intake and biomarkers of iodine status indicated a 12% decrease in thyroid-stimulating hormone and a 31% decrease in thyroglobulin in pregnant women. The model of dose-response quantification used to describe the relationship between iodine intake and biomarkers of iodine status may be useful for providing complementary evidence to support recommendations for iodine intake in different population groups.
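
    The pooling step described here is a standard random-effects meta-analysis; below is a sketch of the DerSimonian-Laird version on a double-log scale, with made-up per-study coefficients standing in for the extracted data.

```python
# Sketch: DerSimonian-Laird random-effects pooling of per-study
# intake-status regression coefficients beta (double-log scale).
# The betas and standard errors below are illustrative, not study data.
import numpy as np

beta = np.array([0.62, 0.81, 0.55, 0.74])    # hypothetical per-study slopes
se = np.array([0.10, 0.15, 0.12, 0.09])

w = 1.0 / se**2                              # fixed-effect weights
beta_fe = np.sum(w * beta) / np.sum(w)
Q = np.sum(w * (beta - beta_fe) ** 2)        # Cochran's Q heterogeneity
df = len(beta) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (se**2 + tau2)                  # random-effects weights
beta_re = np.sum(w_re * beta) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled beta = {beta_re:.3f} (SE {se_re:.3f})")
# On a double-log scale, doubling intake multiplies the biomarker by 2**beta:
print(f"doubling intake -> biomarker x {2 ** beta_re:.2f}")
```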

  7. Certified Reduced Basis Model Characterization: a Frequentistic Uncertainty Framework

    DTIC Science & Technology

    2011-01-11

    14) It then follows that the Legendre coefficient random vector, (Z[0], Z[1], …, Z[I])(ω), is (I+1)-variate normally distributed with mean (δ… I. Note each two-sided inequality represents two constraints. 3. PDE-Based Statistical Inference: We now proceed to the parametrized partial… appearance of defects or geometric variations relative to an initial baseline, or perhaps manufacturing departures from nominal specifications; if our…

  8. How to Quantify Deterministic and Random Influences on the Statistics of the Foreign Exchange Market

    NASA Astrophysics Data System (ADS)

    Friedrich, R.; Peinke, J.; Renner, Ch.

    2000-05-01

    It is shown that price changes of the U.S. dollar-German mark exchange rates upon different delay times can be regarded as a stochastic Markovian process. Furthermore, we show how Kramers-Moyal coefficients can be estimated from the empirical data. Finally, we present an explicit Fokker-Planck equation which models very precisely the empirical probability distributions, in particular, their non-Gaussian heavy tails.
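
    The estimation step referred to here is the conditional-moment estimate of the first two Kramers-Moyal coefficients. A sketch, with a synthetic Ornstein-Uhlenbeck series standing in for the exchange-rate increments:

```python
# Sketch: estimate Kramers-Moyal drift D1(x) and diffusion D2(x) from a
# time series via conditional moments of increments; synthetic
# Ornstein-Uhlenbeck data stand in for the exchange-rate records.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):                       # OU process: dx = -x dt + dW
    x[t + 1] = x[t] - x[t] * dt + np.sqrt(dt) * rng.normal()

bins = np.linspace(-2, 2, 21)
centers = 0.5 * (bins[:-1] + bins[1:])
dx = np.diff(x)
which = np.digitize(x[:-1], bins) - 1        # bin index of the start point
D1 = np.full(len(centers), np.nan)
D2 = np.full(len(centers), np.nan)
for k in range(len(centers)):
    inc = dx[which == k]
    if inc.size > 100:
        D1[k] = inc.mean() / dt              # drift coefficient
        D2[k] = (inc**2).mean() / (2 * dt)   # diffusion coefficient
print("D1 should be ~ -x:", np.round(D1[::5], 2))
print("D2 should be ~ 0.5:", np.round(D2[::5], 2))
```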

  9. Random diffusivity from stochastic equations: comparison of two models for Brownian yet non-Gaussian diffusion

    NASA Astrophysics Data System (ADS)

    Sposini, Vittoria; Chechkin, Aleksei V.; Seno, Flavio; Pagnini, Gianni; Metzler, Ralf

    2018-04-01

    A considerable number of systems have recently been reported in which Brownian yet non-Gaussian dynamics was observed. These are processes characterised by a linear growth in time of the mean squared displacement, yet the probability density function of the particle displacement is distinctly non-Gaussian, and often of exponential (Laplace) shape. This apparently ubiquitous behaviour observed in very different physical systems has been interpreted as resulting from diffusion in inhomogeneous environments and mathematically represented through a variable, stochastic diffusion coefficient. Indeed different models describing a fluctuating diffusivity have been studied. Here we present a new view of the stochastic basis describing time-dependent random diffusivities within a broad spectrum of distributions. Concretely, our study is based on the very generic class of the generalised Gamma distribution. Two models for the particle spreading in such random diffusivity settings are studied. The first belongs to the class of generalised grey Brownian motion while the second follows from the idea of diffusing diffusivities. The two processes exhibit significant characteristics which reproduce experimental results from different biological and physical systems. We promote these two physical models for the description of stochastic particle motion in complex environments.
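
    The superstatistical mechanism can be illustrated directly: draw a diffusivity D from a generalised Gamma law (obtained as a power of a Gamma variate), then draw a Gaussian displacement conditional on D; the mixture develops the heavy, exponential-like tails described above. A sketch under these assumptions:

```python
# Sketch: Brownian yet non-Gaussian displacements from a random diffusivity.
# If X ~ Gamma(k, 1), then D = X**(1/p) follows a generalised Gamma law;
# displacements are Gaussian conditional on D, Laplace-like after mixing.
import numpy as np

rng = np.random.default_rng(0)
n, t = 100_000, 1.0
k, p = 1.0, 1.0                              # k = p = 1: exponential D
D = rng.gamma(k, 1.0, size=n) ** (1.0 / p)
x = rng.normal(0.0, np.sqrt(2 * D * t))      # displacement at time t

# Kurtosis: 3 for a Gaussian, ~6 for a Laplace (exponential tails).
kurt = np.mean(x**4) / np.mean(x**2) ** 2
print(f"excess kurtosis = {kurt - 3:.2f} (0 would be Gaussian)")
```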

  10. Optimal sample sizes for the design of reliability studies: power consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
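
    For orientation only, here is the generic Fisher-transformation sample-size recipe for a correlation-type hypothesis test; the paper's ICC version modifies the transform and its variance, so this sketch shows the structure of the approximation rather than the exact procedure evaluated in the study.

```python
# Generic Fisher-z approximate sample size for testing rho = rho0 vs rho1.
# (Hedged: the ICC variant uses a modified transform and variance; this
# shows only the structure of such approximations.)
from math import atanh, ceil
from scipy.stats import norm

def fisher_n(rho0, rho1, alpha=0.05, power=0.80):
    z0, z1 = atanh(rho0), atanh(rho1)        # Fisher transforms
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ceil(((za + zb) / (z1 - z0)) ** 2 + 3)

print(fisher_n(0.6, 0.8))   # subjects needed to detect 0.8 against null 0.6
```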

  11. Genome-wide association study on legendre random regression coefficients for the growth and feed intake trajectory on Duroc Boars.

    PubMed

    Howard, Jeremy T; Jiao, Shihui; Tiezzi, Francesco; Huang, Yijian; Gray, Kent A; Maltecca, Christian

    2015-05-30

    Feed intake and growth are economically important traits in swine production. Previous genome-wide association studies (GWAS) have utilized average daily gain or daily feed intake to identify regions that impact growth and feed intake across time. The use of longitudinal models in GWAS studies, such as random regression, allows SNPs having a heterogeneous effect across the trajectory to be characterized. The objective of this study is therefore to conduct a single-step GWAS (ssGWAS) on the animal polynomial coefficients for feed intake and growth. Corrected daily feed intake (DFIAdj) and average daily weight measurements (DBWAvg) on 8981 (n=525,240 observations) and 5643 (n=283,607 observations) animals were utilized in a random regression model using Legendre polynomials (order=2) and a relationship matrix that included genotyped and un-genotyped animals. A ssGWAS was conducted on the animal polynomial coefficients (intercept, linear and quadratic) for animals with genotypes (DFIAdj: n=855; DBWAvg: n=590). Regions were characterized based on the variance of 10-SNP sliding-window GEBV (WGEBV). A bootstrap analysis (n=1000) was conducted to declare significance. Heritability estimates across the trajectory ranged from 0.34 to 0.52 for DBWAvg and from 0.07 to 0.23 for DFIAdj. Genetic correlations across age classes were large and positive for both DBWAvg and DFIAdj, although for both traits age classes at the beginning of the trajectory had only small to moderate genetic correlations with age classes towards the end. The WGEBV variance explained by significant regions (P<0.001) for each polynomial coefficient ranged from 0.2 to 0.9% for DBWAvg and from 0.3 to 1.01% for DFIAdj. The WGEBV variance explained by significant regions for the trajectory was 1.54 and 1.95% for DBWAvg and DFIAdj, respectively. Both traits identified candidate genes with functions related to metabolite and energy homeostasis, glucose and insulin signaling, and behavior. We have identified regions of the genome that impact the intercept, linear and quadratic terms for DBWAvg and DFIAdj. These results provide preliminary evidence that individual growth and feed intake trajectories are impacted by different regions of the genome at different times.
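
    The Legendre covariables underlying such a random regression are easy to construct. Below is a sketch of the order-2 (intercept, linear, quadratic) design matrix for ages rescaled to [−1, 1], using the normalization common in test-day and growth-trajectory models (an assumption, not taken from this paper):

```python
# Sketch: order-2 Legendre covariables for a random regression model.
# Ages are rescaled to [-1, 1]; columns give the intercept, linear and
# quadratic regressors whose animal-specific coefficients are the
# targets of a ssGWAS like the one described above.
import numpy as np
from numpy.polynomial import legendre

age = np.array([60, 90, 120, 150, 180], dtype=float)   # days on test
t = 2 * (age - age.min()) / (age.max() - age.min()) - 1

# Normalized Legendre polynomials phi_k(t) = sqrt((2k+1)/2) * P_k(t).
Phi = np.column_stack([
    np.sqrt((2 * k + 1) / 2) * legendre.legval(t, [0] * k + [1])
    for k in range(3)
])
print(np.round(Phi, 3))   # design matrix: intercept / linear / quadratic
```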

  12. Robustness of networks formed from interdependent correlated networks under intentional attacks

    NASA Astrophysics Data System (ADS)

    Liu, Long; Meng, Ke; Dong, Zhaoyang

    2018-02-01

    We study the problem of intentional attacks targeting interdependent networks generated with a known degree distribution (in-degree oriented model) or a known distribution of interlinks (out-degree oriented model). In both models, each node's degree is correlated with the number of its links that connect to the other network. For both models, varying the correlation coefficient has a significant effect on the robustness of a system undergoing random attacks or attacks targeting nodes with low degree. For a system with an assortative relationship between in-degree and out-degree, reducing the broadness of the networks' degree distributions can increase the resistance of the system against intentional attacks.

  13. Bayesian hierarchical models for cost-effectiveness analyses that use data from cluster randomized trials.

    PubMed

    Grieve, Richard; Nixon, Richard; Thompson, Simon G

    2010-01-01

    Cost-effectiveness analyses (CEA) may be undertaken alongside cluster randomized trials (CRTs) where randomization is at the level of the cluster (for example, the hospital or primary care provider) rather than the individual. Costs (and outcomes) within clusters may be correlated so that the assumption made by standard bivariate regression models, that observations are independent, is incorrect. This study develops a flexible modeling framework to acknowledge the clustering in CEA that use CRTs. The authors extend previous Bayesian bivariate models for CEA of multicenter trials to recognize the specific form of clustering in CRTs. They develop new Bayesian hierarchical models (BHMs) that allow mean costs and outcomes, and also variances, to differ across clusters. They illustrate how each model can be applied using data from a large (1732 cases, 70 primary care providers) CRT evaluating alternative interventions for reducing postnatal depression. The analyses compare cost-effectiveness estimates from BHMs with standard bivariate regression models that ignore the data hierarchy. The BHMs show high levels of cost heterogeneity across clusters (intracluster correlation coefficient, 0.17). Compared with standard regression models, the BHMs yield substantially increased uncertainty surrounding the cost-effectiveness estimates, and altered point estimates. The authors conclude that ignoring clustering can lead to incorrect inferences. The BHMs that they present offer a flexible modeling framework that can be applied more generally to CEA that use CRTs.

  14. Simulation of land use change in the three gorges reservoir area based on CART-CA

    NASA Astrophysics Data System (ADS)

    Yuan, Min

    2018-05-01

    This study proposes a new method to simulate spatiotemporally complex, multiple land uses with a cellular automaton (CA) model based on the classification and regression tree (CART) algorithm. In this model, we use the CART algorithm to calculate land-class conversion probabilities and combine a neighborhood factor and a random factor to extract the cellular transformation rules. In the land dynamics simulation of the Three Gorges reservoir area from 2000 to 2010, the overall Kappa coefficient is 0.8014 and the overall accuracy is 0.8821; the simulation results are satisfactory.
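
    The agreement statistics quoted above are the standard overall accuracy and Cohen's kappa computed from a confusion matrix of simulated versus observed classes; a minimal sketch with made-up counts:

```python
# Sketch: overall accuracy and Cohen's kappa from a confusion matrix of
# simulated-vs-observed land-use classes (illustrative counts only).
import numpy as np

cm = np.array([[830, 40, 30],     # rows: observed class
               [50, 700, 50],     # cols: simulated class
               [20, 60, 720]])
n = cm.sum()
po = np.trace(cm) / n                          # overall accuracy
pe = np.sum(cm.sum(0) * cm.sum(1)) / n**2      # chance agreement
kappa = (po - pe) / (1 - pe)
print(f"accuracy = {po:.4f}, kappa = {kappa:.4f}")
```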

  15. Modeling the variations of reflection coefficient of Earth's lower ionosphere using very low frequency radio wave data by artificial neural network

    NASA Astrophysics Data System (ADS)

    Ghanbari, Keyvan; Khakian Ghomi, Mehdi; Mohammadi, Mohammad; Marbouti, Marjan; Tan, Le Minh

    2016-08-01

    The ionized atmosphere extending from about 50 to 600 km above the surface, known as the ionosphere, contains a large number of electrons and ions. Very Low Frequency (VLF) radio waves with frequencies between 3 and 30 kHz are reflected from the lower ionosphere, specifically the D-region. Many applications in long-range communications and navigation systems have been inspired by this characteristic of the ionosphere. Several factors affect the ionization rate in this region, such as time of day (presence of the sun in the sky), solar zenith angle (season), and solar activity. Due to the nonlinear response of the ionospheric reflection coefficient to these factors, finding an accurate relation between these parameters and the reflection coefficient is an arduous task. Numerical methods are employed to model these kinds of nonlinear dependencies; one of them is the artificial neural network (ANN). In this paper, the VLF radio wave data of 4 sudden ionospheric disturbance (SID) stations are given to a multi-layer perceptron ANN in order to simulate the variations of the reflection coefficient of the D-region ionosphere. After training, validation and testing of the ANN, outputs of the ANN and observed values are plotted together for 2 random cases from each station. Evaluating the results using the Pearson correlation coefficient and the root mean square error shows a satisfying agreement between ANN outputs and the real observed data.
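
    A hedged sketch of the modelling step, with synthetic data standing in for the SID-station records and an arbitrary nonlinear target standing in for the reflection coefficient:

```python
# Sketch: multi-layer perceptron regression of a nonlinear "reflection
# coefficient" on time of day, solar zenith angle and an activity index.
# Synthetic data stand in for the VLF/SID records used in the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(3000, 3))        # hour, zenith, activity (scaled)
y = (np.sin(np.pi * X[:, 0]) * np.cos(X[:, 1])
     + 0.3 * X[:, 2] + 0.05 * rng.normal(size=3000))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
pred = mlp.predict(X_te)
r, _ = pearsonr(y_te, pred)                  # the two metrics the paper uses
rmse = np.sqrt(np.mean((y_te - pred) ** 2))
print(f"Pearson r = {r:.3f}, RMSE = {rmse:.3f}")
```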

  16. Predicting temperate forest stand types using only structural profiles from discrete return airborne lidar

    NASA Astrophysics Data System (ADS)

    Fedrigo, Melissa; Newnham, Glenn J.; Coops, Nicholas C.; Culvenor, Darius S.; Bolton, Douglas K.; Nitschke, Craig R.

    2018-02-01

    Light detection and ranging (lidar) data have been increasingly used for forest classification due to their ability to penetrate the forest canopy and provide detail about the structure of the lower strata. In this study we demonstrate forest classification approaches using airborne lidar data as inputs to random forest and linear unmixing classification algorithms. Our results demonstrated that both random forest and linear unmixing models identified a distribution of rainforest and eucalypt stands that was comparable to existing ecological vegetation class (EVC) maps based primarily on manual interpretation of high resolution aerial imagery. Rainforest stands were also identified in the region that have not previously been identified in the EVC maps. The transition between stand types was better characterised by the random forest modelling approach. In contrast, the linear unmixing model placed greater emphasis on field plots selected as endmembers, which may not have captured the variability in stand structure within a single stand type. The random forest model had the highest overall accuracy (84%) and Cohen's kappa coefficient (0.62). However, the classification accuracy was only marginally better than linear unmixing. The random forest model was applied to a region in the Central Highlands of south-eastern Australia to produce maps of stand type probability, including areas of transition (the 'ecotone') between rainforest and eucalypt forest. The resulting map provided a detailed delineation of forest classes, which specifically recognised the coalescing of stand types at the landscape scale. This represents a key step towards mapping the structural and spatial complexity of these ecosystems, which is important for both their management and conservation.
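
    The classification step translates into a few lines with any random forest implementation. Below is a sketch using scikit-learn and synthetic "structural profile" features; the per-pixel class probabilities are what allow ecotone (transition) mapping:

```python
# Sketch: random-forest classification of stand types from lidar-derived
# structural features, with class probabilities (synthetic data only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))              # height-percentile features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=1000) > 0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
proba = rf.predict_proba(X_te)[:, 1]         # P(rainforest); ecotone ~ 0.5
pred = rf.predict(X_te)
print(f"accuracy = {accuracy_score(y_te, pred):.2f}, "
      f"kappa = {cohen_kappa_score(y_te, pred):.2f}")
```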

  17. A Graph Theory Practice on Transformed Image: A Random Image Steganography

    PubMed Central

    Thanikaiselvan, V.; Arulmozhivarman, P.; Subashanthini, S.; Amirtharajan, Rengarajan

    2013-01-01

    The modern information age is enriched with advanced network communication expertise but, at the same time, encounters countless security issues when dealing with secret and/or private information. The storage and transmission of secret information have become highly essential and have led to a deluge of research in this field. In this paper, an effort has been made to combine a graceful graph with the integer wavelet transform (IWT) to implement random image steganography for secure communication. The implementation begins with the conversion of the cover image into wavelet coefficients through the IWT and is followed by embedding the secret image in randomly selected coefficients through graph theory. Finally, the stego image is obtained by applying the inverse IWT. This method provides a maximum of 44 dB peak signal-to-noise ratio (PSNR) for 266646 bits. Thus, the proposed method gives high imperceptibility through a high PSNR value, high embedding capacity in the cover image due to the adaptive embedding scheme, and high robustness against blind attacks through graph-theoretic random selection of coefficients. PMID:24453857
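
    The imperceptibility figure is the usual peak signal-to-noise ratio between cover and stego images. A minimal sketch (random images, with LSB flipping as a stand-in for the IWT-domain embedding):

```python
# Sketch: PSNR between a cover image and its stego version (8-bit scale).
import numpy as np

def psnr(cover, stego, peak=255.0):
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
# Flip random least-significant bits as a toy embedding step.
stego = cover ^ rng.integers(0, 2, size=cover.shape, dtype=np.uint8)
print(f"PSNR = {psnr(cover, stego):.1f} dB")
```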

  18. Biases and Standard Errors of Standardized Regression Coefficients

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Chan, Wai

    2011-01-01

    The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular text books are consistent only when the population value of the regression coefficient is zero. The sample…

  19. Random Regression Models Using Legendre Polynomials to Estimate Genetic Parameters for Test-day Milk Protein Yields in Iranian Holstein Dairy Cattle.

    PubMed

    Naserkheil, Masoumeh; Miraie-Ashtiani, Seyed Reza; Nejati-Javaremi, Ardeshir; Son, Jihyun; Lee, Deukhwan

    2016-12-01

    The objective of this study was to estimate the genetic parameters of milk protein yields in Iranian Holstein dairy cattle. A total of 1,112,082 test-day milk protein yield records of 167,269 first lactation Holstein cows, calved from 1990 to 2010, were analyzed. Estimates of the variance components, heritability, and genetic correlations for milk protein yields were obtained using a random regression test-day model. Milking times, herd, age of recording, year, and month of recording were included as fixed effects in the model. Additive genetic and permanent environmental random effects for the lactation curve were taken into account by applying orthogonal Legendre polynomials of the fourth order in the model. The lowest and highest additive genetic variances were estimated at the beginning and end of lactation, respectively. Permanent environmental variance was higher at both extremes. Residual variance was lowest in the middle of lactation, whereas heritability increased during this period. Maximum heritability was found during the 12th lactation stage (0.213±0.007). Genetic, permanent, and phenotypic correlations among test-days decreased as the interval between consecutive test-days increased. A relatively large data set was used in this study; therefore, the estimated (co)variance components for random regression coefficients could be used for national genetic evaluation of dairy cattle in Iran.

  20. Random Regression Models Using Legendre Polynomials to Estimate Genetic Parameters for Test-day Milk Protein Yields in Iranian Holstein Dairy Cattle

    PubMed Central

    Naserkheil, Masoumeh; Miraie-Ashtiani, Seyed Reza; Nejati-Javaremi, Ardeshir; Son, Jihyun; Lee, Deukhwan

    2016-01-01

    The objective of this study was to estimate the genetic parameters of milk protein yields in Iranian Holstein dairy cattle. A total of 1,112,082 test-day milk protein yield records of 167,269 first lactation Holstein cows, calved from 1990 to 2010, were analyzed. Estimates of the variance components, heritability, and genetic correlations for milk protein yields were obtained using a random regression test-day model. Milking times, herd, age of recording, year, and month of recording were included as fixed effects in the model. Additive genetic and permanent environmental random effects for the lactation curve were taken into account by applying orthogonal Legendre polynomials of the fourth order in the model. The lowest and highest additive genetic variances were estimated at the beginning and end of lactation, respectively. Permanent environmental variance was higher at both extremes. Residual variance was lowest in the middle of lactation, whereas heritability increased during this period. Maximum heritability was found during the 12th lactation stage (0.213±0.007). Genetic, permanent, and phenotypic correlations among test-days decreased as the interval between consecutive test-days increased. A relatively large data set was used in this study; therefore, the estimated (co)variance components for random regression coefficients could be used for national genetic evaluation of dairy cattle in Iran. PMID:26954192

  1. Quasiperiodic Quantum Ising Transitions in 1D

    NASA Astrophysics Data System (ADS)

    Crowley, P. J. D.; Chandran, A.; Laumann, C. R.

    2018-04-01

    Unlike random potentials, quasiperiodic modulation can induce localization-delocalization transitions in one dimension. In this Letter, we analyze the implications of this for symmetry breaking in the quasiperiodically modulated quantum Ising chain. Although weak modulation is irrelevant, strong modulation induces new ferromagnetic and paramagnetic phases which are fully localized and gapless. The quasiperiodic potential and localized excitations lead to quantum criticality that is intermediate to that of the clean and randomly disordered models, with exponents ν = 1+ (exact) and z ≈ 1.9, Δσ ≈ 0.16, and Δγ ≈ 0.63 (up to logarithmic corrections). Technically, the clean Ising transition is destabilized by logarithmic wandering of the local reduced couplings. We conjecture that the wandering coefficient w controls the universality class of the quasiperiodic transition and show its stability to smooth perturbations that preserve the quasiperiodic structure of the model.

  2. Reconfiguration and Search of Social Networks

    PubMed Central

    Zhang, Lianming; Peng, Aoyuan

    2013-01-01

    Social networks tend to exhibit some topological characteristics different from regular networks and random networks, such as shorter average path length and higher clustering coefficient, and the node degree of the majority of social networks obeys an exponential distribution. Based on the topological characteristics of real social networks, a new network model suited to portraying the structure of social networks was proposed, and the characteristic parameters of the model were calculated. To find the relationship between two people in a social network, a hybrid search strategy based on k-walker random walks and high-degree seeking was proposed, using local information of the social network and a parallel mechanism. Simulation results show that the strategy can significantly reduce the average number of search steps, and thus effectively improve search speed and efficiency. PMID:24574861

  3. Spectra of random networks in the weak clustering regime

    NASA Astrophysics Data System (ADS)

    Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen; Rodrigues, Francisco A.

    2018-03-01

    The asymptotic behavior of dynamical processes in networks can be expressed as a function of spectral properties of the corresponding adjacency and Laplacian matrices. Although many theoretical results are known for the spectra of traditional configuration models, networks generated through these models fail to describe many topological features of real-world networks, in particular non-null values of the clustering coefficient. Here we study the effects of cycles of order three (triangles) on network spectra. By using recent advances in random matrix theory, we determine the spectral distribution of the network adjacency matrix as a function of the average number of triangles attached to each node for networks without modular structure and degree-degree correlations. Implications for network dynamics are discussed. Our findings can shed light on how particular kinds of subgraphs influence network dynamics.

  4. Charged Particle Diffusion in Isotropic Random Static Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Subedi, P.; Sonsrettee, W.; Matthaeus, W. H.; Ruffolo, D. J.; Wan, M.; Montgomery, D.

    2013-12-01

    Study of the transport and diffusion of charged particles in a turbulent magnetic field remains a subject of considerable interest. Research has most frequently concentrated on determining the diffusion coefficient in the presence of a mean magnetic field. Here we consider diffusion of charged particles in fully three-dimensional, statistically isotropic magnetic field turbulence with no mean field, which is pertinent to many astrophysical situations. We classify different regimes of particle energy depending upon the ratio of the Larmor radius of the charged particle to the characteristic outer length scale of the turbulence. We propose three different theoretical models to calculate the diffusion coefficient, each applicable to a distinct range of particle energies. The theoretical results are compared with those from computer simulations, showing very good agreement.

  5. Quantum thermostatted disordered systems and sensitivity under compression

    NASA Astrophysics Data System (ADS)

    Vanzan, Tommaso; Rondoni, Lamberto

    2018-03-01

    A one-dimensional quantum system with off-diagonal disorder, consisting of a sample of conducting regions randomly interspersed within potential barriers, is considered. Results mainly concerning the large-N limit are presented. In particular, the effect of compression on the transmission coefficient is investigated. A numerical method to simulate such a system, for a physically relevant number of barriers, is proposed. It is shown that the disordered model converges to the periodic case as N increases, with a rate of convergence which depends on the degree of disorder. Compression always leads to a decrease of the transmission coefficient, which may be exploited to design nano-technological sensors. Effective choices of the physical parameters to improve the sensitivity are provided. Finally, large fluctuations and rate functions are analysed.

  6. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  7. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  8. Fluctuating Navier-Stokes equations for inelastic hard spheres or disks.

    PubMed

    Brey, J Javier; Maynar, P; de Soria, M I García

    2011-04-01

    Starting from the fluctuating Boltzmann equation for smooth inelastic hard spheres or disks, closed equations for the fluctuating hydrodynamic fields to Navier-Stokes order are derived. This requires deriving constitutive relations for both the fluctuating fluxes and the correlations of the random forces. The former are identified as having the same form as the macroscopic average fluxes and involving the same transport coefficients. On the other hand, the random force terms exhibit two peculiarities as compared with their elastic limit for molecular systems. First, they are not white but have some finite relaxation time. Second, their amplitude is not determined by the macroscopic transport coefficients but involves new coefficients. ©2011 American Physical Society

  9. An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei

    2006-01-01

    The objective of this paper is to define empirical parameters for an initial thermal non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two codes currently used at Glenn Research Center for Stirling modeling are Fluent and CFD-ACE. The codes' porous-media models are equilibrium models, which assume that the solid matrix and the fluid are in thermal equilibrium. This is believed to be a poor assumption for Stirling regenerators; Stirling 1-D regenerator models, used in Stirling design, use non-equilibrium regenerator models and suggest that regenerator matrix and gas average temperatures can differ by several degrees at a given axial location and time during the cycle. Experimentally based information was used to define hydrodynamic dispersion, permeability, inertial coefficient, fluid effective thermal conductivity, and fluid-solid heat transfer coefficient. Solid effective thermal conductivity was also estimated. Determination of model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Converter (TDC), which uses a random-fiber regenerator matrix. Emphasis is on the use of available data to define the empirical parameters needed in a thermal non-equilibrium porous-media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates.

  10. The Analysis of Completely Randomized Factorial Experiments When Observations Are Lost at Random.

    ERIC Educational Resources Information Center

    Hummel, Thomas J.

    An investigation was conducted of the characteristics of two estimation procedures and corresponding test statistics used in the analysis of completely randomized factorial experiments when observations are lost at random. For one estimator, contrast coefficients for cell means did not involve the cell frequencies. For the other, contrast…

  11. Parallel capillary-tube-based extension of thermoacoustic theory for random porous media.

    PubMed

    Roh, Heui-Seol; Raspet, Richard; Bass, Henry E

    2007-03-01

    Thermoacoustic theory is extended to stacks made of random bulk media. Characteristics of the porous stack, such as the tortuosity and dynamic shape factors, are introduced into the thermoacoustic wave equation in the low reduced-frequency approximation. Basic thermoacoustic equations for a bulk porous medium are formulated analogously to the equations for a single pore. The use of different dynamic shape factors for the viscous and thermal effects is adopted, and scaling using the dynamic shape factors and tortuosity is demonstrated. Comparisons of the calculated and experimentally derived thermoacoustic properties of reticulated vitreous carbon and aluminum foam show good agreement. A consistent mathematical model of sound propagation in a random porous medium with an imposed temperature gradient is developed. This treatment leads to an expression for the coefficient of the temperature gradient in terms of scaled cylindrical thermoviscous functions.

  12. Predicting longitudinal trajectories of health probabilities with random-effects multinomial logit regression.

    PubMed

    Liu, Xian; Engel, Charles C

    2012-12-20

    Researchers often encounter longitudinal health data characterized by three or more ordinal or nominal categories. Random-effects multinomial logit models are generally applied to account for the potential lack of independence inherent in such clustered data. When parameter estimates are used to describe longitudinal processes, however, random effects, both between and within individuals, need to be retransformed to correctly predict outcome probabilities. This study attempts to go beyond existing work by developing a retransformation method that derives longitudinal growth trajectories of unbiased health probabilities. We estimated variances of the predicted probabilities by using the delta method. Additionally, we transformed the covariates' regression coefficients on the multinomial logit function, which are not substantively meaningful, into conditional effects on the predicted probabilities. The empirical illustration uses longitudinal data from the Asset and Health Dynamics among the Oldest Old study. Our analysis compared three sets of predicted probabilities of three health states at six time points, obtained from, respectively, the retransformation method, the best linear unbiased prediction, and the fixed-effects approach. The results demonstrate that neglecting to retransform random errors in the random-effects multinomial logit model results in severely biased longitudinal trajectories of health probabilities as well as overestimated effects of covariates on the probabilities. Copyright © 2012 John Wiley & Sons, Ltd.
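
    The retransformation point can be made concrete: with a normal random intercept u, the population-averaged category probabilities are E_u[softmax(η + u)], not softmax(η) evaluated at u = 0. A sketch using Gauss-Hermite quadrature (hypothetical linear predictors, random effect placed on one category for simplicity):

```python
# Sketch: population-averaged multinomial probabilities with a normal
# random intercept, via Gauss-Hermite quadrature. Plugging u = 0 into
# the softmax (the naive approach) gives biased probabilities.
import numpy as np

eta = np.array([0.8, -0.2, 0.0])      # linear predictors, 3 health states
sigma_u = 1.5                         # SD of random intercept on state 1
nodes, weights = np.polynomial.hermite_e.hermegauss(30)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

naive = softmax(eta)                  # random effect ignored (u = 0)
# E[softmax] over u ~ N(0, sigma_u^2): probabilists' Hermite quadrature,
# whose weights sum to sqrt(2*pi).
avg = sum(w * softmax(eta + np.array([sigma_u * z, 0.0, 0.0]))
          for z, w in zip(nodes, weights)) / np.sqrt(2 * np.pi)
print("naive:        ", np.round(naive, 3))
print("retransformed:", np.round(avg, 3))
```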

  13. Persistent random walk of cells involving anomalous effects and random death

    NASA Astrophysics Data System (ADS)

    Fedotov, Sergei; Tan, Abby; Zubarev, Andrey

    2015-04-01

    The purpose of this paper is to implement a random death process into a persistent random walk model which produces sub-ballistic superdiffusion (Lévy walk). We develop a stochastic two-velocity jump model of cell motility for which the switching rate depends upon the time which the cell has spent moving in one direction. It is assumed that the switching rate is a decreasing function of residence (running) time. This assumption leads to the power law for the velocity switching time distribution. This describes the anomalous persistence of cell motility: the longer the cell moves in one direction, the smaller the switching probability to another direction becomes. We derive master equations for the cell densities with the generalized switching terms involving the tempered fractional material derivatives. We show that the random death of cells has an important implication for the transport process through tempering of the superdiffusive process. In the long-time limit we write stationary master equations in terms of exponentially truncated fractional derivatives in which the rate of death plays the role of tempering of a Lévy jump distribution. We find the upper and lower bounds for the stationary profiles corresponding to the ballistic transport and diffusion with the death-rate-dependent diffusion coefficient. Monte Carlo simulations confirm these bounds.
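
    A sketch of the mechanism (our toy, not the paper's master-equation machinery): runs with heavy-tailed (Pareto) durations model the anomalous persistence, and an exponential "death clock" provides the tempering; all parameter values are arbitrary.

```python
# Sketch: two-velocity persistent random walk with power-law run times
# (anomalous persistence), tempered by a random death rate theta.
import numpy as np

rng = np.random.default_rng(0)

def walk(T=1000.0, v=1.0, alpha=1.5, theta=0.01):
    """Position at min(T, death time); run times are Pareto(alpha)."""
    t, x, direction = 0.0, 0.0, rng.choice([-1.0, 1.0])
    t_death = rng.exponential(1.0 / theta)       # random death (tempering)
    horizon = min(T, t_death)
    while t < horizon:
        run = rng.pareto(alpha) + 1.0            # heavy-tailed running time
        run = min(run, horizon - t)
        x += direction * v * run
        t += run
        direction *= -1.0                        # switch velocity
    return x

X = np.array([walk() for _ in range(2000)])
print(f"mean |x| = {np.abs(X).mean():.1f}, max |x| = {np.abs(X).max():.1f}")
```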

  14. Random matrix approach to the dynamics of stock inventory variations

    NASA Astrophysics Data System (ADS)

    Zhou, Wei-Xing; Mu, Guo-Hua; Kertész, János

    2012-09-01

    It is well accepted that investors can be classified into groups owing to distinct trading strategies, which forms the basic assumption of many agent-based models for financial markets when agents are not zero-intelligent. However, empirical tests of these assumptions are still very rare due to the lack of order flow data. Here we adopt the order flow data of Chinese stocks to tackle this problem by investigating the dynamics of inventory variations for individual and institutional investors, which contain rich information about the trading behavior of investors and have a crucial influence on price fluctuations. We find that the distributions of the cross-correlation coefficients Cij have power-law forms in the bulk that are followed by exponential tails, and there are more positive coefficients than negative ones. In addition, it is more likely that two individuals or two institutions have a stronger inventory variation correlation than one individual and one institution. We find that the largest and the second largest eigenvalues (λ1 and λ2) of the correlation matrix cannot be explained by random matrix theory, and the projections of investors' inventory variations on the first eigenvector u(λ1) are linearly correlated with stock returns, where individual investors play a dominating role. The investors are classified into three categories based on the cross-correlation coefficients CVR between inventory variations and stock returns. A strong Granger causality is unveiled from stock returns to inventory variations, which means that a large proportion of individuals hold the reversing trading strategy while a small proportion hold the trending strategy. Our empirical findings have scientific significance for the understanding of investors' trading behavior and for the construction of agent-based models for emerging stock markets.
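
    The random matrix benchmark invoked here is the Marchenko-Pastur band: for T observations of N uncorrelated series, the eigenvalues of the correlation matrix should lie in [(1−√(N/T))², (1+√(N/T))²]. A sketch that flags escaping eigenvalues (with real inventory data, λ1 and λ2 would fall outside):

```python
# Sketch: compare eigenvalues of an empirical correlation matrix with the
# Marchenko-Pastur band for purely random data (q = N/T).
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500
series = rng.normal(size=(T, N))             # stand-in inventory variations
C = np.corrcoef(series, rowvar=False)
eig = np.linalg.eigvalsh(C)

q = N / T
lo, hi = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
outliers = eig[(eig < lo) | (eig > hi)]
print(f"MP band [{lo:.2f}, {hi:.2f}]; {outliers.size} eigenvalues outside")
# With real data, informative modes (e.g. lambda_1, lambda_2) escape the band.
```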

  15. Cognition, academic achievement, and epilepsy in school-age children: a case-control study in a developing country.

    PubMed

    Melbourne Chambers, R; Morrison-Levy, N; Chang, S; Tapper, J; Walker, S; Tulloch-Reid, M

    2014-04-01

    We conducted a case-control study of 33 Jamaican children 7 to 12 years old with uncomplicated epilepsy and 33 of their classroom peers matched for age and gender to determine whether epilepsy resulted in differences in cognitive ability and school achievement and whether socioeconomic status or the environment had a moderating effect on any differences. Intelligence, language, memory, attention, executive function, and mathematics ability were assessed using selected tests from the NEPSY, WISC-R, TEA-Ch, WRAT3-Expanded, and Raven's Coloured Progressive Matrices. The child's environment at home was measured using the Middle Childhood HOME inventory. Socioeconomic status was determined from a combination of household crowding, possessions, and sanitation. We compared the characteristics of the cases and controls and used random effects regression models (using the matched pair as the cluster) to examine the relationship between cognition and epilepsy. We found that there was no significant difference in IQ, but children with epilepsy had lower scores on tests of memory (p<0.05), language (p<0.05), and attention (p<0.01) compared with their controls. In random effects models, epilepsy status had a significant effect on memory (coefficient=-0.14, CI: -0.23, -0.05), language (coefficient=-0.13, CI: -0.23, -0.04), and mathematics ability (coefficient=-0.01, CI: -0.02, -0.00). Adjustment for the home environment and socioeconomic status and inclusion of interaction terms for these variables did not alter these effects. In conclusion, we found that epilepsy status in Jamaican children has a significant effect on performance on tests of memory, language, and mathematics and that this effect is not modified or explained by socioeconomic status or the child's home environment. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Kidney function changes with aging in adults: comparison between cross-sectional and longitudinal data analyses in renal function assessment.

    PubMed

    Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas

    2015-12-01

    The study evaluated whether the rate of renal function decline per year with age in adults varies between two primary statistical analyses: cross-sectional (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16628 records (3946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected over up to 2364 days (mean: 793 days). A simple linear regression and random coefficient models were selected for the CS and LT analyses, respectively. The renal function decline rates were 1.33 and 0.95 ml/min/year for the CS and LT analyses, respectively; the rate was slower when the repeated individual measurements were considered. The study confirms that the rates differ depending on the statistical analysis, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rate per year with age in adults. In conclusion, our findings indicate that one should be cautious in interpreting the renal function decline rate with aging, because its estimate is highly dependent on the statistical analysis. From our analyses, a population longitudinal analysis (e.g. a random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during chronic therapy. Copyright © 2015 John Wiley & Sons, Ltd.
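
    The LT analysis corresponds to a random coefficient (random intercept and slope) model. A hedged sketch with statsmodels MixedLM on simulated subjects (all numbers invented, loosely echoing the decline rates above):

```python
# Sketch: random coefficient model for longitudinal renal-function decline
# (random intercept and age slope per subject), fitted to simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
beta0, beta1 = 130.0, -0.95                   # invented population values
rows = []
for sid in range(200):                        # 200 subjects, 4 visits each
    b0 = rng.normal(0, 10)                    # random intercept deviation
    b1 = rng.normal(0, 0.3)                   # random slope deviation
    age0 = rng.uniform(30, 80)
    for visit in range(4):
        age = age0 + visit * 2.0
        crcl = beta0 + beta1 * age + b0 + b1 * age + rng.normal(0, 5)
        rows.append((sid, age, crcl))
df = pd.DataFrame(rows, columns=["sid", "age", "crcl"])

m = smf.mixedlm("crcl ~ age", df, groups=df["sid"],
                re_formula="~age").fit()
print(m.params["age"])    # population decline rate (ml/min per year)
```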

  17. How People Use Social Information to Find out What to Want in the Paradigmatic Case of Inter-temporal Preferences

    PubMed Central

    Dolan, Raymond J.

    2016-01-01

    The weight with which a specific outcome feature contributes to preference quantifies a person’s ‘taste’ for that feature. However, far from being fixed personality characteristics, tastes are plastic. They tend to align, for example, with those of others even if such conformity is not rewarded. We hypothesised that people can be uncertain about their tastes. Personal tastes are therefore uncertain beliefs. People can thus learn about them by considering evidence, such as the preferences of relevant others, and then performing Bayesian updating. If a person’s choice variability reflects uncertainty, as in random-preference models, then a signature of Bayesian updating is that the degree of taste change should correlate with that person’s choice variability. Temporal discounting coefficients are an important example of a taste: the taste for patience. These coefficients quantify impulsivity, have good psychometric properties and can change upon observing others’ choices. We examined discounting preferences in a novel, large community study of 14-24 year olds. We assessed discounting behaviour, including decision variability, before and after participants observed another person’s choices. We found good evidence for taste uncertainty and for Bayesian taste updating. First, participants displayed decision variability which was better accounted for by a random-taste than by a response-noise model. Second, apparent taste shifts were well described by a Bayesian model taking into account taste uncertainty and the relevance of social information. Our findings have important neuroscientific, clinical and developmental significance. PMID:27447491

  18. Gossip in Random Networks

    NASA Astrophysics Data System (ADS)

Malarz, K.; Szvetelszky, Z.; Szekfü, B.; Kulakowski, K.

    2006-11-01

    We consider the average probability X of being informed of a gossip in a given social network. The network is modeled within the random graph theory of Erdős and Rényi. In this theory, a network is characterized by two parameters: the size N and the link probability p. Our experimental data suggest three levels of social inclusion of friendship. The critical value p_c, for which half of the agents are informed, scales with the system size as N^(-γ) with γ ≈ 0.68. Computer simulations show that the probability X varies with p as a sigmoidal curve. The influence of correlations between neighbors is also evaluated: with increasing clustering coefficient C, X decreases.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Kunkun, E-mail: ktg@illinois.edu; Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence; Congedo, Pietro M.

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  20. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    USGS Publications Warehouse

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.

  1. Multiscale measurement error models for aggregated small area health data.

    PubMed

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-08-01

    Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. © The Author(s) 2016.

  2. Geomorphic versus land use controls on suspended sediment rating curves

    NASA Astrophysics Data System (ADS)

    Belmont, P.; Vaughan, A. A.; Fisher, A. C. N.

    2017-12-01

    The relation between river discharge (Q) and suspended sediment (SS) concentration reflects the degree to which sediment sources are accessed or depleted across the range of flow conditions. Increased availability of high-resolution topography and land use data greatly enhances our ability to evaluate linkages between characteristics of these sediment rating curves (SRCs) and the geomorphic features that influence them. We evaluated Q-SS relations at 45 gages throughout Minnesota, USA, representing a wide variety of landscape settings in terms of topography, land use, and geologic history. We characterized the SRCs according to their overall shape, steepness (exponent), vertical offset (coefficient), and SS concentration under low flow (90% exceedance) conditions. Rivers exhibited three distinct SRC shapes: simple power functions, threshold power functions, and peaked power functions. We used random forest models to analyze relations between SRC parameters and attributes of the watershed as well as the near-channel environment. The model correctly classified 78% of SRC shapes and explained 60% of the variance in the SRC exponent, 43% in the SRC coefficient for rising-limb samples, and 45% in SS concentration under low flow conditions. Notably, the random forest models predict that near-channel morphology predominately controls both the shape and steepness of the sediment rating curves. Land use predominately controls the vertical offset (coefficient) and SS concentration under low flow conditions. These findings suggest that land use and watershed restoration practices may have little capacity to alter the shape and steepness of these curves, as these characteristics may be dictated by the geologic and geomorphic setting. Rather, human influences in the watershed may exhibit the greatest influence on suspended sediment concentrations at moderate to low flows. Criteria to evaluate improvements in water quality as a result of changes in land management might be most meaningful if they target these moderate to low flow conditions.
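
    In its simplest form a sediment rating curve is a power function, SS = aQ^b, conventionally fit by least squares on log-transformed data; the sketch below uses synthetic discharge records, not the study's gage data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic discharge (m^3/s) and suspended sediment (mg/L): SS = a * Q^b
    Q = rng.lognormal(mean=2.0, sigma=1.0, size=200)
    a_true, b_true = 5.0, 1.4
    SS = a_true * Q**b_true * rng.lognormal(0, 0.3, size=200)  # multiplicative noise

    # Fit log(SS) = log(a) + b * log(Q) by ordinary least squares
    b_hat, log_a_hat = np.polyfit(np.log(Q), np.log(SS), deg=1)
    print(f"coefficient a ~ {np.exp(log_a_hat):.2f}, exponent b ~ {b_hat:.2f}")
    ```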

  3. Job strain and resting heart rate: a cross-sectional study in a Swedish random working sample.

    PubMed

    Eriksson, Peter; Schiöler, Linus; Söderberg, Mia; Rosengren, Annika; Torén, Kjell

    2016-03-05

    Numerous studies have reported an association between stressful working conditions and cardiovascular disease. However, more evidence is needed, and the etiological mechanisms are unknown. Elevated resting heart rate has emerged as a possible risk factor for cardiovascular disease, but little is known about its relation to work-related stress. This study therefore investigated the association between job strain, job control, and job demands and resting heart rate. We conducted a cross-sectional survey of randomly selected men and women in Västra Götalandsregionen, Sweden (the western county of Sweden) (n = 1552). Information about job strain, job demands, job control, heart rate and covariates was collected during the period 2001-2004 as part of the INTERGENE/ADONIX research project. Six different linear regression models were used, with adjustments for gender, age, BMI, smoking, education, and physical activity in the fully adjusted model. In the statistical analyses, job strain was operationalized as the log-transformed ratio of job demands over job control. No associations were seen between resting heart rate and job demands. Job strain was associated with elevated resting heart rate in the unadjusted model (linear regression coefficient 1.26, 95% CI 0.14 to 2.38), but not in any of the extended models. Low job control was associated with elevated resting heart rate after adjustments for gender, age, BMI, and smoking (linear regression coefficient -0.18, 95% CI -0.30 to -0.02). However, there were no significant associations in the fully adjusted model. Low job control and job strain, but not job demands, were associated with elevated resting heart rate. However, the observed associations were modest and may be explained by confounding effects.

  4. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. To address the curse of dimensionality, this work proposes variance-based adaptive strategies that build a cheap meta-model (i.e., surrogate model) using the sparse PDD approach, with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. Throughout this adaptive procedure of stepwise regressions, the surrogate model contains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The final sparse PDD representation is much smaller than the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
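
    The stepwise-retention idea can be illustrated with a toy one-dimensional sketch, not the authors' sparse PDD algorithm: fit a full Legendre basis by regression, then keep only the terms whose coefficients are non-negligible.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(1)

    # Model evaluations at random points of a uniform input on [-1, 1]
    xi = rng.uniform(-1, 1, size=200)
    y = 1.0 + 0.8 * xi + 0.3 * (3 * xi**2 - 1) / 2 + rng.normal(0, 0.01, 200)

    # Least-squares fit of a degree-8 Legendre expansion (regression, not quadrature)
    V = legendre.legvander(xi, deg=8)             # Vandermonde-like design matrix
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)

    # Retain only terms whose coefficients exceed a simple magnitude threshold
    keep = np.abs(coef) > 0.05 * np.abs(coef).max()
    print("retained Legendre degrees:", np.flatnonzero(keep))
    ```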

  5. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    PubMed

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.

  6. A Bayesian ridge regression analysis of congestion's impact on urban expressway safety.

    PubMed

    Shi, Qi; Abdel-Aty, Mohamed; Lee, Jaeyoung

    2016-03-01

    With the rapid growth of traffic in urban areas, concerns about congestion and traffic safety have been heightened. This study leveraged both the Automatic Vehicle Identification (AVI) system and the Microwave Vehicle Detection System (MVDS) installed on an expressway in Central Florida to explore how congestion impacts crash occurrence in urban areas. Multiple congestion measures from the two systems were developed. To ensure more precise estimates of congestion's effects, the traffic data were aggregated into peak and non-peak hours. Multicollinearity among traffic parameters was examined. The results showed the presence of multicollinearity, especially during peak hours. As a response, ridge regression was introduced to cope with this issue. Poisson models with uncorrelated random effects, correlated random effects, and both correlated random effects and random parameters were constructed within the Bayesian framework. It was shown that correlated random effects could significantly enhance model performance. The random parameters model has similar goodness-of-fit compared with the model with only correlated random effects. However, by accounting for the unobserved heterogeneity, more variables were found to be significantly related to crash frequency. The models indicated that congestion increased crash frequency during peak hours, while during non-peak hours it was not a major crash contributing factor. Using the random parameters model, the three congestion measures were compared. It was found that all congestion indicators had similar effects, while the Congestion Index (CI) derived from MVDS data was a better congestion indicator for safety analysis. Also, analyses showed that segments with higher congestion intensity increased not only property damage only (PDO) crashes but also more severe crashes. In addition, the issues regarding the necessity to incorporate a specific congestion indicator for congestion's effects on safety and to address the multicollinearity between explanatory variables were also discussed. By including a specific congestion indicator, the model performance significantly improved. When comparing models with and without ridge regression, the magnitude of the coefficients was altered in the presence of multicollinearity. These conclusions suggest that the use of an appropriate congestion measure and consideration of multicollinearity among the variables would improve the models and our understanding of the effects of congestion on traffic safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
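
    Ridge regression stabilizes coefficients under multicollinearity by shrinking them toward zero; a minimal closed-form sketch with two nearly collinear congestion measures (illustrative data and penalty, not the study's Bayesian crash model) is:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Two nearly collinear congestion measures plus an intercept
    n = 500
    c1 = rng.normal(size=n)
    c2 = c1 + rng.normal(scale=0.05, size=n)      # highly correlated with c1
    X = np.column_stack([np.ones(n), c1, c2])
    y = 1.0 + 0.5 * c1 + 0.5 * c2 + rng.normal(size=n)

    def ridge(X, y, lam):
        """Closed-form ridge solution (X'X + lam*I)^-1 X'y (intercept penalized too, for brevity)."""
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    print("OLS  :", ridge(X, y, 0.0))             # unstable under collinearity
    print("ridge:", ridge(X, y, 10.0))            # shrunken, stabilized
    ```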

  7. Impact of Health Research Systems on Under-5 Mortality Rate: A Trend Analysis.

    PubMed

    Yazdizadeh, Bahareh; Parsaeian, Mahboubeh; Majdzadeh, Reza; Nikooee, Sima

    2016-11-26

    Between 1990 and 2015, the under-5 mortality rate (U5MR) declined globally by 53%, from an estimated 91 deaths per 1000 live births to 43. The aim of this study was to determine the share of health research systems in this decrease alongside other influential factors. We used random effect regression models, including the 'random intercept' and 'random intercept and random slope' models, to analyze the panel data from 1990 to 2010. We selected the countries with U5MRs falling between the first and third quartiles in 1990. We used both the total articles (TA) and the number of child-specific articles (CSA) as a proxy of the health research system. In order to account for the impact of other factors, measles vaccination coverage (MVC) (as a proxy of health system performance), gross domestic product (GDP), human development index (HDI), and corruption perception index (CPI) (as proxies of development) were embedded in the model. Among all the models, the 'random intercept and random slope' model had lower residuals. The variables CSA, HDI, and time were significant, and the coefficient of CSA was estimated at -0.17, meaning that for every additional 100 CSA, the U5MR decreased by 17 per 1000 live births. Although the number of CSA has contributed to the reduction of U5MR, its contribution is negligible compared to the countries' development. We recommend entering different types of research into the model separately in future work and including the variable of 'exchange between knowledge generator and user.' © 2017 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
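
    A 'random intercept and random slope' panel model of this kind can be sketched with statsmodels' MixedLM; the data frame below is a synthetic stand-in for the country panel described above, with hypothetical column names.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(13)

    # Hypothetical panel: 40 countries x 21 years (1990-2010), synthetic stand-ins
    country = np.repeat(np.arange(40), 21)
    year = np.tile(np.arange(21), 40)
    csa = rng.poisson(5, size=country.size) + year            # child-specific articles
    hdi = np.clip(0.4 + 0.01 * year + rng.normal(0, 0.05, country.size), 0, 1)

    # Country-specific random intercepts and random slopes on time
    b0 = rng.normal(0, 10, 40)[country]
    b1 = rng.normal(0, 0.5, 40)[country]
    u5mr = 90 + b0 - (2.0 + b1) * year - 0.17 * csa - 20 * hdi \
           + rng.normal(0, 2, country.size)

    df = pd.DataFrame({"country": country, "year": year,
                       "u5mr": u5mr, "csa": csa, "hdi": hdi})

    # Random intercept and random slope on time, grouped by country
    model = smf.mixedlm("u5mr ~ csa + hdi + year", df,
                        groups=df["country"], re_formula="~year")
    print(model.fit().summary())
    ```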

  8. A comparison of two indices for the intraclass correlation coefficient.

    PubMed

    Shieh, Gwowen

    2012-12-01

    In the present study, we examined the behavior of two indices for measuring the intraclass correlation in the one-way random effects model: the prevailing ICC(1) (Fisher, 1938) and the corrected eta-squared (Bliese & Halverson, 1998). These two procedures differ both in their methods of estimating the variance components that define the intraclass correlation coefficient and in their bias and mean squared error in estimating it. In contrast with the natural unbiased principle used to construct ICC(1), the corrected eta-squared estimator is shown analytically to be identical to the maximum likelihood estimator and the pairwise estimator under equal group sizes. Moreover, the empirical results obtained from the present Monte Carlo simulation study across various group structures revealed the mutual dominance relationship between their truncated versions for negative values. The corrected eta-squared estimator performs better than the ICC(1) estimator when the underlying population intraclass correlation coefficient is small. Conversely, ICC(1) has a clear advantage over the corrected eta-squared for medium and large magnitudes of the population intraclass correlation coefficient. The conceptual description and numerical investigation provide guidelines to help researchers choose between the two indices for more accurate reliability analysis in multilevel research.
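
    For equal group sizes m across k groups, ICC(1) is the one-way ANOVA estimator (MSB - MSW) / (MSB + (m - 1) MSW); a minimal sketch on simulated data is:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    k, m = 30, 10                                 # groups, members per group
    group_effects = rng.normal(0, 1.0, size=k)    # between-group SD = 1
    y = group_effects[:, None] + rng.normal(0, 2.0, size=(k, m))  # within SD = 2

    grand = y.mean()
    msb = m * np.sum((y.mean(axis=1) - grand) ** 2) / (k - 1)     # between mean square
    msw = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (k * (m - 1))  # within

    icc1 = (msb - msw) / (msb + (m - 1) * msw)
    print(f"ICC(1) = {icc1:.3f}  (population value 1/(1+4) = 0.2)")
    ```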

  9. Electronic and transport properties of BCN alloy nanoribbons

    NASA Astrophysics Data System (ADS)

    Darvishi Gilan, Mahdi; Chegel, Raad

    2018-03-01

    The dependence of the electronic and transport properties of boron carbonitride (BCN) alloy nanoribbons on the carbon (C) concentration has been investigated using the surface Green's function technique and a random Hamiltonian model with random hopping parameters including first and second nearest neighbors. Our calculations indicate that substituting boron (nitrogen) sites with carbon atoms induces a new band close to the conduction (valence) band, with the carbon atoms behaving like donor (acceptor) dopants. When both nitrogen and boron sites are substituted randomly by carbon atoms, new bands are induced close to both the valence and conduction bands. The band gap decreases with C substitution and the number of charge carriers increases at low bias voltage. Far from the Fermi level, in the higher range of energy, the transmission coefficient and current of the system are reduced by increasing the C concentration. Based on our results, tuning the electronic and transport properties of BCN alloy nanoribbons via random carbon dopants could be applied to the design of nanoelectronic devices.

  10. Comparison of Experimental Methods for Estimating Matrix Diffusion Coefficients for Contaminant Transport Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Telfeyan, Katherine Christina; Ware, Stuart Douglas; Reimus, Paul William

    Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.

  11. Comparison of experimental methods for estimating matrix diffusion coefficients for contaminant transport modeling

    NASA Astrophysics Data System (ADS)

    Telfeyan, Katherine; Ware, S. Doug; Reimus, Paul W.; Birdsell, Kay H.

    2018-02-01

    Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating effective matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of effective matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than effective matrix diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields effective matrix diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.

  12. ToxiM: A Toxicity Prediction Tool for Small Molecules Developed Using Machine Learning and Chemoinformatics Approaches.

    PubMed

    Sharma, Ashok K; Srivastava, Gopal N; Roy, Ankita; Sharma, Vineet K

    2017-01-01

    Experimental methods for the prediction of molecular toxicity are tedious and time-consuming tasks. Thus, computational approaches could be used to develop alternative methods for toxicity prediction. We have developed a tool for the prediction of molecular toxicity along with the aqueous solubility and permeability of any molecule/metabolite. Using a comprehensive and curated set of toxin molecules as a training set, different chemical and structural features such as descriptors and fingerprints were exploited for feature selection, optimization and development of machine learning based classification and regression models. The compositional differences in the distribution of atoms were apparent between toxins and non-toxins, and hence, the molecular features were used for the classification and regression. On 10-fold cross-validation, the descriptor-based, fingerprint-based and hybrid-based classification models showed similar accuracy (93%) and Matthews correlation coefficient (0.84). The performances of all three models were comparable (Matthews correlation coefficient = 0.84-0.87) on the blind dataset. In addition, the regression-based models using descriptors as input features were also compared and evaluated on the blind dataset. The random forest based regression model for the prediction of solubility performed better (R² = 0.84) than the multi-linear regression (MLR) and partial least squares regression (PLSR) models, whereas the partial least squares based regression model for the prediction of permeability (Caco-2) performed better (R² = 0.68) in comparison to the random forest and MLR based regression models. The performance of the final classification and regression models was evaluated using two validation datasets including known toxins and commonly used constituents of health products, which attests to their accuracy. The ToxiM web server would be a highly useful and reliable tool for the prediction of toxicity, solubility, and permeability of small molecules.
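
    The random forest regression step can be sketched with scikit-learn; the feature matrix below is a random placeholder for the molecular descriptors the authors computed.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)

    # Placeholder descriptor matrix and solubility values (synthetic, illustrative)
    X = rng.normal(size=(300, 20))
    y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=300)

    model = RandomForestRegressor(n_estimators=500, random_state=0)
    r2_scores = cross_val_score(model, X, y, cv=10, scoring="r2")  # 10-fold CV, R^2
    print(f"mean 10-fold R^2 = {r2_scores.mean():.2f}")
    ```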

  13. ToxiM: A Toxicity Prediction Tool for Small Molecules Developed Using Machine Learning and Chemoinformatics Approaches

    PubMed Central

    Sharma, Ashok K.; Srivastava, Gopal N.; Roy, Ankita; Sharma, Vineet K.

    2017-01-01

    Experimental methods for the prediction of molecular toxicity are tedious and time-consuming tasks. Thus, computational approaches could be used to develop alternative methods for toxicity prediction. We have developed a tool for the prediction of molecular toxicity along with the aqueous solubility and permeability of any molecule/metabolite. Using a comprehensive and curated set of toxin molecules as a training set, different chemical and structural features such as descriptors and fingerprints were exploited for feature selection, optimization and development of machine learning based classification and regression models. The compositional differences in the distribution of atoms were apparent between toxins and non-toxins, and hence, the molecular features were used for the classification and regression. On 10-fold cross-validation, the descriptor-based, fingerprint-based and hybrid-based classification models showed similar accuracy (93%) and Matthews correlation coefficient (0.84). The performances of all three models were comparable (Matthews correlation coefficient = 0.84–0.87) on the blind dataset. In addition, the regression-based models using descriptors as input features were also compared and evaluated on the blind dataset. The random forest based regression model for the prediction of solubility performed better (R² = 0.84) than the multi-linear regression (MLR) and partial least squares regression (PLSR) models, whereas the partial least squares based regression model for the prediction of permeability (Caco-2) performed better (R² = 0.68) in comparison to the random forest and MLR based regression models. The performance of the final classification and regression models was evaluated using two validation datasets including known toxins and commonly used constituents of health products, which attests to their accuracy. The ToxiM web server would be a highly useful and reliable tool for the prediction of toxicity, solubility, and permeability of small molecules. PMID:29249969

  14. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    USGS Publications Warehouse

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    We present a Bayesian random-effects model to assess resource selection by modeling the probability of use of land units characterized by discrete and continuous measures. This model provides simultaneous estimation of both individual- and population-level selection. The deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate the models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection among models that included heterogeneity indicated that, at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic: the highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a Bayesian hierarchical discrete-choice model for resource selection can provide managers with 2 components of population-level inference: average population selection and variability of selection. Both components are necessary to make sound management decisions based on animal selection.

  15. Dynamic responses of railroad car models to vertical and lateral rail inputs

    NASA Technical Reports Server (NTRS)

    Sewall, J. L.; Parrish, R. V.; Durling, B. J.

    1971-01-01

    Simplified dynamic models were applied in a study of vibration in a high-speed railroad car. The mathematical models used were a four-degree-of-freedom model for vertical responses to vertical rail inputs and a ten-degree-of-freedom model for lateral response to lateral or rolling (cross-level) inputs from the rails. Elastic properties of the passenger car body were represented by bending and torsion of a uniform beam. Rail-to-car (truck) suspensions were modeled as spring-mass-dashpot oscillators. Lateral spring nonlinearities approximating certain complicated truck mechanisms were introduced. The models were excited by displacement and, in some cases, velocity inputs from the rails by both deterministic (including sinusoidal) and random input functions. Results were obtained both in the frequency and time domains. Solutions in the time domain for the lateral model were obtained for a wide variety of transient and random inputs generated on-line by an analog computer. Variations in one of the damping properties of the lateral car suspension gave large fluctuations in response over a range of car speeds for a given input. This damping coefficient was significant in reducing lateral car responses that were higher for nonlinear springs for three different inputs.
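
    The rail-to-car suspension described here is a base-excited spring-mass-dashpot; a one-degree-of-freedom sketch with illustrative parameters (far simpler than the four- and ten-degree-of-freedom models in the study) is:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    m, c, k = 2000.0, 4.0e3, 2.0e5                # mass (kg), damping, stiffness

    def rail_input(t):
        """Sinusoidal vertical rail displacement (illustrative amplitude/frequency)."""
        return 0.01 * np.sin(2 * np.pi * 1.5 * t)

    def rhs(t, state):
        x, v = state                              # car displacement and velocity
        u = rail_input(t)
        # m x'' = -k (x - u) - c x'  (displacement input through the suspension
        # spring; damper to ground for simplicity)
        return [v, (-k * (x - u) - c * v) / m]

    sol = solve_ivp(rhs, t_span=(0, 20), y0=[0.0, 0.0], max_step=0.01)
    print("peak car displacement (m):", np.abs(sol.y[0]).max())
    ```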

  16. Integrating Geo-Spatial Data for Regional Landslide Susceptibility Modeling in Consideration of Run-Out Signature

    NASA Astrophysics Data System (ADS)

    Lai, J.-S.; Tsai, F.; Chiang, S.-H.

    2016-06-01

    This study implements a data mining-based algorithm, the random forests classifier, with geo-spatial data to construct a regional, rainfall-induced landslide susceptibility model. The developed model also takes account of landslide regions (source, non-occurrence and run-out signatures) from the original landslide inventory in order to increase the reliability of the susceptibility modelling. A total of ten causative factors were collected and used in this study, including aspect, curvature, elevation, slope, faults, geology, NDVI (Normalized Difference Vegetation Index), rivers, roads and soil data. Consequently, this study transforms the landslide inventory and vector-based causative factors into the pixel-based format in order to overlay with other raster data for constructing the random forests based model. This study also uses original and edited topographic data in the analysis to understand their impacts on the susceptibility modelling. Experimental results demonstrate that after identifying the run-out signatures, the overall accuracy and Kappa coefficient reached more than 85% and 0.8, respectively. In addition, correcting unreasonable topographic features of the digital terrain model also produces more reliable modelling results.
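
    Overall accuracy and the Kappa coefficient for a random forests susceptibility classifier can be computed as in this sketch, with synthetic pixel features standing in for the ten causative-factor rasters:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, cohen_kappa_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)

    # Synthetic stand-ins for pixel-based causative factors and landslide labels
    X = rng.normal(size=(2000, 10))               # e.g., slope, aspect, NDVI, ...
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)

    print("overall accuracy :", accuracy_score(y_te, pred))
    print("Kappa coefficient:", cohen_kappa_score(y_te, pred))
    ```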

  17. Transform-Based Wideband Array Processing

    DTIC Science & Technology

    1992-01-31

    Using the test of Breusch and Pagan [2], it is possible to test which model, AR or random coefficient, will better fit typical array data. Through the use of a binary hypothesis test, it is indicated that the correlations in the observations do not obey an AR relationship across the array. References include: "...bearing estimation problems," Proc. IEEE, vol. 70, no. 9, pp. 1018-1028, 1982; [2] T. S. Breusch and A. R. Pagan, "A simple test for heteroscedasticity and random coefficient variation."

  18. Global industrial impact coefficient based on random walk process and inter-country input-output table

    NASA Astrophysics Data System (ADS)

    Xing, Lizhi; Dong, Xianlei; Guan, Jun

    2017-04-01

    Input-output tables are very comprehensive and detailed in describing a national economic system, containing supply and demand information among industrial sectors. Complex network theory, a framework for measuring the structure of complex systems, can characterize the internal structure of a social and economic system through structural indicators, revealing the relationship between its inner hierarchy and its external economic function. This paper builds GIVCN-WIOT models based on the World Input-Output Database in order to depict the topological structure of the Global Value Chain (GVC), and assumes that the competitive advantage of a nation equals the overall impact of its domestic sectors on the GVC. From an econophysics perspective, the Global Industrial Impact Coefficient (GIIC) is proposed to measure national competitiveness in gaining information superiority and intermediate interests. Analysis of the GIVCN-WIOT models yields several insights, including the following: (1) sectors with higher Random Walk Centrality contribute more to transmitting value streams within the global economic system; (2) the Half-Value Ratio can be used to measure the robustness of an open macroeconomy in the process of globalization; and (3) the positive correlation between GIIC and GDP indicates that a country's global industrial impact reflects its international competitive advantage.
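
    The random-walk view of an input-output table can be sketched by normalizing the inter-sector flow matrix into a transition matrix and iterating to its stationary distribution. The toy 3-sector table below is illustrative, and the stationary distribution is only a simple proxy in the spirit of the paper's Random Walk Centrality, not the GIIC itself.

    ```python
    import numpy as np

    # Toy inter-sector flow matrix (rows: supplying sector, columns: using sector)
    flows = np.array([[10.0, 40.0, 20.0],
                      [30.0,  5.0, 25.0],
                      [15.0, 35.0, 10.0]])

    P = flows / flows.sum(axis=1, keepdims=True)  # row-stochastic transition matrix

    # Power iteration for the stationary distribution of the random walk
    pi = np.full(3, 1.0 / 3.0)
    for _ in range(1000):
        pi = pi @ P
    pi /= pi.sum()
    print("stationary visit probabilities per sector:", pi)
    ```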

  19. Role of protein fluctuation correlations in electron transfer in photosynthetic complexes.

    PubMed

    Nesterov, Alexander I; Berman, Gennady P

    2015-04-01

    We consider the dependence of the electron transfer in photosynthetic complexes on the correlation properties of random fluctuations of the protein environment. The electron subsystem is modeled by a finite network of connected electron (exciton) sites. The fluctuations of the protein environment are modeled by random telegraph processes, which act either collectively (correlated) or independently (uncorrelated) on the electron sites. We derived an exact closed system of first-order linear differential equations with constant coefficients for the average density matrix elements and their first moments. Under some conditions, we obtained analytic expressions for the electron transfer rates and found the range of parameters for their applicability by comparing with exact numerical simulations. We also compared the correlated and uncorrelated regimes and demonstrated numerically that uncorrelated fluctuations of the protein environment can, under some conditions, either increase or decrease the electron transfer rates.
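
    The driving noise here is the random telegraph process, which flips between two levels at exponentially distributed waiting times; a minimal simulation sketch with an illustrative switching rate and amplitude is:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def telegraph(t_max, rate, amp, rng):
        """Sample a random telegraph process (+/- amp) with switching rate `rate`."""
        t, level = 0.0, amp if rng.random() < 0.5 else -amp
        times, levels = [0.0], [level]
        while t < t_max:
            t += rng.exponential(1.0 / rate)      # exponential waiting time
            level = -level                        # flip between +amp and -amp
            times.append(min(t, t_max))
            levels.append(level)
        return np.array(times), np.array(levels)

    times, levels = telegraph(t_max=10.0, rate=2.0, amp=1.0, rng=rng)
    print(f"{len(times) - 1} switches in 10 time units (expected ~20)")
    ```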

  20. Polarimetric signatures of a coniferous forest canopy based on vector radiative transfer theory

    NASA Technical Reports Server (NTRS)

    Karam, M. A.; Fung, A. K.; Amar, F.; Mougin, E.; Lopes, A.; Beaudoin, A.

    1992-01-01

    Complete polarization signatures of a coniferous forest canopy are studied by the iterative solution of the vector radiative transfer equations up to the second order. The forest canopy constituents (leaves, branches, stems, and trunk) are embedded in a multi-layered medium over a rough interface. The branches, stems and trunk scatterers are modeled as finite randomly oriented cylinders. The leaves are modeled as randomly oriented needles. For a plane wave exciting the canopy, the average Mueller matrix is formulated in terms of the iterative solution of the radiative transfer solution and used to determine the linearly polarized backscattering coefficients, the co-polarized and cross-polarized power returns, and the phase difference statistics. Numerical results are presented to investigate the effect of transmitting and receiving antenna configurations on the polarimetric signature of a pine forest. Comparison is made with measurements.

  1. Probability and Cumulative Density Function Methods for the Stochastic Advection-Reaction Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barajas-Solano, David A.; Tartakovsky, Alexandre M.

    We present a cumulative density function (CDF) method for the probabilistic analysis of $d$-dimensional advection-dominated reactive transport in heterogeneous media. We employ a probabilistic approach in which epistemic uncertainty on the spatial heterogeneity of Darcy-scale transport coefficients is modeled in terms of random fields with given correlation structures. Our proposed CDF method employs a modified Large-Eddy-Diffusivity (LED) approach to close and localize the nonlocal equations governing the one-point PDF and CDF of the concentration field, resulting in a $(d+1)$-dimensional PDE. Compared to the classical LED localization, the proposed modified LED localization explicitly accounts for the mean-field advective dynamics over the phase space of the PDF and CDF. To illustrate the accuracy of the proposed closure, we apply our CDF method to one-dimensional single-species reactive transport with uncertain, heterogeneous advection velocities and reaction rates modeled as random fields.

  2. The hydrogen diffusion in liquid aluminum alloys from ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    Jakse, N.; Pasturel, A.

    2014-09-01

    We study the hydrogen diffusion in liquid aluminum alloys through extensive ab initio molecular dynamics simulations. At the microscopic scale, we show that the hydrogen motion is characterized by a broad distribution of spatial jumps that does not correspond to Brownian motion. To determine the self-diffusion coefficient of hydrogen in liquid aluminum alloys, we use a generalized continuous time random walk model recently developed to describe hydrogen diffusion in pure aluminum. In particular, we show that the model successfully accounts for the effects of alloying elements on the hydrogen diffusion, in agreement with experimental features.
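
    A generic continuous time random walk draws a waiting time and a jump length for each step, and the self-diffusion coefficient follows from the mean-squared displacement; the sketch below is a plain exponential-waiting-time CTRW, not the authors' generalized model.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    n_walkers, n_steps = 2000, 500
    tau, jump_sd = 0.1, 0.2                       # mean waiting time, jump SD

    # Each walker: exponential waiting times, Gaussian jump lengths (1-D sketch)
    waits = rng.exponential(tau, size=(n_walkers, n_steps))
    jumps = rng.normal(0, jump_sd, size=(n_walkers, n_steps))
    t_final = waits.sum(axis=1)
    x_final = jumps.sum(axis=1)

    # MSD = 2 D t in one dimension  =>  D = <x^2> / (2 <t>)
    D = np.mean(x_final**2) / (2 * np.mean(t_final))
    print(f"D ~ {D:.4f}  (theory: jump_sd^2 / (2 tau) = {jump_sd**2 / (2 * tau):.4f})")
    ```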

  3. Experiment Design for Complex VTOL Aircraft with Distributed Propulsion and Tilt Wing

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Landman, Drew

    2015-01-01

    Selected experimental results from a wind tunnel study of a subscale VTOL concept with distributed propulsion and tilt lifting surfaces are presented. The vehicle complexity and automated test facility were ideal for use with a randomized designed experiment. Design of Experiments and Response Surface Methods were invoked to produce run-efficient, statistically rigorous regression models with minimized prediction error. Static tests were conducted at the NASA Langley 12-Foot Low-Speed Tunnel to model all six aerodynamic coefficients over a large flight envelope. This work supports investigations at NASA Langley in developing advanced configurations, simulations, and advanced control systems.
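
    Response Surface Methods typically fit a full quadratic model to the designed runs; a sketch with scikit-learn, where the two factors are hypothetical stand-ins for settings such as wing tilt angle and throttle, is:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(8)

    # Placeholder design matrix: two coded factors on [-1, 1]
    X = rng.uniform(-1, 1, size=(60, 2))
    CL = 0.4 + 0.9 * X[:, 0] + 0.2 * X[:, 1] - 0.3 * X[:, 0] ** 2 \
         + 0.1 * X[:, 0] * X[:, 1] + rng.normal(scale=0.02, size=60)  # lift coeff.

    # Full quadratic response surface: intercept, linear, interaction, squared terms
    rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    rsm.fit(X, CL)
    print("R^2 on the design points:", rsm.score(X, CL))
    ```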

  4. Household water treatment in developing countries: comparing different intervention types using meta-regression.

    PubMed

    Hunter, Paul R

    2009-12-01

    Household water treatment (HWT) is being widely promoted as an appropriate intervention for reducing the burden of waterborne disease in poor communities in developing countries. A recent study has raised concerns about the effectiveness of HWT, in part because of concerns over the lack of blinding and in part because of considerable heterogeneity in the reported effectiveness of randomized controlled trials. This study set out to investigate the causes of this heterogeneity and so identify factors associated with good health gains. Studies identified in an earlier systematic review and meta-analysis were supplemented with more recently published randomized controlled trials. A total of 28 separate studies of randomized controlled trials of HWT with 39 intervention arms were included in the analysis. Heterogeneity was studied using the "metareg" command in Stata. Initial analyses with single candidate predictors were undertaken, and all variables significant at the P < 0.2 level were included in a final regression model. Further analyses were done to estimate the effect of the interventions over time by Monte Carlo modeling using @Risk and the parameter estimates from the final regression model. The overall effect size of all unblinded studies was relative risk = 0.56 (95% confidence intervals 0.51-0.63), but after adjusting for bias due to lack of blinding the effect size was much lower (RR = 0.85, 95% CI = 0.76-0.97). Four main variables were significant predictors of intervention effectiveness in the multipredictor meta-regression model: log duration of study follow-up (regression coefficient of log effect size = 0.186, standard error (SE) = 0.072), whether or not the study was blinded (coefficient 0.251, SE 0.066), being conducted in an emergency setting (coefficient -0.351, SE 0.076), and intervention type. Compared to the ceramic filter, all other interventions were much less effective (Biosand 0.247, SE 0.073; chlorine and safe water storage 0.295, SE 0.061; combined coagulant-chlorine 0.2349, SE 0.067; SODIS 0.302, SE 0.068). A Monte Carlo model predicted that over 12 months ceramic filters were likely to still be effective at reducing disease, whereas SODIS, chlorination, and coagulation-chlorination had little if any benefit. Indeed, these three interventions are predicted to have the same or less effect than would be expected due purely to reporting bias in unblinded studies. With the currently available evidence, ceramic filters are the most effective form of HWT in the long term; disinfection-only interventions, including SODIS, appear to have little if any long-term public health benefit.

  5. Impact of communities, health, and emotional-related factors on smoking use: comparison of joint modeling of mean and dispersion and Bayes' hierarchical models on add health survey.

    PubMed

    Pu, Jie; Fang, Di; Wilson, Jeffrey R

    2017-02-03

    The analysis of correlated binary data is commonly addressed through the use of conditional models with random effects included in the systematic component, as opposed to generalized estimating equations (GEE) models that address the random component. Since the joint distribution of the observations is usually unknown, the conditional distribution is a natural approach. Our objective was to compare the fit of different binary models for correlated data on tobacco use. We advocate that the joint modeling of the mean and dispersion may at times be just as adequate. We assessed the ability of these models to account for the intraclass correlation. In so doing, we concentrated on fitting logistic regression models to address smoking behaviors. Frequentist and Bayes' hierarchical models were used to predict conditional probabilities, and the joint modeling (GLM and GAM) models were used to predict marginal probabilities. These models were fitted to National Longitudinal Study of Adolescent to Adult Health (Add Health) data on tobacco use. We found that people were less likely to smoke if they had higher income, had high school or higher education, and were religious. Individuals were more likely to smoke if they had abused drugs or alcohol, spent more time on TV and video games, and had been arrested. Moreover, individuals who drank alcohol early in life were more likely to be regular smokers. Children who experienced mistreatment from their parents were more likely to use tobacco regularly. The joint modeling of the mean and dispersion models offered a flexible and meaningful method of addressing the intraclass correlation. They do not require one to identify random effects nor distinguish one level of the hierarchy from another. Moreover, once one identifies the significant random effects, one can obtain results similar to those of the random coefficient models. We found that the set of marginal models accounting for extravariation through the additional dispersion submodel produced similar results with regard to inferences and predictions. Moreover, both marginal and conditional models demonstrated similar predictive power.

  6. The effects of intermittency on statistical characteristics of turbulence and scale similarity of breakdown coefficients

    NASA Astrophysics Data System (ADS)

    Novikov, E. A.

    1990-05-01

    The influence of intermittency on turbulent diffusion is expressed in terms of the statistics of the dissipation field. The high-order moments of relative diffusion are obtained by using the concept of scale similarity of the breakdown coefficients (bdc). The method of bdc is useful for obtaining new models and general results, which can then be expressed in terms of multifractals. In particular, the concavity and other properties of the spectral codimension are proved. Special attention is paid to logarithmically periodic modulations. A parametrization of small-scale intermittent turbulence, which can be used for large-eddy simulation, is presented. The effect of molecular viscosity is taken into account in the spirit of the renormalization group, but without spectral series, ɛ expansion, or fictitious random forces.

  7. Simulating intrafraction prostate motion with a random walk model.

    PubMed

    Pommer, Tobias; Oh, Jung Hun; Munck Af Rosenschöld, Per; Deasy, Joseph O

    2017-01-01

    Prostate motion during radiation therapy (i.e., intrafraction motion) can cause unwanted loss of radiation dose to the prostate and increased dose to the surrounding organs at risk. A compact but general statistical description of this motion could be useful for simulation of radiation therapy delivery or margin calculations. We investigated whether prostate motion could be modeled with a random walk model. Prostate motion recorded during 548 radiation therapy fractions in 17 patients was analyzed and used as input for a random walk prostate motion model. The recorded motion was categorized on the basis of whether any transient excursions (i.e., rapid prostate motion in the anterior and superior direction followed by a return) occurred in the trace; transient motion was modeled separately as a large step in the anterior/superior direction followed by a returning large step. Random walk simulations were conducted with and without added artificial transient motion, using either motion data from all observed traces or only traces without transient excursions as model input, respectively. A general estimate of motion was derived with reasonable agreement between simulated and observed traces, especially during the first 5 minutes of the excursion-free simulations. Simulated and observed diffusion coefficients agreed within 0.03, 0.2 and 0.3 mm²/min in the left/right, superior/inferior, and anterior/posterior directions, respectively. A rapid increase in variance at the start of observed traces was difficult to reproduce and seemed to represent the patient's need to adjust before treatment. This could be estimated somewhat using artificial transient motion. Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during delivery of radiation therapy.
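
    The underlying random walk can be sketched as independent Gaussian steps per axis, with the diffusion coefficient recovered from the mean-squared displacement; the per-axis diffusion coefficients below are illustrative values, not the study's estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    dt = 1.0 / 60.0                               # time step: 1 s, in minutes
    n_steps, n_traces = 20 * 60, 500              # 20-minute fractions
    D = np.array([0.1, 0.3, 0.4])                 # mm^2/min per axis (illustrative)

    # Gaussian random walk: each step has variance 2 * D * dt per axis
    steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_traces, n_steps, 3))
    traces = steps.cumsum(axis=1)                 # prostate position over time

    # Recover D from the MSD at the final time:  MSD_axis = 2 * D * t
    t_total = n_steps * dt
    D_hat = (traces[:, -1, :] ** 2).mean(axis=0) / (2 * t_total)
    print("recovered D per axis (mm^2/min):", np.round(D_hat, 3))
    ```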

  8. Measurement of the absorption coefficient using the sound-intensity technique

    NASA Technical Reports Server (NTRS)

    Atwal, M.; Bernhard, R.

    1984-01-01

    The possibility of using the sound intensity technique to measure the absorption coefficient of a material is investigated. This technique measures the absorption coefficient by measuring the intensity incident on the sample and the net intensity reflected by the sample. Results obtained by this technique are compared with the standard techniques of measuring the change in reverberation time and the standing wave ratio in a tube, thereby calculating the random-incidence and normal-incidence absorption coefficients, respectively.

  9. Chapman-Enskog expansion for the Vicsek model of self-propelled particles

    NASA Astrophysics Data System (ADS)

    Ihle, Thomas

    2016-08-01

    Using the standard Vicsek model, I show how the macroscopic transport equations can be systematically derived from microscopic collision rules. The approach starts with the exact evolution equation for the N-particle probability distribution and, after making the mean-field assumption of molecular chaos, leads to a multi-particle Enskog-type equation. This equation is treated by a non-standard Chapman-Enskog expansion to extract the macroscopic behavior. The expansion includes terms up to third order in a formal expansion parameter ɛ, and involves a fast time scale. A self-consistent closure of the moment equations is presented that leads to a continuity equation for the particle density and a Navier-Stokes-like equation for the momentum density. Expressions for all transport coefficients in these macroscopic equations are given explicitly in terms of microscopic parameters of the model. The transport coefficients depend on specific angular integrals which are evaluated asymptotically in the limit of infinitely many collision partners, using an analogy to a random walk. The consistency of the Chapman-Enskog approach is checked by an independent calculation of the shear viscosity using a Green-Kubo relation.

  10. Optical parameters of Ge15Sb5Se80 and Ge15Sb5Te80 from ellipsometric measurements

    NASA Astrophysics Data System (ADS)

    Abdel-Wahab, F.; Ashraf, I. M.; Alomairy, S. E.

    2018-02-01

    The optical properties of Ge15Sb5Se80 (GSS) and Ge15Sb5Te80 (GST) films prepared by the thermal evaporation method were investigated in the photon energy range from 0.9 eV to 5 eV using a variable-angle spectroscopic ellipsometer. Combinations of multiple Gaussian and Tauc-Lorentz or Cody-Lorentz dispersion functions are used to fit the experimental data. The model parameters (Lorentz oscillator amplitude, resonance energy, oscillator width, optical band gap, and Urbach energy) of both GSS and GST films were calculated. Refractive indices and extinction coefficients of the films were determined. Analysis of the absorption coefficient shows the optical absorption edges of the GSS and GST films to be 1.6 eV and 0.89 eV, respectively. Manca's relation, based on the mean bond energy and the bond statistics of the chemically ordered model (COM) and the random covalent network model (RCNM), is applied to estimate the optical band gap (Eg) of the investigated films. Good agreement between experimental and calculated Eg is obtained.

  11. [Using ultraviolet-visible ( UV-Vis) absorption spectrum to estimate the dissolved organic matter (DOM) concentration in water, soils and sediments of typical water-level fluctuation zones of the Three Gorges Reservoir areas].

    PubMed

    Li, Lu-lu; Jiang, Tao; Lu, Song; Yan, Jin-long; Gao, Jie; Wei, Shi-qiang; Wang, Ding-yong; Guo, Nian; Zhao, Zhena

    2014-09-01

    Dissolved organic matter (DOM) is a very important component of terrestrial ecosystems. Chromophoric dissolved organic matter (CDOM) is a significant constituent of DOM, which can be measured by ultraviolet-visible (UV-Vis) absorption spectroscopy. Thus the relationship between CDOM and DOM was investigated and established using several types of models, including a single-wavelength model, a double-wavelength model, an absorption spectrum slope (S value) model and a three-wavelength model, based on the UV-Vis absorption coefficients of soil and sediment samples (sampled in July 2012) and water samples (sampled in November 2012), respectively. The results suggested that the three-wavelength model provided the best fit, with determination coefficients of 0.788, 0.933 and 0.856 for the water, soil and sediment data, respectively. Meanwhile, the nominal best model was validated with the UV-Vis data of 32 soil samples and 36 water samples randomly collected in 2013, showing RRMSE and MRE of 16.5% and 16.9%, respectively, for soil DOM samples, and 10.32% and 9.06%, respectively, for water DOM samples, which further suggested that the prediction accuracy was higher for water DOM samples than for soil DOM samples.

  12. Marginalized zero-altered models for longitudinal count data.

    PubMed

    Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A

    2016-10-01

    Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.
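
    The zero inflation these models target can be simulated by mixing a point mass at zero with an ordinary Poisson count; the sketch below (illustrative parameters) shows the observed zero fraction exceeding what the Poisson rate alone implies.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    n, lam, p_zero = 10000, 2.0, 0.3              # Poisson rate; structural-zero prob.
    structural_zero = rng.random(n) < p_zero
    counts = np.where(structural_zero, 0, rng.poisson(lam, size=n))

    print("observed zero fraction       :", (counts == 0).mean())
    print("Poisson-implied zero fraction:", np.exp(-lam))   # e^-2 ~ 0.135
    # A zero-inflated model estimates p_zero and lam jointly; the marginalized
    # version instead parameterizes the overall mean, so covariate effects are
    # directly interpretable as log relative rates.
    ```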

  13. Marginalized zero-altered models for longitudinal count data

    PubMed Central

    Tabb, Loni Philip; Tchetgen, Eric J. Tchetgen; Wellenius, Greg A.; Coull, Brent A.

    2015-01-01

    Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias. PMID:27867423

  14. Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States

    NASA Astrophysics Data System (ADS)

    Yang, J.; Astitha, M.; Schwartz, C. S.

    2017-12-01

    Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of a gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeast United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for the GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performance of the post-processing technique in a database composed of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real time.
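
    At a single station, the BLR correction reduces to a conjugate Gaussian posterior over the regression coefficients; a closed-form sketch with synthetic forecast-observation pairs and assumed prior and noise variances is:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Pairs of (raw forecast, observed) wind speeds at one hypothetical station
    forecast = rng.uniform(2, 25, size=80)
    observed = 1.5 + 0.85 * forecast + rng.normal(scale=1.2, size=80)

    X = np.column_stack([np.ones_like(forecast), forecast])
    sigma2, tau2 = 1.5**2, 10.0**2                # noise and prior variances (assumed)

    # Posterior over coefficients: N(mu, S), S = (X'X/s2 + I/t2)^-1, mu = S X'y / s2
    S = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
    mu = S @ X.T @ observed / sigma2

    corrected = X @ mu                            # post-processed forecast
    print("posterior coefficients (intercept, slope):", np.round(mu, 3))
    ```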

  15. Effectiveness Trial of Community-Based I Choose Life-Africa Human Immunodeficiency Virus Prevention Program in Kenya

    PubMed Central

    Adam, Mary B.

    2014-01-01

    We measured the effectiveness of a human immunodeficiency virus (HIV) prevention program developed in Kenya and carried out among university students. A total of 182 student volunteers were randomized into an intervention group who received a 32-hour training course as HIV prevention peer educators and a control group who received no training. Repeated measures assessed HIV-related attitudes, intentions, knowledge, and behaviors four times over six months. Data were analyzed using linear mixed models to compare the rate of change on 13 dependent variables that examined sexual risk behavior. Based on multi-level models, the slope coefficients for four variables showed reliable change in the hoped-for direction: abstinence from oral, vaginal, or anal sex in the last two months, condom attitudes, HIV testing, and refusal skill. The intervention demonstrated evidence of non-zero slope coefficients in the hoped-for direction on 12 of 13 dependent variables. The intervention reduced sexual risk behavior. PMID:24957544

  16. Effectiveness trial of community-based I Choose Life-Africa human immunodeficiency virus prevention program in Kenya.

    PubMed

    Adam, Mary B

    2014-09-01

    We measured the effectiveness of a human immunodeficiency virus (HIV) prevention program developed in Kenya and carried out among university students. A total of 182 student volunteers were randomized into an intervention group who received a 32-hour training course as HIV prevention peer educators and a control group who received no training. Repeated measures assessed HIV-related attitudes, intentions, knowledge, and behaviors four times over six months. Data were analyzed using linear mixed models to compare the rate of change on 13 dependent variables that examined sexual risk behavior. Based on multi-level models, the slope coefficients for four variables showed reliable change in the hoped-for direction: abstinence from oral, vaginal, or anal sex in the last two months, condom attitudes, HIV testing, and refusal skill. The intervention demonstrated evidence of non-zero slope coefficients in the hoped-for direction on 12 of 13 dependent variables. The intervention reduced sexual risk behavior. © The American Society of Tropical Medicine and Hygiene.

  17. Investigation of Biotransport in a Tumor With Uncertain Material Properties Using a Nonintrusive Spectral Uncertainty Quantification Method.

    PubMed

    Alexanderian, Alen; Zhu, Liang; Salloum, Maher; Ma, Ronghui; Yu, Meilin

    2017-09-01

    In this study, statistical models are developed for modeling uncertain heterogeneous permeability and porosity in tumors, and the resulting uncertainties in pressure and velocity fields during an intratumoral injection are quantified using a nonintrusive spectral uncertainty quantification (UQ) method. Specifically, the uncertain permeability is modeled as a log-Gaussian random field, represented using a truncated Karhunen-Loève (KL) expansion, and the uncertain porosity is modeled as a log-normal random variable. The efficacy of the developed statistical models is validated by simulating the concentration fields with permeability and porosity of different uncertainty levels. The irregularity in the concentration field bears reasonable visual agreement with that in MicroCT images from experiments. The pressure and velocity fields are represented using polynomial chaos (PC) expansions to enable efficient computation of their statistical properties. The coefficients in the PC expansion are computed using a nonintrusive spectral projection method with the Smolyak sparse quadrature. The developed UQ approach is then used to quantify the uncertainties in the random pressure and velocity fields. A global sensitivity analysis is also performed to assess the contribution of individual KL modes of the log-permeability field to the total variance of the pressure field. It is demonstrated that the developed UQ approach can effectively quantify the flow uncertainties induced by uncertain material properties of the tumor.
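
    To make the representation concrete, here is a minimal sketch of a truncated Karhunen-Loève expansion of a 1-D log-Gaussian permeability field, built from the eigendecomposition of an assumed exponential covariance kernel; the grid, correlation length, variance, and mode count are illustrative choices, not the paper's settings.

        # Illustrative truncated KL expansion of a log-Gaussian field.
        import numpy as np

        x = np.linspace(0.0, 1.0, 200)        # spatial grid
        ell, var = 0.2, 0.5                   # correlation length, variance (assumed)
        C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

        vals, vecs = np.linalg.eigh(C)        # eigenpairs, ascending order
        idx = np.argsort(vals)[::-1][:10]     # keep the 10 dominant KL modes
        vals, vecs = vals[idx], vecs[:, idx]

        rng = np.random.default_rng(2)
        xi = rng.standard_normal(10)          # i.i.d. standard normal coefficients
        log_k = (np.sqrt(vals) * xi) @ vecs.T # one truncated KL realization
        permeability = np.exp(log_k)          # log-Gaussian permeability field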

  18. Effect of α-stable sorptive waiting times on microbial transport in microflow cells

    NASA Astrophysics Data System (ADS)

    Bonilla, F. Alejandro; Cushman, John H.

    2002-09-01

    The interaction of bacteria in the fluid phase with pore walls of a porous material involves a wide range of effective reaction times which obey a diversity of substrate-bacteria adhesion conditions and adhesive mechanisms. For a transported species, this heterogeneity in sorption conditions occurs both in time and space. Modern experimental methods allow one to measure adhesive reaction times of individual bacteria. This detailed information may be incorporated into nonequilibrium transport-sorption models that capture the heterogeneity in reaction times caused by varying chemical conditions. We have carried out particle (Brownian dynamics) simulations of adhesive, self-motile bacteria convected between two infinite plates as a model for a microflow cell. The adhesive heterogeneity is included by introducing the adhesive reaction time (understood as the time spent at a solid boundary once the particle collides against it) as a random variable that can be infinite (irreversible sorption) or vary over a wide range of values. This is made possible by treating this reaction time random variable as having an α-stable probability distribution whose properties (e.g., infinite moments and long tails) are distinctive from the standard exponential distribution commonly used to model reversible sorption. In addition, the α-stable distribution is renormalizable and hence upscalable to complex porous media. Simulations are performed in a pressure-driven microflow cell. Bacteria motility (driven by an effective Brownian force) acts as a dispersive component in the convective field. Upon collision with the pore wall, bacteria attachment or detachment occurs. The time bacteria spend at the wall varies over a wide range of time scales. This model has the advantage of being parsimonious, that is, involving very few parameters to model complex irreversible or reversible adhesion in heterogeneous environments. It is shown that, as in Taylor dispersion, the characteristic time $t_b = b^2/D_0$, formed from the channel half-width $b$ and the Brownian bacteria motility (dispersion) coefficient $D_0$, controls the different adhesion regimes along with the value of α. Universal long-time scalings with respect to the dimensionless time $t^* = t/t_b$ exist for the mean position, $\langle X \rangle = V^{*}_{\mathrm{eff}} (t^{*})^{\theta^{*}}$, and the mean-square displacement, $\langle \Delta X^2 \rangle = D^{*}_{\mathrm{eff}} (t^{*})^{\gamma^{*}}$, and the corresponding coefficients were obtained. The model can account for a great many sorptive processes, including reversible and irreversible sorption, and sub- and superdispersive regimes, with just a few parameters.
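
    A toy version of the sorptive-waiting mechanism is easy to write down: particles advance advectively and, on randomly assumed wall collisions, accrue waiting times drawn from a power-law-tailed distribution. Pareto waiting times stand in for the one-sided α-stable law below, since both share the heavy tail that drives the anomalous regimes; the collision rate and all parameter values are assumptions.

        # Toy sketch, not the paper's simulator: advective steps alternate
        # with heavy-tailed sorptive waiting times (power-law tail, index alpha).
        import numpy as np

        rng = np.random.default_rng(3)
        alpha, n_particles, n_steps = 0.7, 5000, 200
        v, dt = 1.0, 1.0                           # advection speed, mobile step (assumed)

        position = np.zeros(n_particles)
        clock = np.zeros(n_particles)
        for _ in range(n_steps):
            position += v * dt                     # mobile (advective) step
            clock += dt
            stuck = rng.random(n_particles) < 0.1  # wall collision (assumed rate)
            # Pareto waiting time: survival ~ t**(-alpha), like a stable-law tail
            clock += np.where(stuck, rng.pareto(alpha, n_particles), 0.0)

        print(position.mean() / clock.mean())      # effective velocity, retarded below v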

  19. Random attractor of non-autonomous stochastic Boussinesq lattice system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Min, E-mail: zhaomin1223@126.com; Zhou, Shengfan, E-mail: zhoushengfan@yahoo.com

    2015-09-15

    In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations affected by time-dependent coupled coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of the random attractors as the intensity of the noise approaches zero.

  20. Impact of parametric uncertainty on estimation of the energy deposition into an irradiated brain tumor

    NASA Astrophysics Data System (ADS)

    Taverniers, Søren; Tartakovsky, Daniel M.

    2017-11-01

    Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
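
    For intuition, the sketch below applies nonintrusive collocation in a single random dimension: statistical moments of a model output are computed from Gauss-Hermite quadrature nodes and weights instead of Monte Carlo samples. The response function f is a placeholder, not the nonlinear radiation-diffusion model.

        # One-dimensional stochastic collocation sketch (assumptions mine).
        import numpy as np
        from numpy.polynomial.hermite_e import hermegauss

        def f(xi):
            # placeholder "model": a nonlinear response to a N(0,1) input
            return np.exp(0.3 * xi) / (1.0 + 0.1 * xi**2)

        nodes, weights = hermegauss(9)           # 9-point probabilists' rule
        weights = weights / np.sqrt(2 * np.pi)   # normalize to an N(0,1) expectation

        mean = np.sum(weights * f(nodes))
        var = np.sum(weights * f(nodes) ** 2) - mean**2
        print(mean, var)

        # Monte Carlo check needs far more model evaluations:
        xi = np.random.default_rng(4).standard_normal(200_000)
        print(f(xi).mean(), f(xi).var())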

  1. Patient satisfaction with ambulatory care in Germany: effects of patient- and medical practice-related factors.

    PubMed

    Auras, Silke; Ostermann, Thomas; de Cruppé, Werner; Bitzer, Eva-Maria; Diel, Franziska; Geraedts, Max

    2016-12-01

    The study aimed to illustrate the effect of the patients' sex, age, self-rated health and medical practice specialization on patient satisfaction. Secondary analysis of patient survey data using multilevel analysis (generalized linear mixed model, medical practice as random effect) with a sequential modelling strategy. We examined the effects of the patients' sex, age, self-rated health and medical practice specialization on four patient satisfaction dimensions: medical practice organization, information, interaction, professional competence. The study was performed in 92 German medical practices providing ambulatory care in general medicine, internal medicine or gynaecology. In total, 9888 adult patients participated in a patient survey using the validated 'questionnaire on satisfaction with ambulatory care-quality from the patient perspective [ZAP]'. We calculated four models for each satisfaction dimension, revealing regression coefficients with 95% confidence intervals (CIs) for all independent variables, using the Wald Chi-Square statistic for each modelling step (model validity) and LR tests to compare the model of each step with the previous model. The patients' sex and age had a weak effect (maximum regression coefficient 1.09, CI 0.39; 1.80), and the patients' self-rated health had the strongest positive effect (maximum regression coefficient 7.66, CI 6.69; 8.63) on satisfaction ratings. The effect of medical practice specialization was heterogeneous. All factors studied, specifically the patients' self-rated health, affected patient satisfaction. Adjustment should always be considered because it improves the comparability of patient satisfaction in medical practices with atypically varying patient populations and increases the acceptance of comparisons. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  2. Constraining Saturn's interior density profile from precision gravity field measurement obtained during Grand Finale

    NASA Astrophysics Data System (ADS)

    Movshovitz, N.; Fortney, J. J.; Helled, R.; Hubbard, W. B.; Mankovich, C.; Thorngren, D.; Wahl, S. M.; Militzer, B.; Durante, D.

    2017-12-01

    The external gravity field of a planetary body is determined by the distribution of mass in its interior. Therefore, a measurement of the external field, properly interpreted, tells us about the interior density profile, ρ(r), which in turn can be used to constrain the composition in the interior and thereby learn about the formation mechanism of the planet. Recently, very high precision measurements of the gravity coefficients for Saturn have been made by the radio science instrument on the Cassini spacecraft during its Grand Finale orbits. The resulting coefficients come with an associated uncertainty. The task of matching a given density profile to a given set of gravity coefficients is relatively straightforward, but the question of how to best account for the uncertainty is not. In essentially all prior work on matching models to gravity field data, inferences about planetary structure have rested on assumptions regarding the imperfectly known H/He equation of state and the assumption of an adiabatic interior. Here we wish to vastly expand the phase space of such calculations. We present a framework for describing all the possible interior density structures of a Jovian planet constrained by a given set of gravity coefficients and their associated uncertainties. Our approach is statistical. We produce a random sample of ρ(a) curves drawn from the underlying (and unknown) probability distribution of all curves, where ρ is the density on an interior level surface with equatorial radius a. Since the resulting set of density curves is a random sample, that is, curves appear with frequency proportional to the likelihood of their being consistent with the measured gravity, we can compute probability distributions for any quantity that is a function of ρ, such as central pressure, oblateness, core mass and radius, etc. Our approach is also Bayesian, in that it can utilize any prior assumptions about the planet's interior, as necessary, without being overly constrained by them. We apply this approach to produce a sample of Saturn interior models based on gravity data from the Grand Finale orbits and discuss their implications.
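
    The sampling idea can be caricatured with a Metropolis walk, shown below with my own toy stand-in for the real theory-of-figures forward model: profile parameters are accepted with frequency proportional to their likelihood given measured gravity coefficients and uncertainties. The J2/J4 values, errors, and forward map are purely illustrative.

        # Schematic Metropolis sampler; forward() is a toy stand-in for the
        # actual gravity-coefficient calculation from a density profile.
        import numpy as np

        rng = np.random.default_rng(5)
        J_obs = np.array([1.629e-2, -9.3e-4])     # illustrative J2, J4 "measurements"
        J_err = np.array([1e-6, 1e-6])            # illustrative uncertainties

        def forward(theta):
            a, b = theta                          # toy profile parameters
            return np.array([a * 1e-2, -b * 1e-4])

        def log_like(theta):
            r = (forward(theta) - J_obs) / J_err
            return -0.5 * np.sum(r**2)

        theta = np.array([1.6, 9.0])
        samples = []
        for _ in range(20_000):
            prop = theta + 1e-3 * rng.standard_normal(2)
            if np.log(rng.random()) < log_like(prop) - log_like(theta):
                theta = prop
            samples.append(theta.copy())          # posterior sample of parameters
        print(np.mean(samples, axis=0))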

  3. Random walk study of electron motion in helium in crossed electromagnetic fields

    NASA Technical Reports Server (NTRS)

    Englert, G. W.

    1972-01-01

    Random walk theory, previously adapted to electron motion in the presence of an electric field, is extended to include a transverse magnetic field. In principle, the random walk approach avoids mathematical complexity and concomitant simplifying assumptions and permits determination of energy distributions and transport coefficients within the accuracy of available collisional cross section data. Application is made to a weakly ionized helium gas. Time of relaxation of electron energy distribution, determined by the random walk, is described by simple expressions based on energy exchange between the electron and an effective electric field. The restrictive effect of the magnetic field on electron motion, which increases the required number of collisions per walk to reach a terminal steady state condition, as well as the effect of the magnetic field on electron transport coefficients and mean energy can be quite adequately described by expressions involving only the Hall parameter.

  4. Dual control of flow field heterogeneity and immobile porosity on non-Fickian transport in Berea sandstone

    NASA Astrophysics Data System (ADS)

    Gjetvaj, Filip; Russian, Anna; Gouze, Philippe; Dentz, Marco

    2015-10-01

    Both flow field heterogeneity and mass transfer between mobile and immobile domains have been studied separately for explaining observed anomalous transport. Here we investigate non-Fickian transport using high-resolution 3-D X-ray microtomographic images of Berea sandstone containing microporous cement with pore size below the setup resolution. Transport is computed for a set of representative elementary volumes and results from advection and diffusion in the resolved macroporosity (mobile domain) and diffusion in the microporous phase (immobile domain), where the effective diffusion coefficient is calculated from the measured local porosity using a phenomenological model that includes a porosity threshold (ϕ_θ) below which diffusion is null and an exponent n that characterizes the tortuosity-porosity power-law relationship. We show that both flow field heterogeneity and microporosity trigger anomalous transport. Breakthrough curve (BTC) tailing is positively correlated to microporosity volume and mobile-immobile interface area. The sensitivity analysis showed that the BTC tailing increases with the value of ϕ_θ, due to the increase of the diffusion path tortuosity, until the volume of the microporosity becomes negligible. Furthermore, increasing the value of n leads to an increase in the standard deviation of the distribution of effective diffusion coefficients, which in turn results in an increase of the BTC tailing. Finally, we propose a continuous time random walk upscaled model where the transition time is the sum of independently distributed random variables characterized by specific distributions. It allows modeling of 1-D equivalent macroscopic transport honoring both the control of the flow field heterogeneity and the multirate mass transfer between mobile and immobile domains.
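
    The upscaled picture admits a compact sketch: each spatial transition costs a time that is the sum of an advective (mobile) part and a heavy-tailed immobile part, and the breakthrough curve is the distribution of total arrival times. The exponential and Pareto choices below are assumed stand-ins for the specific distributions derived in the paper.

        # Minimal CTRW sketch: transition time = mobile + immobile contribution.
        import numpy as np

        rng = np.random.default_rng(6)
        n_particles, n_jumps = 20_000, 100        # arrival after 100 unit jumps

        t_mob = rng.exponential(1.0, (n_particles, n_jumps))  # mobile-domain time
        t_imm = rng.pareto(1.5, (n_particles, n_jumps))       # heavy-tailed immobile time
        arrival = np.sum(t_mob + t_imm, axis=1)

        # Late-time tail of the arrival-time (breakthrough) distribution is
        # controlled by the immobile exponent, giving non-Fickian tailing.
        print(np.percentile(arrival, [50, 95, 99.9]))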

  5. Not accounting for interindividual variability can mask habitat selection patterns: a case study on black bears.

    PubMed

    Lesmerises, Rémi; St-Laurent, Martin-Hugues

    2017-11-01

    Habitat selection studies conducted at the population scale commonly aim to describe general patterns that could improve our understanding of the limiting factors in species-habitat relationships. Researchers often consider interindividual variation in selection patterns to control for its effects and avoid pseudoreplication by using mixed-effect models that include individuals as random factors. Here, we highlight common pitfalls and possible misinterpretations of this strategy by describing habitat selection of 21 black bears Ursus americanus. We used Bayesian mixed-effect models and compared results obtained when using random intercept (i.e., population level) versus calculating individual coefficients for each independent variable (i.e., individual level). We then related interindividual variability to individual characteristics (i.e., age, sex, reproductive status, body condition) in a multivariate analysis. The assumption of comparable behavior among individuals was verified only in 40% of the cases in our seasonal best models. Indeed, we found strong and opposite responses among sampled bears and individual coefficients were linked to individual characteristics. For some covariates, contrasted responses canceled each other out at the population level. In other cases, interindividual variability was concealed by the composition of our sample, with the majority of the bears (e.g., old individuals and bears in good physical condition) driving the population response (e.g., selection of young forest cuts). Our results stress the need to consider interindividual variability to avoid misinterpretation and uninformative results, especially for a flexible and opportunistic species. This study helps to identify some ecological drivers of interindividual variability in bear habitat selection patterns.

  6. Pharmacodynamics and Medicinal Chemistry of an External Chinese Herbal Formula for Mammary Precancerous Lesions

    PubMed Central

    Zhang, Guijuan; Ma, Yi; Fan, Hongxia

    2017-01-01

    Ruyan Neixiao Cream (RYNXC) is a traditional Chinese herbal formula for treating mammary precancerous disease. This study was carried out to investigate the in vivo anticancer effect of RYNXC and its multiple constituents. 32 virgin Sprague-Dawley rats were randomly divided into a blank control group (BC), a mammary precancer model group (MODEL), a tamoxifen group (TAM), and a Ruyan Neixiao Cream group (RYNXC). The TAM group was treated with tamoxifen; the RYNXC group was treated with Ruyan Neixiao Cream. The chromatographic separation was performed by high performance liquid chromatography (HPLC) coupled with mass spectrometry (MS). RYNXC showed significant improvement in erythrocyte aggregation index (EAI), hematocrit (HCT), fibrinogen (FIB), spleen coefficient, and uterus coefficient compared with MODEL. In the RYNXC and TAM groups, atypical hyperplasia was observed in pathological mammary tissues; meanwhile, in the MODEL group, ductal carcinoma in situ was observed. Moreover, fifteen compounds were characterized according to HPLC-MS data, including organic acids, tannin, alkaloid, volatile oil, anthraquinones, and flavonoids. The study suggests that RYNXC is an effective Chinese herbal formula for mammary precancerous lesions and provides a scientific basis for the quality standard and the pharmacology of RYNXC. It will be beneficial to the future clinical application of RYNXC. PMID:28811827

  7. Simple point vortex model for the relaxation of 2D superfluid turbulence in a Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Kim, Joon Hyun; Kwon, Woo Jin; Shin, Yong-Il

    2016-05-01

    In a recent experiment, it was found that the dissipative evolution of a corotating vortex pair in a trapped Bose-Einstein condensate is well described by a point vortex model with longitudinal friction on the vortex motion, and the thermal friction coefficient was determined as a function of sample temperature. In this poster, we present a numerical study on the relaxation of 2D superfluid turbulence based on the dissipative point vortex model. We consider a homogeneous system in a cylindrical trap having randomly distributed vortices and implement the vortex-antivortex pair annihilation by removing a pair when its separation becomes smaller than a certain threshold value. We characterize the relaxation of the turbulent vortex states with the decay time required for the vortex number to be reduced to a quarter of the initial number. We find that the vortex decay time is inversely proportional to the thermal friction coefficient. In particular, we observe that the decay times obtained in this work show good quantitative agreement with the experimental results, indicating that in spite of its simplicity, the point vortex model reasonably captures the physics in the relaxation dynamics of the real system.

  8. Linear mixed-effects models to describe length-weight relationships for yellow croaker (Larimichthys polyactis) along the north coast of China.

    PubMed

    Ma, Qiuyun; Jiao, Yan; Ren, Yiping

    2017-01-01

    In this study, length-weight relationships and relative condition factors were analyzed for Yellow Croaker (Larimichthys polyactis) along the north coast of China. Data covered six regions from north to south: Yellow River Estuary, Coastal Waters of Northern Shandong, Jiaozhou Bay, Coastal Waters of Qingdao, Haizhou Bay, and South Yellow Sea. In total, 3,275 individuals were collected over six years (2008, 2011-2015). One generalized linear model, two simple linear models, and nine linear mixed-effect models that applied the effects of regions and/or years to the coefficient a and/or the exponent b were studied and compared. Among these twelve models, the linear mixed-effect model with random effects from both regions and years fit the data best, with the lowest Akaike information criterion value and mean absolute error. In this model, the estimated a was 0.0192, with 95% confidence interval 0.0178-0.0308, and the estimated exponent b was 2.917, with 95% confidence interval 2.731-2.945. Estimates for a and b with the random effects in intercept and coefficient from Region and Year ranged from 0.013 to 0.023 and from 2.835 to 3.017, respectively. Both regions and years had effects on the parameters a and b, and the effects of years were shown to be much larger than those of regions. Except for the Coastal Waters of Northern Shandong, a decreased from north to south. Condition factors relative to the reference years 1960, 1986, 2005, 2007, 2008-2009, and 2010 revealed that the body shape of Yellow Croaker has become thinner in recent years. Furthermore, relative condition factors varied among months, years, regions, and lengths. The values of a and the relative condition factors decreased as environmental pollution worsened; length-weight relationships could therefore serve as an indicator of environmental quality. Results from this study provide a basic description of the current condition of Yellow Croaker along the north coast of China.
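
    The fixed-effect core of these models is the allometric fit W = a L^b, estimated by ordinary least squares on log-transformed data; a mixed-effect version would add region- and year-level deviations to log(a) and b. The sketch below recovers a and b from synthetic data and is not the study's analysis.

        # Hedged sketch of the basic length-weight fit on synthetic data.
        import numpy as np

        rng = np.random.default_rng(7)
        L = rng.uniform(5, 30, 500)                               # lengths (synthetic)
        W = 0.0192 * L**2.917 * np.exp(rng.normal(0, 0.1, 500))   # weights (synthetic)

        b, log_a = np.polyfit(np.log(L), np.log(W), 1)            # slope, intercept
        print(np.exp(log_a), b)                                   # near 0.0192 and 2.917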

  9. Combination of inquiry learning model and computer simulation to improve mastery concept and the correlation with critical thinking skills (CTS)

    NASA Astrophysics Data System (ADS)

    Nugraha, Muhamad Gina; Kaniawati, Ida; Rusdiana, Dadi; Kirana, Kartika Hajar

    2016-02-01

    The purposes of physics learning at high school include mastering physics concepts, cultivating a scientific (including critical) attitude, and developing inductive and deductive reasoning skills. According to Ennis et al., inductive and deductive reasoning skills are part of critical thinking. Preliminary studies show that both competencies are poorly achieved: student learning outcomes are low, and learning processes are not conducive to cultivating critical thinking (teacher-centered learning). One learning model predicted to increase concept mastery and train CTS is the inquiry learning model aided by computer simulations. In this model, students are given the opportunity to be actively involved in experiments and also receive clear explanations through the computer simulations. In a study with a randomized control-group pretest-posttest design, we found that the inquiry learning model aided by computer simulations improves students' concept mastery significantly more than the conventional (teacher-centered) method. With the inquiry learning model aided by computer simulations, 20% of students showed high CTS, 63.3% medium, and 16.7% low. CTS contributed strongly to students' concept mastery, with a correlation coefficient of 0.697, and contributed moderately to the enhancement of concept mastery, with a correlation coefficient of 0.603.

  10. Novel formulation of the ℳ model through the Generalized-K distribution for atmospheric optical channels.

    PubMed

    Garrido-Balsells, José María; Jurado-Navas, Antonio; Paris, José Francisco; Castillo-Vazquez, Miguel; Puerta-Notario, Antonio

    2015-03-09

    In this paper, a novel and deeper physical interpretation of the recently published Málaga or ℳ statistical distribution is provided. This distribution, which has gained wide acceptance in the scientific community, models the optical irradiance scintillation induced by atmospheric turbulence. Here, the analytical expressions previously published are modified in order to express them as a mixture of the known Generalized-K and discrete Binomial and Negative Binomial distributions. In particular, the probability density function (pdf) of the ℳ model is now obtained as a linear combination of Generalized-K pdfs, in which the coefficients depend directly on the parameters of the ℳ distribution. In this way, the Málaga model can be physically interpreted as a superposition of different optical sub-channels, each of them described by the corresponding Generalized-K fading model and weighted by the ℳ-dependent coefficients. The expressions proposed here are simpler than the equations of the original ℳ model and are validated by means of numerical simulations, by generating ℳ-distributed random sequences and their associated histograms. This novel interpretation of the Málaga statistical distribution provides a valuable tool for analyzing the performance of atmospheric optical channels under every turbulence condition.
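
    Since a Generalized-K variate can be generated as the product of two independent gamma variates (a large-scale and a small-scale fading term), the building block of such simulations is short to sketch; an ℳ-distributed sequence would then mix several such terms with binomially distributed weights. The shape parameters below are assumed.

        # Illustrative generation of Generalized-K distributed irradiance.
        import numpy as np

        rng = np.random.default_rng(8)
        alpha, m, n = 4.0, 2.0, 1_000_000       # large- and small-scale shapes (assumed)

        x = rng.gamma(alpha, 1.0 / alpha, n)    # unit-mean gamma, large-scale term
        y = rng.gamma(m, 1.0 / m, n)            # unit-mean gamma, small-scale term
        irradiance = x * y                      # Generalized-K sample, unit mean

        hist, edges = np.histogram(irradiance, bins=200, density=True)
        print(irradiance.mean())                # close to 1.0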

  11. Modelling Furrow Irrigation-Induced Erosion on a Sandy Loam Soil in Samaru, Northern Nigeria

    PubMed Central

    Dibal, Jibrin M.; Igbadun, H. E.; Ramalan, A. A.; Mudiare, O. J.

    2014-01-01

    Assessment of soil erosion and sediment yield in furrow irrigation is limited in Samaru-Zaria. Data were collected in 2009 and 2010 and used to develop a dimensionless model for predicting furrow irrigation-induced erosion (FIIE) using the dimensional analysis approach, considering stream size, furrow length, furrow width, soil infiltration rate, hydraulic shear stress, soil erodibility, and time of water flow in the furrows as the building components. One-liter water-sediment samples were collected from the furrows during irrigations, from which sediment concentrations and soil erosion per furrow were calculated. Stream sizes Q (2.5, 1.5, and 0.5 l/s), furrow lengths X (90 and 45 m), and furrow widths W (0.75 and 0.9 m) constituted the experimental factors, randomized in a split-plot design with four replications. Water flow into and out of the furrows was measured using cutthroat flumes. The model produced reasonable predictions relative to field measurements, with a coefficient of determination R^2 in the neighborhood of 0.8, a model prediction efficiency (NSE) of 0.70, a high index of agreement (0.9408), and a low coefficient of variability (0.4121). The model is most sensitive to water stream size. The variables in the model are easily measurable, which makes it practical and easily adoptable. PMID:27471748
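
    For reference, the two headline goodness-of-fit measures are straightforward to compute; the sketch below defines R^2 as the squared Pearson correlation and the Nash-Sutcliffe efficiency (NSE) in the usual way, on invented observed/predicted pairs.

        # R^2 and Nash-Sutcliffe efficiency on illustrative data.
        import numpy as np

        def r_squared(obs, pred):
            return np.corrcoef(obs, pred)[0, 1] ** 2

        def nse(obs, pred):
            return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

        obs = np.array([1.2, 0.8, 1.9, 2.4, 1.1])    # invented sediment yields
        pred = np.array([1.0, 0.9, 2.1, 2.2, 1.3])
        print(r_squared(obs, pred), nse(obs, pred))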

  12. Using structural equation modeling for network meta-analysis.

    PubMed

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

    Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As random effects are explicitly modeled as latent variables in SEM, it is very flexible for analysts to specify complex random effect structures and to impose linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the unrestricted weighted least squares (UWLS) method can be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of the two UWLS models were identical to those in the fixed effect model, but the confidence intervals were greater. This is consistent with results from the traditional pairwise meta-analyses. Compared to the UWLS model with a common variance adjustment factor, the UWLS model with a unique variance adjustment factor has greater confidence intervals when the heterogeneity is larger in the pairwise comparison. The UWLS model with a unique variance adjustment factor reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis remains to be explored.

  13. Depression, distress and self-efficacy: The impact on diabetes self-care practices.

    PubMed

    Devarajooh, Cassidy; Chinna, Karuthan

    2017-01-01

    The prevalence of type 2 diabetes is increasing in Malaysia, and people with diabetes have been reported to suffer from depression and diabetes distress, which influence their self-efficacy in performing diabetes self-care practices. This interviewer-administered, cross-sectional study, conducted in the district of Hulu Selangor, Malaysia, involving 371 randomly selected patients with type 2 diabetes recruited from 6 health clinics, aimed to examine a conceptual model of the association between depression, diabetes distress and self-efficacy with diabetes self-care practices, using the partial least squares approach of structural equation modeling. In this study, diabetes self-care practices were similar regardless of sex, age group, ethnicity, education level, diabetes complications or type of diabetes medication. This study found that self-efficacy had a direct effect on diabetes self-care practice (path coefficient = 0.438, p<0.001). Self-care was not directly affected by depression and diabetes distress, but indirectly by depression (path coefficient = -0.115, p<0.01) and diabetes distress (path coefficient = -0.122, p<0.001) via self-efficacy. In conclusion, to improve self-care practices, effort must be focused on enhancing self-efficacy levels, while not forgetting to deal with depression and diabetes distress, especially among those with poorer levels of self-efficacy.

  14. Depression, distress and self-efficacy: The impact on diabetes self-care practices

    PubMed Central

    2017-01-01

    The prevalence of type 2 diabetes is increasing in Malaysia, and people with diabetes have been reported to suffer from depression and diabetes distress, which influence their self-efficacy in performing diabetes self-care practices. This interviewer-administered, cross-sectional study, conducted in the district of Hulu Selangor, Malaysia, involving 371 randomly selected patients with type 2 diabetes recruited from 6 health clinics, aimed to examine a conceptual model of the association between depression, diabetes distress and self-efficacy with diabetes self-care practices, using the partial least squares approach of structural equation modeling. In this study, diabetes self-care practices were similar regardless of sex, age group, ethnicity, education level, diabetes complications or type of diabetes medication. This study found that self-efficacy had a direct effect on diabetes self-care practice (path coefficient = 0.438, p<0.001). Self-care was not directly affected by depression and diabetes distress, but indirectly by depression (path coefficient = -0.115, p<0.01) and diabetes distress (path coefficient = -0.122, p<0.001) via self-efficacy. In conclusion, to improve self-care practices, effort must be focused on enhancing self-efficacy levels, while not forgetting to deal with depression and diabetes distress, especially among those with poorer levels of self-efficacy. PMID:28362861

  15. Deep Learning Role in Early Diagnosis of Prostate Cancer

    PubMed Central

    Reda, Islam; Khalil, Ashraf; Elmogy, Mohammed; Abou El-Fetouh, Ahmed; Shalaby, Ahmed; Abou El-Ghar, Mohamed; Elmaghraby, Adel; Ghazal, Mohammed; El-Baz, Ayman

    2018-01-01

    The objective of this work is to develop a computer-aided diagnostic system for early diagnosis of prostate cancer. The presented system integrates both clinical biomarkers (prostate-specific antigen) and features extracted from diffusion-weighted magnetic resonance imaging collected at multiple b values. The presented system performs 3 major processing steps. First, the prostate is delineated using a hybrid approach that combines a level-set model with nonnegative matrix factorization. Second, the diffusion parameters are estimated and normalized, namely the apparent diffusion coefficients of the delineated prostate volumes at different b values, followed by refinement of those apparent diffusion coefficients using a generalized Gaussian Markov random field model and construction of the cumulative distribution functions of the processed apparent diffusion coefficients at multiple b values. In parallel, a K-nearest neighbor classifier is employed to transform the prostate-specific antigen results into diagnostic probabilities. Finally, those prostate-specific antigen-based probabilities are integrated with the initial diagnostic probabilities obtained using stacked nonnegativity constraint sparse autoencoders that employ apparent diffusion coefficient-cumulative distribution functions for better diagnostic accuracy. Experiments conducted on 18 diffusion-weighted magnetic resonance imaging data sets achieved 94.4% diagnostic accuracy (sensitivity = 88.9% and specificity = 100%), which indicates the promising performance of the presented computer-aided diagnostic system. PMID:29804518

  16. Genotypic gain with simultaneous selection of production, nutrition, and culinary traits in cowpea crosses and backcrosses using mixed models.

    PubMed

    Oliveira, D G; Rocha, M M; Damasceno-Silva, K J; Sá, F V; Lima, L R L; Resende, M D V

    2017-08-17

    The aim of this study was to estimate the genotypic gain with simultaneous selection of production, nutrition, and culinary traits in cowpea crosses and backcrosses and to compare different selection indexes. Eleven cowpea populations were evaluated in a randomized complete block design with four replications. Fourteen traits were evaluated, and the following parameters were estimated: genotypic variation coefficient, genotypic determination coefficient, experimental quality indicator and selection reliability, estimated genotypic values (BLUE), genotypic correlation coefficients among traits, and genotypic gain with simultaneous selection of all traits. The genotypic gain was estimated based on three selection indexes: classical, multiplicative, and the sum of ranks. The genotypic variation coefficient was higher than the environmental variation coefficient for the number of days to the start of flowering, plant type, the weight of one hundred grains, grain index, and protein concentration. The majority of the traits presented genotypic determination coefficients of medium to high magnitude. Increases in the production components are associated with decreases in protein concentration, and increases in precocity lead to decreases in protein concentration and cooking time. The index based on the sum of ranks was the best alternative for simultaneous selection of traits in the cowpea segregating populations resulting from the crosses and backcrosses evaluated, with emphasis on the F4BC12, F4C21, and F4C12 populations, which had the highest genotypic gains.
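
    The sum-of-ranks index (often attributed to Mulamba and Mock) is simple to compute: rank the populations within each trait, sum the ranks across traits, and select the populations with the smallest sums. The sketch below uses synthetic trait values in which higher is better; it illustrates the index, not the study's data.

        # Rank-sum selection index on synthetic trait values (rows = populations).
        import numpy as np

        traits = np.array([[3.1, 0.8, 24.0],
                           [2.7, 1.1, 22.5],
                           [3.4, 0.9, 25.2],
                           [2.9, 1.0, 23.8]])

        order = np.argsort(-traits, axis=0)      # descending: rank 1 = best
        ranks = np.empty_like(order)
        for j in range(traits.shape[1]):
            ranks[order[:, j], j] = np.arange(1, traits.shape[0] + 1)

        index = ranks.sum(axis=1)                # sum of ranks per population
        print(np.argsort(index))                 # selection order, best first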

  17. PET-CT image fusion using random forest and à-trous wavelet transform.

    PubMed

    Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo

    2018-03-01

    New image fusion rules for multimodal medical images are proposed in this work. The image fusion rules are defined by the random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method is threefold. First, source images are decomposed into approximation and detail coefficients using the AWT. Second, a random forest is used to choose pixels from the approximation and detail coefficients to form the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments were performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform was also implemented on these slices. A new image fusion performance measure, along with 4 existing measures, is presented, which helps to compare the performance of the 2 pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative quality and that the new measure is meaningful. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Robustness and structure of complex networks

    NASA Astrophysics Data System (ADS)

    Shao, Shuai

    This dissertation covers the two major parts of my PhD research on statistical physics and complex networks: i) modeling a new type of attack -- localized attack, and investigating robustness of complex networks under this type of attack; ii) discovering the clustering structure in complex networks and its influence on the robustness of coupled networks. Complex networks appear in every aspect of our daily life and are widely studied in Physics, Mathematics, Biology, and Computer Science. One important property of complex networks is their robustness under attacks, which depends crucially on the nature of attacks and the structure of the networks themselves. Previous studies have focused on two types of attack: random attack and targeted attack, which, however, are insufficient to describe many real-world damages. Here we propose a new type of attack -- localized attack, and study the robustness of complex networks under this type of attack, both analytically and via simulation. On the other hand, we also study the clustering structure in the network, and its influence on the robustness of a complex network system. In the first part, we propose a theoretical framework to study the robustness of complex networks under localized attack based on percolation theory and the generating function method. We investigate the percolation properties, including the critical threshold of the phase transition p_c and the size of the giant component P_∞. We compare localized attack with random attack and find that while random regular (RR) networks are more robust against localized attack, Erdős-Rényi (ER) networks are equally robust under both types of attacks. As for scale-free (SF) networks, their robustness depends crucially on the degree exponent λ. The simulation results show perfect agreement with theoretical predictions. We also test our model on two real-world networks: a peer-to-peer computer network and an airline network, and find that the real-world networks are much more vulnerable to localized attack compared with random attack. In the second part, we extend the tree-like generating function method to incorporate clustering structure in complex networks. We study the robustness of a complex network system, especially a network of networks (NON), with clustering structure in each network. We find that the system becomes less robust as we increase the clustering coefficient of each network. For a partially dependent network system, we also find that the influence of the clustering coefficient on network robustness decreases as we decrease the coupling strength, and the critical coupling strength q_c, at which the first-order phase transition changes to second-order, increases as we increase the clustering coefficient.
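
    A minimal simulation of localized attack, under my own assumptions about the removal protocol: starting from a random seed, nodes are removed in breadth-first order (a growing connected ball) until a fraction 1-p of the network is gone, and robustness is read off the relative size of the giant component.

        # Localized-attack percolation sketch on an ER network (networkx).
        import random
        import networkx as nx

        def localized_attack(G, p):
            """Remove a BFS-ordered neighborhood holding a fraction 1-p of nodes."""
            n_remove = int((1 - p) * G.number_of_nodes())
            seed = random.choice(list(G.nodes))
            ball = [seed] + [v for _, v in nx.bfs_edges(G, seed)]
            H = G.copy()
            H.remove_nodes_from(ball[:n_remove])  # sketch: assumes seed's component is large
            return H

        G = nx.erdos_renyi_graph(20_000, 4.0 / 20_000)   # ER network, mean degree 4
        for p in (0.9, 0.7, 0.5, 0.3):
            H = localized_attack(G, p)
            giant = max(nx.connected_components(H), key=len)
            print(p, len(giant) / G.number_of_nodes())   # P_infinity estimate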

  19. Comparison of experimental methods for estimating matrix diffusion coefficients for contaminant transport modeling

    DOE PAGES

    Telfeyan, Katherine Christina; Ware, Stuart Doug; Reimus, Paul William; ...

    2018-01-31

    Here, diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating effective matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of effective matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than effective matrix diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields effective matrix diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.

  20. Comparison of experimental methods for estimating matrix diffusion coefficients for contaminant transport modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Telfeyan, Katherine Christina; Ware, Stuart Doug; Reimus, Paul William

    Here, diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating effective matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of effective matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than effective matrix diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields effective matrix diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.

  1. Neither fixed nor random: weighted least squares meta-regression.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2017-03-01

    Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
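
    The unrestricted WLS estimator is short to state in code: weight each estimate by its inverse squared standard error, fit by weighted least squares, and estimate the multiplicative variance scale from the residuals rather than fixing it at 1 (fixed effects) or adding an additive between-study variance (random effects). The data below are invented for illustration.

        # Unrestricted WLS meta-regression sketch on invented estimates.
        import numpy as np

        effect = np.array([0.30, 0.12, 0.45, 0.08, 0.25])   # reported effects
        se = np.array([0.10, 0.05, 0.15, 0.04, 0.08])       # standard errors
        mod = np.array([1.0, 0.0, 1.0, 0.0, 1.0])           # study-level moderator

        X = np.column_stack([np.ones_like(effect), mod])
        W = np.diag(1.0 / se**2)                            # inverse-variance weights

        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effect)
        resid = effect - X @ beta
        phi = (resid @ W @ resid) / (len(effect) - 2)       # estimated, not fixed at 1
        cov = phi * np.linalg.inv(X.T @ W @ X)
        print(beta, np.sqrt(np.diag(cov)))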

  2. Molecular Modeling on Berberine Derivatives toward BuChE: An Integrated Study with Quantitative Structure-Activity Relationships Models, Molecular Docking, and Molecular Dynamics Simulations.

    PubMed

    Fang, Jiansong; Pang, Xiaocong; Wu, Ping; Yan, Rong; Gao, Li; Li, Chao; Lian, Wenwen; Wang, Qi; Liu, Ai-lin; Du, Guan-hua

    2016-05-01

    A dataset of 67 berberine derivatives for the inhibition of butyrylcholinesterase (BuChE) was studied based on the combination of quantitative structure-activity relationships models, molecular docking, and molecular dynamics methods. First, a series of berberine derivatives were reported, and their inhibitory activities toward butyrylcholinesterase (BuChE) were evaluated. In 2D quantitative structure-activity relationships studies, the best model, built by partial least squares, had a conventional correlation coefficient of the training set (R^2) of 0.883, a cross-validation correlation coefficient (Q^2_cv) of 0.777, and a conventional correlation coefficient of the test set (R^2_pred) of 0.775. The model was also confirmed by Y-randomization examination. In addition, molecular docking and molecular dynamics simulation were performed to better elucidate the inhibitory mechanism of three typical berberine derivatives (berberine, C2, and C55) toward BuChE. The predicted binding free energy results were consistent with the experimental data and showed that the van der Waals energy term (ΔE_vdw) difference played the most important role in differentiating the activity among the three inhibitors (berberine, C2, and C55). The developed quantitative structure-activity relationships models provide details on the fine relationship linking structure and activity and offer clues for structural modifications, and the molecular simulation helps to understand the inhibitory mechanism of the three typical inhibitors. In conclusion, the results of this study provide useful clues for new drug design and discovery of BuChE inhibitors from berberine derivatives. © 2015 John Wiley & Sons A/S.

  3. MANCOVA for one way classification with homogeneity of regression coefficient vectors

    NASA Astrophysics Data System (ADS)

    Mokesh Rayalu, G.; Ravisankar, J.; Mythili, G. Y.

    2017-11-01

    The MANOVA and MANCOVA are extensions of the univariate ANOVA and ANCOVA techniques to multidimensional or vector-valued observations. The assumption of a Gaussian distribution is replaced with the multivariate Gaussian distribution for the vector-valued data and residual terms in the statistical models of these techniques. The objective of MANCOVA is to determine whether there are statistically reliable mean differences between groups after adjusting the newly created variable for the covariates. When randomized assignment of samples or subjects to groups is not possible, multivariate analysis of covariance (MANCOVA) provides statistical matching of groups by adjusting the dependent variables as if all subjects scored the same on the covariates. In this research article, the MANCOVA technique is extended to a larger number of covariates, and the homogeneity of the regression coefficient vectors is also tested.

  4. Cellular polypropylene polymer foam as air-coupled ultrasonic transducer materials.

    PubMed

    Satyanarayan, L; Haberman, Michael R; Berthelot, Yves H

    2010-10-01

    Cellular polypropylene polymer foams, also known as ferroelectrets, are compelling candidates for air-coupled ultrasonic transducer materials because of their excellent acoustic impedance match to air and because they have a piezoelectric d(33) coefficient superior to that of PVDF. This study investigates the performance of ferroelectret transducers in the generation and reception of ultrasonic waves in air. As previous studies have noted, the piezoelectric coupling coefficients of these foams depend on the number, size, and distribution of charged voids in the microstructure. The present work studies the influence of these parameters both theoretically and experimentally. First, a three-dimensional model is employed to explain the variation of piezoelectric coupling coefficients, elastic stiffness, and dielectric permittivity as a function of void fraction based on void-scale physics and void geometry. Laser Doppler vibrometer (LDV) measurements of the effective d(33) coefficient of a specially fabricated prototype transmitting transducer are then shown which clearly indicate that the charged voids in the ferroelectret material are randomly distributed in the plane of the foam. The frequency-dependent dynamic d(33) coefficient is then reported from 50 to 500 kHz for different excitation voltages and shown to be largely insensitive to drive voltage. Lastly, two ferroelectret transducers are operated in transmit-receive mode and the received signal is shown to accurately represent the corresponding signal generated by the transmitting transducer as measured using LDV.

  5. No evidence for inbreeding avoidance in a natural population of song sparrows (Melospiza melodia).

    PubMed

    Keller, L F; Arcese, P

    1998-09-01

    We studied mate choice and inbreeding avoidance in a natural population of song sparrows (Melospiza melodia) on Mandarte Island, Canada. Inbreeding occurred regularly: 59% of all matings were between known relatives. We tested for inbreeding avoidance by comparing the observed levels of inbreeding to those expected if mate choice had been random with respect to relatedness. Independent of our assumptions about the availability of mates in the random mating model, we found that the expected and observed distributions of inbreeding coefficients were similar, as was the expected and observed frequency of close (f ≥ 0.125) inbreeding. Furthermore, there was no difference in relatedness between observed pairs and those that would have resulted had birds mated instead with their nearest neighbors. The only evidence to suggest any inbreeding avoidance was a reduced rate of parent-offspring matings as compared to one random mating model but not the other. Hence, despite substantial inbreeding depression in this population, we found little evidence for inbreeding avoidance through mate choice. We present a simple model to suggest that variation in inbreeding avoidance behaviors in birds may arise from differences in survival rates: in species with low survival rates, the costs of forfeiting matings to avoid inbreeding may exceed the costs of inbreeding.
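
    The random-mating null model invites a simple permutation construction, sketched below with a synthetic kinship matrix: re-pair mates at random many times and compare the observed mean inbreeding coefficient with the resulting null distribution. The pairing scheme and kinship values are assumptions, not the Mandarte Island data.

        # Permutation test of random mating on a synthetic kinship matrix.
        import numpy as np

        rng = np.random.default_rng(9)
        n = 40                                    # 40 females paired with 40 males
        kinship = rng.beta(1, 15, size=(n, n))    # synthetic pairwise kinship f

        f_obs = kinship[np.arange(n), np.arange(n)].mean()   # observed pairs: i with i

        f_null = np.empty(10_000)
        for b in range(10_000):
            perm = rng.permutation(n)             # random re-pairing of mates
            f_null[b] = kinship[np.arange(n), perm].mean()

        p = np.mean(f_null <= f_obs)              # is observed f lower than random?
        print(f_obs, f_null.mean(), p)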

  6. Electromagnetic wave scattering from a forest or vegetation canopy - Ongoing research at the University of Texas at Arlington

    NASA Technical Reports Server (NTRS)

    Karam, Mostafa A.; Amar, Faouzi; Fung, Adrian K.

    1993-01-01

    The Wave Scattering Research Center at the University of Texas at Arlington has developed a scattering model for forest or vegetation, based on the theory of electromagnetic-wave scattering in random media. The model generalizes the assumptions imposed by earlier models, and compares well with measurements from several forest canopies. This paper gives a description of the model. It also indicates how the model elements are integrated to obtain the scattering characteristics of different forest canopies. The scattering characteristics may be displayed in the form of polarimetric signatures, represented by like- and cross-polarized scattering coefficients, for an elliptically-polarized wave, or in the form of signal-distribution curves. Results illustrating both types of scattering characteristics are given.

  7. Effect of damping on excitability of high-order normal modes. [for a large space telescope spacecraft

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.; Gates, R. M.; Straayer, J. W.

    1975-01-01

    The effect of localized structural damping on the excitability of higher-order large space telescope spacecraft modes is investigated. A preprocessor computer program is developed to incorporate Voigt structural joint damping models in a finite-element dynamic model. A postprocessor computer program is developed to select critical modes for low-frequency attitude control problems and for higher-frequency fine-stabilization problems. The selection is accomplished by ranking the flexible modes based on coefficients for rate gyro, position gyro, and optical sensor, and on image-plane motions due to sinusoidal or random PSD force and torque inputs.

  8. Magnetic Field Line Random Walk in Arbitrarily Stretched Isotropic Turbulence

    NASA Astrophysics Data System (ADS)

    Wongpan, P.; Ruffolo, D.; Matthaeus, W. H.; Rowlands, G.

    2006-12-01

    Many types of space and laboratory plasmas involve turbulent fluctuations with an approximately uniform mean magnetic field B_0, and the field line random walk plays an important role in guiding particle motions. Much of the relevant literature concerns isotropic turbulence, and has mostly been perturbative, i.e., for small fluctuations, or based on numerical simulations for specific conditions. On the other hand, solar wind turbulence is apparently anisotropic, and has been modeled as a sum of idealized two-dimensional and one-dimensional (slab) components, but with the deficiency of containing no oblique wave vectors. In the present work, we address the above issues with non-perturbative analytic calculations of diffusive field line random walks for unpolarized, arbitrarily stretched isotropic turbulence, including the limits of nearly one-dimensional (highly stretched) and nearly two-dimensional (highly squashed) turbulence. We develop implicit analytic formulae for the diffusion coefficients D_x and D_z: two coupled integral equations, in which D_x and D_z appear inside three-dimensional integrals over all k-space, are solved numerically with the aid of Mathematica routines for specific cases. We can vary the parameters B_0 and β, the stretching along z for constant turbulent energy. Furthermore, we obtain analytic closed-form solutions in all extreme cases. We obtain 0.54 < D_z/D_x < 2, indicating an approximately isotropic random walk even for very anisotropic (unpolarized) turbulence, a surprising result. For a given β, the diffusion coefficient vs. B_0 can be described by a Padé approximant. We find quasilinear behavior at high B_0 and percolative behavior at low B_0. Partially supported by a Sritrangthong Scholarship from the Faculty of Science, Mahidol University; the Thailand Research Fund; NASA Grant NNG05GG83G; and Thailand's Commission for Higher Education.
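
    The implicit character of such coupled formulae can be handled with a plain fixed-point iteration. A sketch under toy assumptions; the right-hand sides below are stand-ins, not the paper's k-space integrals:

        import numpy as np

        def solve_coupled_diffusion(rhs_x, rhs_z, d0=(1.0, 1.0),
                                    tol=1e-10, max_iter=1000):
            """Fixed-point iteration for implicit coupled equations
            D_x = rhs_x(D_x, D_z) and D_z = rhs_z(D_x, D_z)."""
            dx, dz = d0
            for _ in range(max_iter):
                dx_new, dz_new = rhs_x(dx, dz), rhs_z(dx, dz)
                if abs(dx_new - dx) < tol and abs(dz_new - dz) < tol:
                    return dx_new, dz_new
                dx, dz = dx_new, dz_new
            raise RuntimeError("fixed-point iteration did not converge")

        # toy closure with the right qualitative shape (illustrative only):
        dx, dz = solve_coupled_diffusion(
            lambda dx, dz: 0.3 / np.sqrt(dx + dz),
            lambda dx, dz: 0.5 / np.sqrt(dx + 2.0 * dz))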

  9. FIBER AND INTEGRAL OPTICS: Mode composition of radiation in graded-index waveguides with random microbending of the axis

    NASA Astrophysics Data System (ADS)

    Valyaev, A. B.; Krivoshlykov, S. G.

    1989-06-01

    It is shown that the problem of investigating the mode composition of a partly coherent radiation beam in a randomly inhomogeneous medium can be reduced to a study of evolution of the energy of individual modes and of the coefficients of correlations between the modes. General expressions are obtained for the coupling coefficients of modes in a parabolic waveguide with a random microbending of the axis and an analysis is made of their evolution as a function of the excitation conditions. An estimate is obtained of the distance in which a steady-state energy distribution between the modes is established. Explicit expressions are obtained for the correlation function in the case when a waveguide is excited by off-axial Gaussian beams or Gauss-Hermite modes.

  10. Numerical modeling of nonlinear modulation of coda wave interferometry in a multiple scattering medium with the presence of a localized micro-cracked zone

    NASA Astrophysics Data System (ADS)

    Chen, Guangzhi; Pageot, Damien; Legland, Jean-Baptiste; Abraham, Odile; Chekroun, Mathieu; Tournat, Vincent

    2018-04-01

    The spectral element method is used to perform a parametric sensitivity study of the nonlinear coda wave interferometry (NCWI) method in a homogeneous sample with localized damage [1]. The influence of a strong pump wave on a localized nonlinear damage zone is modeled as modifications to the elastic properties of an effective damage zone (EDZ), depending on the pump wave amplitude. The local changes of the elastic modulus and the attenuation coefficient have been shown to vary linearly with the excitation amplitude of the pump wave, as in the previous experimental studies of Zhang et al. [2]. In this study, the boundary conditions of the cracks, i.e., clapping effects, are taken into account in the modeling of the damaged zone. The EDZ is then modeled with random cracks of random orientations, and new parametric studies are established to model the pump wave influence with two new parameters: the change of the crack length and the crack density. The numerical results reported constitute another step towards the quantification and forecasting of the nonlinear acoustic response of a cracked material, which proves necessary for quantitative non-destructive evaluation.

  11. Recovering DC coefficients in block-based DCT.

    PubMed

    Uehara, Takeyuki; Safavi-Naini, Reihaneh; Ogunbona, Philip

    2006-11-01

    It is a common approach for JPEG and MPEG encryption systems to provide higher protection for dc coefficients and less protection for ac coefficients. Some authors have employed a cryptographic encryption algorithm for the dc coefficients and left the ac coefficients to techniques based on random permutation lists, which are known to be weak against known-plaintext and chosen-ciphertext attacks. In this paper we show that in block-based DCT it is possible to recover dc coefficients from ac coefficients with reasonable image quality, and thereby show the insecurity of image encryption methods that rely on encrypting dc values with a cryptographic algorithm. The method proposed in this paper combines dc recovery from ac coefficients with the fact that ac coefficients can be recovered using a chosen-ciphertext attack. We demonstrate that a method proposed by Tang to encrypt and decrypt MPEG video can be completely broken.
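
    A toy illustration of the boundary-smoothness idea behind dc recovery (a simplified heuristic in the spirit of the paper, not its exact algorithm): with blocks reconstructed from ac coefficients alone, each block's dc offset is chosen so that its border pixels match an already-recovered neighbour.

        import numpy as np

        def recover_dc(blocks_ac, dc00=0.0):
            """blocks_ac: (R, C, 8, 8) spatial blocks decoded with DC = 0.
            Returns per-block mean offsets estimated by matching the pixel
            rows/columns shared between neighbouring blocks."""
            R, C = blocks_ac.shape[:2]
            dc = np.zeros((R, C))
            dc[0, 0] = dc00        # one block's offset must be known or guessed
            for r in range(R):
                for c in range(C):
                    if r == 0 and c == 0:
                        continue
                    if c > 0:      # match left neighbour's last column
                        left = blocks_ac[r, c - 1][:, -1] + dc[r, c - 1]
                        dc[r, c] = np.mean(left - blocks_ac[r, c][:, 0])
                    else:          # match upper neighbour's last row
                        up = blocks_ac[r - 1, c][-1, :] + dc[r - 1, c]
                        dc[r, c] = np.mean(up - blocks_ac[r, c][0, :])
            return dc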

  12. Variogram methods for texture classification of atherosclerotic plaque ultrasound images

    NASA Astrophysics Data System (ADS)

    Jeromin, Oliver M.; Pattichis, Marios S.; Pattichis, Constantinos; Kyriacou, Efthyvoulos; Nicolaides, Andrew

    2006-03-01

    Stroke is the third leading cause of death in the western world and the major cause of disability in adults. The type and stenosis of extracranial carotid artery disease is often responsible for ischemic strokes, transient ischemic attacks (TIAs) or amaurosis fugax (AF). The identification and grading of stenosis can be done using gray scale ultrasound scans. The appearance of B-scan pictures containing various granular structures makes the use of texture analysis techniques suitable for computer-assisted tissue characterization purposes. The objective of this study is to investigate the usefulness of variogram analysis in the assessment of ultrasound plaque morphology. The variogram estimates the variance of random fields from arbitrary samples in space. We explore stationary random field models based on the variogram, which can be applied in ultrasound plaque imaging, leading to a Computer Aided Diagnosis (CAD) system for the early detection of symptomatic atherosclerotic plaques. Non-parametric tests on the variogram coefficients show that the coefficients from symptomatic and asymptomatic plaques come from distinct distributions. Furthermore, we show significant improvement in class separation when a log point-transformation is applied to the images prior to variogram estimation. Model fitting using least squares is explored for anisotropic variograms along specific directions. Comparative classification results show that variogram coefficients can be used for the early detection of symptomatic cases, and also exhibit the largest class distances between symptomatic and asymptomatic plaque images, as compared to over 60 other texture features used in the literature.
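
    For reference, the empirical directional variogram underlying such features can be estimated in a few lines; the log point-transformation mentioned above would be applied to the image first. A sketch, with the lag geometry as an assumption:

        import numpy as np

        def empirical_variogram(z, max_lag, direction=(0, 1)):
            """Directional empirical variogram of a 2-D image z:
            gamma(h) = 0.5 * mean((z(s) - z(s + h * direction)) ** 2)."""
            dr, dc = direction
            gamma = np.empty(max_lag)
            for h in range(1, max_lag + 1):
                a = z[: z.shape[0] - h * dr, : z.shape[1] - h * dc]
                b = z[h * dr :, h * dc :]
                gamma[h - 1] = 0.5 * np.mean((a - b) ** 2)
            return gamma

        # e.g. horizontal lags on a log point-transformed plaque image:
        # g = empirical_variogram(np.log1p(image.astype(float)), max_lag=20)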

  13. Demonstration of the Application of Composite Load Spectra (CLS) and Probabilistic Structural Analysis (PSAM) Codes to SSME Heat Exchanger Turnaround Vane

    NASA Technical Reports Server (NTRS)

    Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George

    2000-01-01

    This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency- and distance-dependent correlation model that has features to model the decay phenomena along the flow and across the flow, with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS, with an interface to a subset of the Composite Load Spectra (CLS) code, is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, the convection velocity coefficient, the stress concentration factor, structural damping, and the thicknesses of the inner and outer vanes. The need for an appropriate correlation model, in addition to the magnitude of the PSD, is emphasized. The study demonstrates that correlation characteristics, even under random pressure loads, are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternating stress response and drive the fatigue damage for the new design. Since the alternating stress for the redesign is less than the endurance limit of the material, the damage due to high-cycle fatigue is negligible.

  14. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    NASA Astrophysics Data System (ADS)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data, or a combination of them. Among the techniques to compute a precise geoid model, Remove-Compute-Restore (RCR) has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of the different wavelengths, and that adjust these models to a local vertical datum. This research presents a package called GRAVTool, based on MATLAB, to compute local geoid models by the RCR technique, and its application in a study area. The studied area comprises the Federal District of Brazil (~6000 km²), with wavy relief and heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The numerical example shows the local geoid model computed by the GRAVTool package, using 1377 terrestrial gravity observations, SRTM data with 3 arc-second resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ±0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ±0.073 m) of 21 randomly spaced points where the geoid was computed by the geometric leveling technique supported by GNSS positioning. The results were also better than those achieved by the Brazilian official regional geoid model (σ = ±0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).
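
    The RCR pipeline itself reduces to a remove/compute/restore skeleton. A sketch assuming gridded anomaly arrays and a Stokes-integration routine supplied by the caller (all names illustrative):

        def remove_compute_restore(dg_obs, dg_ggm, dg_rtm, N_ggm, N_rtm, stokes):
            """dg_*  : gridded gravity anomalies (observed, global-model and
                       terrain contributions)
            N_ggm  : long-wavelength geoid part from the geopotential model
            N_rtm  : terrain-related (indirect) geoid part
            stokes : callable applying Stokes's integration to residuals"""
            dg_res = dg_obs - dg_ggm - dg_rtm   # remove long/short wavelengths
            N_res = stokes(dg_res)              # compute residual geoid
            return N_ggm + N_res + N_rtm        # restore removed contributions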

  15. Search for function coefficient distribution in traditional Chinese medicine network

    NASA Astrophysics Data System (ADS)

    He, Yue; Zhang, Peipei; Sun, Anzheng; Su, Beibei; He, Da-Ren

    2004-03-01

    We suggest a model for simulating the development of the traditional Chinese medicine system. Suppose there are a certain number of Chinese medicines. Each of them is randomly given a "function coefficient" with a value between 0 and 1. The larger it is, the stronger the medicine's role in solving one health problem and serving as an "emperor" in a prescription formulation; the smaller it is, the stronger its role in harmonizing and/or accessorizing a prescription formulation. At every time step a new medicine is discovered. With a probability P(m), determined according to our statistical investigation results, it can form a new prescription formulation with m-1 other medicines. We assume that the probability for choosing the function coefficients of these m medicines follows a distribution function that is everywhere smooth. A program has been set up to search for this function form so that the simulation results show the best agreement with our statistical data. We believe the resulting function form will be helpful for understanding the real development of the traditional Chinese medicine system.

  16. On a numerical method for solving integro-differential equations with variable coefficients with applications in finance

    NASA Astrophysics Data System (ADS)

    Kudryavtsev, O.; Rodochenko, V.

    2018-03-01

    We propose a new general numerical method aimed at solving integro-differential equations with variable coefficients. The problem under consideration arises in finance, in the context of pricing barrier options in a wide class of stochastic volatility models with jumps. To handle the effect of the correlation between the price and the variance, we use a suitable substitution of processes. Then we construct a Markov-chain approximation for the variance process on small time intervals and apply a maturity randomization technique. The result is a system of boundary problems for integro-differential equations with constant coefficients on the line at each vertex of the chain. We solve the arising problems using a numerical Wiener-Hopf factorization method. The approximate formulae for the factors are efficiently implemented by means of the Fast Fourier Transform. Finally, we use a recurrent procedure that moves backwards in time on the variance tree. We demonstrate the convergence of the method using Monte Carlo simulations and compare our results with those obtained by the Wiener-Hopf method with closed-form expressions for the factors.

  17. Persistent-random-walk approach to anomalous transport of self-propelled particles

    NASA Astrophysics Data System (ADS)

    Sadjadi, Zeinab; Shaebani, M. Reza; Rieger, Heiko; Santen, Ludger

    2015-06-01

    The motion of self-propelled particles is modeled as a persistent random walk. An analytical framework is developed that allows the derivation of exact expressions for the time evolution of arbitrary moments of the persistent walk's displacement. It is shown that the interplay of step length and turning angle distributions and self-propulsion produces various signs of anomalous diffusion at short time scales and asymptotically a normal diffusion behavior with a broad range of diffusion coefficients. The crossover from the anomalous short-time behavior to the asymptotic diffusion regime is studied and the parameter dependencies of the crossover time are discussed. Higher moments of the displacement distribution are calculated and analytical expressions for the time evolution of the skewness and the kurtosis of the distribution are presented.
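
    Both regimes are easy to reproduce numerically. A minimal sketch with turning angles drawn from a von Mises distribution (an assumed example; the paper treats arbitrary step-length and turning-angle distributions):

        import numpy as np

        def persistent_walk_msd(n_steps, n_walkers, step_len=1.0,
                                kappa=4.0, seed=0):
            """2-D persistent random walk: turning angles concentrated at 0
            (von Mises, concentration kappa). Returns the mean squared
            displacement as a function of step number."""
            rng = np.random.default_rng(seed)
            turns = rng.vonmises(0.0, kappa, (n_walkers, n_steps))
            theta = np.cumsum(turns, axis=1)         # heading after each turn
            x = np.cumsum(step_len * np.cos(theta), axis=1)
            y = np.cumsum(step_len * np.sin(theta), axis=1)
            return (x ** 2 + y ** 2).mean(axis=0)

        msd = persistent_walk_msd(2000, 500)
        D_est = msd[-1] / (4 * len(msd))   # asymptotically MSD ~ 4 D t in 2-D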

  18. The ASMEx snow slab experiment: snow microwave radiative transfer (SMRT) model evaluation

    NASA Astrophysics Data System (ADS)

    Sandells, Melody; Löwe, Henning; Picard, Ghislain; Dumont, Marie; Essery, Richard; Floury, Nicolas; Kontu, Anna; Lemmetyinen, Juha; Maslanka, William; Mätzler, Christian; Morin, Samuel; Wiesmann, Andreas

    2017-04-01

    A major uncertainty in snow microwave modelling to date has been the treatment of the snow microstructure. Although observations of microstructural parameters such as the optical grain diameter, specific surface area and correlation length have improved drastically over the last few years, scale factors have been used to derive the parameters needed in microwave emission models from these observations. Previous work has shown that a major difference between electromagnetic models of scattering coefficients is due to the specific snow microstructure models used. The snow microwave radiative transfer model (SMRT) is a new model developed to advance understanding of the role of microstructure and isolate different assumptions in existing microwave models that collectively hinder interpretation of model intercomparison studies. SMRT is implemented in Python and is modular, thus allows switching between different representations in its various components. Here, the role of microstructure is examined with the Improved Born Approximation electromagnetic model. The model is evaluated against scattering and absorption coefficients derived from radiometer measurements of snow slabs taken as part of the Arctic Snow Microstructure Experiment (ASMEx), which took place in Sodankylä, Finland over two seasons. Microtomography observations of slab samples were used to determine parameters for five microstructure models: spherical, exponential, sticky hard sphere, Teubner-Strey and Gaussian random field. SMRT brightness temperature simulations are also compared with radiometric observations of the snow slabs over a reflector plate and an absorber substrate. Agreement between simulations and observations is generally good except for slabs that are highly anisotropic.

  19. UNSTEADY DISPERSION IN RANDOM INTERMITTENT FLOW

    EPA Science Inventory

    The longitudinal dispersion coefficient of a conservative tracer was calculated from flow tests in a dead-end pipe loop system. Flow conditions for these tests ranged from laminar to transitional flow, and from steady to intermittent and random. Two static mixers linked in series...

  20. Evaluation of seasonal and spatial variations of lumped water balance model sensitivity to precipitation data errors

    NASA Astrophysics Data System (ADS)

    Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.

    2006-06-01

    Sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single catchment or a few catchments. A more important issue, namely how a model's response to input data errors changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. A monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of the mean monthly precipitation was added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law with zero mean and a constant standard deviation taken as 5, 10, 15, 20, or 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other things, on the type of error, the magnitude of the error, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to random errors than to systematic errors. Catchments with smaller runoff coefficients were more influenced by input data errors than catchments with higher values. Dry months were more sensitive to precipitation errors than wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
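
    Both corruption schemes are straightforward to reproduce. A sketch of one corrupted-scenario generator under the stated assumptions (names illustrative):

        import numpy as np

        def corrupt_precipitation(p, systematic=0.10, random_cv=0.0, seed=None):
            """p: array of monthly precipitation.
            systematic: fraction of the mean monthly value added everywhere.
            random_cv : std of zero-mean Gaussian noise as a fraction of the
            monthly standard deviation (one Monte Carlo realization)."""
            rng = np.random.default_rng(seed)
            out = p + systematic * p.mean()
            if random_cv > 0:
                out = out + rng.normal(0.0, random_cv * p.std(), size=p.shape)
            return np.maximum(out, 0.0)   # precipitation cannot be negative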

  1. Mixed-Poisson Point Process with Partially-Observed Covariates: Ecological Momentary Assessment of Smoking.

    PubMed

    Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul

    2012-01-01

    Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.

  2. Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S. V.

    2016-12-01

    Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long period, correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
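
    The core change is replacing an (implicitly diagonal) weight matrix with a full observation covariance in the normal equations. A generic generalized-least-squares sketch, not the CSR processing chain itself:

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        def gls_estimate(A, y, C):
            """beta = (A^T C^-1 A)^-1 A^T C^-1 y for a full covariance C;
            the formal covariance of beta is what colored-noise modeling
            changes most."""
            cf = cho_factor(C)                  # C must be symmetric pos.-def.
            N = A.T @ cho_solve(cf, A)          # normal matrix
            beta = np.linalg.solve(N, A.T @ cho_solve(cf, y))
            return beta, np.linalg.inv(N)       # estimate, formal covariance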

  3. Criterion and Concurrent Validity of the activPAL™ Professional Physical Activity Monitor in Adolescent Females

    PubMed Central

    Dowd, Kieran P.; Harrington, Deirdre M.; Donnelly, Alan E.

    2012-01-01

    Background The activPAL has been identified as an accurate and reliable measure of sedentary behaviour. However, only limited information is available on the accuracy of the activPAL activity count function as a measure of physical activity, while no unit calibration of the activPAL has been completed to date. This study aimed to investigate the criterion validity of the activPAL, examine the concurrent validity of the activPAL, and perform and validate a value calibration of the activPAL in an adolescent female population. The performance of the activPAL in estimating posture was also compared with sedentary thresholds used with the ActiGraph accelerometer. Methodologies Thirty adolescent females (15 developmental; 15 cross-validation) aged 15–18 years performed 5 activities while wearing the activPAL, ActiGraph GT3X, and the Cosmed K4B2. A random coefficient statistics model examined the relationship between metabolic equivalent (MET) values and activPAL counts. Receiver operating characteristic analysis was used to determine activity thresholds and for cross-validation. The random coefficient statistics model showed a concordance correlation coefficient of 0.93 (standard error of the estimate = 1.13). An optimal moderate threshold of 2997 was determined using mixed regression, while an optimal vigorous threshold of 8229 was determined using receiver operating statistics. The activPAL count function demonstrated very high concurrent validity (r = 0.96, p<0.01) with the ActiGraph count function. Levels of agreement for sitting, standing, and stepping between direct observation and the activPAL and ActiGraph were 100%, 98.1%, 99.2% and 100%, 0%, 100%, respectively. Conclusions These findings suggest that the activPAL is a valid, objective measurement tool that can be used for both the measurement of physical activity and sedentary behaviours in an adolescent female population. PMID:23094069

  4. Plum pudding random medium model of biological tissue toward remote microscopy from spectroscopic light scattering

    PubMed Central

    Xu, Min

    2017-01-01

    Biological tissue has a complex structure and exhibits rich spectroscopic behavior. There has been no tissue model until now that has been able to account for the observed spectroscopy of tissue light scattering and its anisotropy. Here we present, for the first time, a plum pudding random medium (PPRM) model for biological tissue which succinctly describes tissue as a superposition of distinctive scattering structures (plum) embedded inside a fractal continuous medium of background refractive index fluctuation (pudding). PPRM faithfully reproduces the wavelength dependence of tissue light scattering and attributes the “anomalous” trend in the anisotropy to the plum and the power-law dependence of the reduced scattering coefficient to the fractal scattering pudding. Most importantly, PPRM opens up a novel venue of quantifying the tissue architecture and microscopic structures on average from macroscopic probing of the bulk with scattered light alone without tissue excision. We demonstrate this potential by visualizing the fine microscopic structural alterations in breast tissue (adipose, glandular, fibrocystic, fibroadenoma, and ductal carcinoma) deduced from noncontact spectroscopic measurement. PMID:28663913

  5. Electro-Osmosis and Water Uptake in Polymer Electrolytes in Equilibrium with Water Vapor at Low Temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallagher, K. G.; Pivovar, B. S.; Fuller, T. F.

    2009-01-01

    Water uptake and electro-osmosis are investigated to improve the understanding and aid the modeling of water transport in proton-exchange membrane fuel cells (PEMFCs) below 0 °C. Measurements of water sorption isotherms show a significant reduction in the water capacity of polymer electrolytes below 0 °C. This reduced water content is attributed to the lower vapor pressure of ice compared to supercooled liquid water. At -25 °C, 1100 equivalent weight Nafion in equilibrium with vapor over ice has 8 moles of water per sulfonic acid group. Measurements of the electro-osmotic drag coefficient for Nafion and both random and multiblock copolymer sulfonated poly(arylene ether sulfone) (BPSH) chemistries are reported for vapor-equilibrated samples below 0 °C. The electro-osmotic drag coefficient of BPSH chemistries is found to be ~0.4, and that of Nafion is ~1. No significant temperature effect on the drag coefficient is found. The implication of an electro-osmotic drag coefficient less than unity is discussed in terms of proton conduction mechanisms. Simulations of the ohmically limited current below 0 °C show that a reduced water uptake below 0 °C results in a significant decrease in PEMFC performance.

  6. Polynomials with Restricted Coefficients and Their Applications

    DTIC Science & Technology

    1987-01-01

    By multiplying sums of exponentials of quadratics by their conjugates, such sums are reduced to exponentials of linears (geometric sums), and the same algebraic manipulations yield estimates for polynomials with restricted coefficients. These random polynomials represent the deviation in frequency response of a linear, equispaced antenna array caused by coefficient errors.

  7. Local random configuration-tree theory for string repetition and facilitated dynamics of glass

    NASA Astrophysics Data System (ADS)

    Lam, Chi-Hang

    2018-02-01

    We derive a microscopic theory of glassy dynamics based on the transport of voids by micro-string motions, each of which involves a line of particles hopping simultaneously, each displacing the next. Disorder is modeled by a random energy landscape quenched in the configuration space of distinguishable particles, but transient in the physical space, as expected for glassy fluids. We study the evolution of local regions with m coupled voids. At a low temperature, energetically accessible local particle configurations can be organized into a random tree with nodes and edges denoting configurations and micro-string propagations respectively. Such trees defined in the configuration space naturally describe systems defined in two- or three-dimensional physical space. A micro-string propagation initiated by a void can facilitate similar motions by other voids via perturbing the random energy landscape, realizing path interactions between voids, or equivalently string interactions. We obtain explicit expressions for the particle diffusion coefficient and a particle return probability. Under our approximation, as temperature decreases, random trees of energetically accessible configurations exhibit a sequence of percolation transitions in the configuration space, with local regions containing fewer coupled voids entering the non-percolating immobile phase first. Dynamics is dominated by coupled voids of an optimal group size, which increases as temperature decreases. Comparison with a distinguishable-particle lattice model (DPLM) of glass shows very good quantitative agreement using only two adjustable parameters related to typical energy fluctuations and the interaction range of the micro-strings.

  8. Impact of observational incompleteness on the structural properties of protein interaction networks

    NASA Astrophysics Data System (ADS)

    Kuhnt, Mathias; Glauche, Ingmar; Greiner, Martin

    2007-01-01

    The observed structure of protein interaction networks is corrupted by many false positive/negative links. This observational incompleteness is abstracted as random link removal and a specific, experimentally motivated (spoke) link rearrangement. Their impact on the structural properties of gene-duplication-and-mutation network models is studied. For the degree distribution a curve collapse is found, showing no sensitive dependence on the link removal/rearrangement strengths and disallowing a quantitative extraction of model parameters. The spoke link rearrangement process moves other structural observables, like degree correlations, the clustering coefficient and motif frequencies, closer to their counterparts extracted from the yeast data. This underlines the importance of taking a precise modeling of the observational incompleteness into account when network structure models are to be quantitatively compared to data.

  9. Extraction of benzene and cyclohexane using [BMIM][N(CN)2] and their equilibrium modeling

    NASA Astrophysics Data System (ADS)

    Ismail, Marhaina; Bustam, M. Azmi; Man, Zakaria

    2017-12-01

    The separation of aromatic compounds from aliphatic mixtures is one of the essential industrial processes for an economically green process. In order to determine the separation efficiency of an ionic liquid (IL) as a solvent, the ternary diagram for the liquid-liquid extraction (LLE) of benzene and cyclohexane with 1-butyl-3-methylimidazolium dicyanamide [BMIM][N(CN)2] was studied at T = 298.15 K and atmospheric pressure. The solute distribution coefficient and solvent selectivity derived from the equilibrium data were used to evaluate whether the selected ionic liquid can be considered a potential solvent for the separation of benzene from cyclohexane. The experimental tie-line data were correlated using the non-random two-liquid (NRTL) and Margules models. It was found that the solute distribution coefficient is 0.4430-0.0776 and the selectivity of [BMIM][N(CN)2] for benzene is 53.6-13.9. The ternary diagram showed that the selected IL can perform the separation of benzene and cyclohexane, as it has extractive capacity and selectivity. Therefore, [BMIM][N(CN)2] can be considered a potential extracting solvent for the LLE of benzene and cyclohexane.
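
    The two figures of merit reduce to simple ratios of tie-line mole fractions; a minimal helper (variable names are illustrative):

        def distribution_coefficient(x_extract, x_raffinate):
            """Solute distribution coefficient D = x(IL phase) / x(other phase)."""
            return x_extract / x_raffinate

        def selectivity(d_benzene, d_cyclohexane):
            """Solvent selectivity S = D_benzene / D_cyclohexane; S > 1
            indicates the IL preferentially extracts the aromatic."""
            return d_benzene / d_cyclohexane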

  10. ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL

    PubMed Central

    Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui

    2013-01-01

    We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can be also proved by our method. However, the presented oracle inequalities are sharper since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so that the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities. PMID:24086091

  11. ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL.

    PubMed

    Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui

    2013-06-01

    We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can be also proved by our method. However, the presented oracle inequalities are sharper since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so that the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities.

  12. Stochastic modeling of phosphorus transport in the Three Gorges Reservoir by incorporating variability associated with the phosphorus partition coefficient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Fang, Hongwei; Xu, Xingya

    Phosphorus (P) fate and transport plays a crucial role in the ecology of rivers and reservoirs in which eutrophication is limited by P. A key uncertainty in models used to help manage P in such systems is the partitioning of P to suspended and bed sediments. By analyzing data from field and laboratory experiments, we stochastically characterize the variability of the partition coefficient (Kd) and derive spatio-temporal solutions for P transport in the Three Gorges Reservoir (TGR). We formulate a set of stochastic partial differential equations (SPDEs) to simulate P transport by randomly sampling Kd from the measured distributions, to obtain statistical descriptions of the P concentration and retention in the TGR. The correspondence between predicted and observed P concentrations and P retention in the TGR, combined with the ability to effectively characterize uncertainty, suggests that a model that incorporates the observed variability can better describe P dynamics and more effectively serve as a tool for P management in the system. This study highlights the importance of considering parametric uncertainty in estimating uncertainty/variability associated with simulated P transport.

  13. Stochastic modeling of phosphorus transport in the Three Gorges Reservoir by incorporating variability associated with the phosphorus partition coefficient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Fang, Hongwei; Xu, Xingya

    Phosphorus (P) fate and transport plays a crucial role in the ecology of rivers and reservoirs in which eutrophication is limited by P. A key uncertainty in models used to help manage P in such systems is the partitioning of P to suspended and bed sediments. By analyzing data from field and laboratory experiments, we stochastically characterize the variability of the partition coefficient (Kd) and derive spatio-temporal solutions for P transport in the Three Gorges Reservoir (TGR). Here, we formulate a set of stochastic partial differential equations (SPDEs) to simulate P transport by randomly sampling Kd from the measured distributions, to obtain statistical descriptions of the P concentration and retention in the TGR. Furthermore, the correspondence between predicted and observed P concentrations and P retention in the TGR, combined with the ability to effectively characterize uncertainty, suggests that a model that incorporates the observed variability can better describe P dynamics and more effectively serve as a tool for P management in the system. Our study highlights the importance of considering parametric uncertainty in estimating uncertainty/variability associated with simulated P transport.

  14. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
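
    The projection step is easy to make concrete: under uniform scalar quantization each partition cell is an interval around the decoded reconstruction point, and projecting is a clip (a sketch; vector-quantization cells would require a nearest-cell search instead):

        import numpy as np

        def project_to_cells(coeffs, reconstruction, q_step):
            """Project DCT coefficients of the current image estimate back
            onto the quantization cells implied by the compressed data:
            each cell is [c - q/2, c + q/2] around the decoded point c."""
            return np.clip(coeffs,
                           reconstruction - q_step / 2.0,
                           reconstruction + q_step / 2.0)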

  15. Stochastic modeling of phosphorus transport in the Three Gorges Reservoir by incorporating variability associated with the phosphorus partition coefficient

    DOE PAGES

    Huang, Lei; Fang, Hongwei; Xu, Xingya; ...

    2017-08-01

    Phosphorus (P) fate and transport plays a crucial role in the ecology of rivers and reservoirs in which eutrophication is limited by P. A key uncertainty in models used to help manage P in such systems is the partitioning of P to suspended and bed sediments. By analyzing data from field and laboratory experiments, we stochastically characterize the variability of the partition coefficient (Kd) and derive spatio-temporal solutions for P transport in the Three Gorges Reservoir (TGR). Here, we formulate a set of stochastic partial differential equations (SPDEs) to simulate P transport by randomly sampling Kd from the measured distributions, to obtain statistical descriptions of the P concentration and retention in the TGR. Furthermore, the correspondence between predicted and observed P concentrations and P retention in the TGR, combined with the ability to effectively characterize uncertainty, suggests that a model that incorporates the observed variability can better describe P dynamics and more effectively serve as a tool for P management in the system. Our study highlights the importance of considering parametric uncertainty in estimating uncertainty/variability associated with simulated P transport.

  16. Response to a periodic stimulus in a perfect integrate-and-fire neuron model driven by colored noise.

    PubMed

    Mankin, Romi; Rekker, Astrid

    2016-12-01

    The output interspike interval statistics of a stochastic perfect integrate-and-fire neuron model driven by an additive exogenous periodic stimulus is considered. The effect of temporally correlated random activity of synaptic inputs is modeled by an additive symmetric dichotomous noise. Using a first-passage-time formulation, exact expressions for the output interspike interval density and for the serial correlation coefficient are derived in the nonsteady regime, and their dependence on input parameters (e.g., the noise correlation time and amplitude as well as the frequency of an input current) is analyzed. It is shown that an interplay of a periodic forcing and colored noise can cause a variety of nonequilibrium cooperation effects, such as sign reversals of the interspike interval correlations versus noise-switching rate as well as versus the frequency of periodic forcing, a power-law-like decay of oscillations of the serial correlation coefficients in the long-lag limit, amplification of the output signal modulation in the instantaneous firing rate of the neural response, etc. The features of spike statistics in the limits of slow and fast noises are also discussed.
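
    A minimal simulation in the spirit of the model described above (Euler stepping, parameters illustrative); it produces interspike intervals and the lag-1 serial correlation coefficient:

        import numpy as np

        def pif_isi_stats(T=2000.0, dt=1e-3, mu=1.0, a=0.5, nu=1.0,
                          eps=0.3, omega=2 * np.pi * 0.5, theta=1.0, seed=0):
            """Perfect integrate-and-fire: dv/dt = mu + eta(t) + eps*cos(omega*t),
            with eta a +/-a dichotomous Markov noise of switching rate nu."""
            rng = np.random.default_rng(seed)
            v, eta, t_last, isis = 0.0, a, 0.0, []
            for i in range(int(T / dt)):
                t = i * dt
                if rng.random() < nu * dt:       # dichotomous switching
                    eta = -eta
                v += dt * (mu + eta + eps * np.cos(omega * t))
                if v >= theta:                   # spike and reset
                    isis.append(t - t_last)
                    t_last, v = t, 0.0
            isis = np.array(isis)
            x = isis - isis.mean()
            rho1 = (x[:-1] * x[1:]).mean() / isis.var()
            return isis, rho1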

  17. Response to a periodic stimulus in a perfect integrate-and-fire neuron model driven by colored noise

    NASA Astrophysics Data System (ADS)

    Mankin, Romi; Rekker, Astrid

    2016-12-01

    The output interspike interval statistics of a stochastic perfect integrate-and-fire neuron model driven by an additive exogenous periodic stimulus is considered. The effect of temporally correlated random activity of synaptic inputs is modeled by an additive symmetric dichotomous noise. Using a first-passage-time formulation, exact expressions for the output interspike interval density and for the serial correlation coefficient are derived in the nonsteady regime, and their dependence on input parameters (e.g., the noise correlation time and amplitude as well as the frequency of an input current) is analyzed. It is shown that an interplay of a periodic forcing and colored noise can cause a variety of nonequilibrium cooperation effects, such as sign reversals of the interspike interval correlations versus noise-switching rate as well as versus the frequency of periodic forcing, a power-law-like decay of oscillations of the serial correlation coefficients in the long-lag limit, amplification of the output signal modulation in the instantaneous firing rate of the neural response, etc. The features of spike statistics in the limits of slow and fast noises are also discussed.

  18. A Parameterized Inversion Model for Soil Moisture and Biomass from Polarimetric Backscattering Coefficients

    NASA Technical Reports Server (NTRS)

    Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak

    2012-01-01

    A semi-empirical algorithm for the retrieval of soil moisture, root mean square (RMS) height and biomass from polarimetric SAR data is explained and analyzed in this paper. The algorithm is a simplification of the distorted Born model. It takes into account the physical scattering phenomenon and has three major components: volume, double-bounce and surface. This simplified model uses the three backscattering coefficients (σHH, σHV and σVV) at low frequency (P-band). The inversion process uses the Levenberg-Marquardt non-linear least-squares method to estimate the structural parameters. The estimation process is explained in full in this paper, from initialization of the unknowns to retrievals. A sensitivity analysis is also performed in which the initial values in the inversion process vary randomly. The results show that the inversion process is not very sensitive to initial values, and a major part of the retrievals has a root-mean-square error lower than 5% for soil moisture, 24 Mg/ha for biomass and 0.49 cm for roughness, considering a soil moisture of 40%, roughness equal to 3 cm, and biomass varying from 0 to 500 Mg/ha with a mean of 161 Mg/ha.

  19. Tuning the overlap and the cross-layer correlations in two-layer networks: Application to a susceptible-infectious-recovered model with awareness dissemination

    NASA Astrophysics Data System (ADS)

    Juher, David; Saldaña, Joan

    2018-03-01

    We study the properties of the potential overlap between two networks A, B sharing the same set of N nodes (a two-layer network) whose respective degree distributions p_A(k), p_B(k) are given. Defining the overlap coefficient α as the Jaccard index, we prove that α is very close to 0 when A and B are random and independently generated. We derive an upper bound α_M for the maximum overlap coefficient permitted in terms of p_A(k), p_B(k), and N. Then we present an algorithm based on cross rewiring of links to obtain a two-layer network with any prescribed α inside the range (0, α_M). A refined version of the algorithm allows us to minimize the cross-layer correlations that unavoidably appear for values of α beyond a critical overlap α_c < α_M. Finally, we present a very simple example of a susceptible-infectious-recovered epidemic model with information dissemination and use the algorithms to determine the impact of the overlap on the final outbreak size predicted by the model.
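
    The overlap coefficient itself is one line once the two edge sets are in hand. A minimal sketch with undirected edges stored as pairs:

        def overlap_coefficient(edges_a, edges_b):
            """Jaccard overlap of two layers on the same node set:
            alpha = |E_A & E_B| / |E_A | E_B| (edge orientation ignored)."""
            A = {frozenset(e) for e in edges_a}
            B = {frozenset(e) for e in edges_b}
            return len(A & B) / len(A | B)

        # overlap_coefficient([(1, 2), (2, 3)], [(2, 1), (3, 4)])  ->  1/3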

  20. Reliability Sensitivity Analysis and Design Optimization of Composite Structures Based on Response Surface Methodology

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2003-01-01

    This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.

  1. Long-range correlations and charge transport properties of DNA sequences

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-liang; Ren, Yi; Xie, Qiong-tao; Deng, Chao-sheng; Xu, Hui

    2010-04-01

    By using Hurst's analysis and a transfer-matrix approach, the rescaled range functions and Hurst exponents of human chromosome 22 and enterobacteria phage lambda DNA sequences are investigated, and the transmission coefficients, Landauer resistances and Lyapunov coefficients of finite segments based on the above genomic DNA sequences are calculated. In a comparison with quasiperiodic and random artificial DNA sequences, we find that λ-DNA exhibits anticorrelation behavior, characterized by a Hurst exponent below 0.5.
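
    Rescaled-range analysis itself is compact. A sketch assuming the sequence has already been mapped to a numeric series (e.g., purine/pyrimidine -> +/-1, an assumed convention):

        import numpy as np

        def rescaled_range(x):
            """R/S statistic of one window: range of the cumulative
            mean-adjusted series divided by its standard deviation."""
            y = np.cumsum(x - x.mean())
            return (y.max() - y.min()) / x.std()

        def hurst_rs(x, window_sizes):
            """Hurst exponent: slope of log E[R/S] vs log window size."""
            rs = [np.mean([rescaled_range(x[i:i + w])
                           for i in range(0, len(x) - w + 1, w)])
                  for w in window_sizes]
            H, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
            return H   # H < 0.5 anticorrelated, = 0.5 random, > 0.5 persistent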

  2. A novel attack method about double-random-phase-encoding-based image hiding method

    NASA Astrophysics Data System (ADS)

    Xu, Hongsheng; Xiao, Zhijun; Zhu, Xianchen

    2018-03-01

    By using optical image processing techniques, a novel text encryption and hiding method based on the double-random-phase-encoding technique is proposed in this paper. First, the secret message is transformed into a two-dimensional array. The higher bits of the elements in the array are filled with the bit stream of the secret text, while the lower bits store specific values. Then, the transformed array is encoded by the double random phase encoding technique. Last, the encoded array is embedded in a public host image to obtain the image embedded with hidden text. The performance of the proposed technique is tested via analytical modeling and test data streams. Experimental results show that the secret text can be recovered accurately or almost accurately, while maintaining the quality of the host image embedded with the hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient.

  3. Unimodular lattice triangulations as small-world and scale-free random graphs

    NASA Astrophysics Data System (ADS)

    Krüger, B.; Schmidt, E. M.; Mecke, K.

    2015-02-01

    Real-world networks, e.g., the social relations or world-wide-web graphs, exhibit both small-world and scale-free behaviour. We interpret lattice triangulations as planar graphs by identifying triangulation vertices with graph nodes and one-dimensional simplices with edges. Since these triangulations are ergodic with respect to a certain Pachner flip, applying different Monte Carlo simulations enables us to calculate average properties of random triangulations, as well as canonical ensemble averages, using an energy functional that is approximately the variance of the degree distribution. All considered triangulations have clustering coefficients comparable with real-world graphs; for the canonical ensemble there are inverse temperatures with small shortest path length independent of system size. Tuning the inverse temperature to a quasi-critical value leads to an indication of scale-free behaviour for degrees k ≥ 5. Using triangulations as a random graph model can improve the understanding of real-world networks, especially if the actual distance of the embedded nodes becomes important.

  4. Fault Detection of Aircraft System with Random Forest Algorithm and Similarity Measure

    PubMed Central

    Park, Wookje; Jung, Sikhang

    2014-01-01

    A fault detection algorithm was developed using a similarity measure and the random forest algorithm. The algorithm was applied to an unmanned aerial vehicle (UAV) that we prepared. The similarity measure was designed with the help of distance information, and its usefulness was verified by proof. Fault decisions were made by calculating a weighted similarity measure. Twelve available coefficients from the healthy- and faulty-status data groups were used to make the decision. The similarity measure weights were obtained through the random forest algorithm (RFA), which provides data priorities. To obtain a fast decision response, a limited number of coefficients was also considered. The relation between detection rate and the amount of feature data was analyzed and illustrated; the useful amount of data was determined by repeated trials of the similarity calculation. PMID:25057508
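
    A sketch of the weighting idea using scikit-learn's random-forest feature importances as the data priorities (the similarity function and threshold are illustrative assumptions, not the paper's exact measure):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def weighted_similarity_detector(X, y, x_healthy_ref):
            """X: (n_samples, n_coeffs) coefficients, y: 0 healthy / 1 faulty.
            Returns a decision function weighting per-coefficient similarity
            to a healthy reference by forest importances."""
            rf = RandomForestClassifier(n_estimators=200, random_state=0)
            w = rf.fit(X, y).feature_importances_          # sums to 1
            def decide(x, threshold=0.8):
                s = 1.0 / (1.0 + np.abs(x - x_healthy_ref))  # toy similarity
                return "fault" if np.dot(w, s) < threshold else "healthy"
            return decide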

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Huan; Baker, Nathan A.; Li, Xiantao

    We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grain variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
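
    For the simplest one-pole rational approximation, K(t) = (c/tau) * exp(-t/tau), the memoryless embedding can be written down directly. A minimal free-particle sketch consistent with the fluctuation-dissipation theorem (parameters illustrative):

        import numpy as np

        def gle_exponential_kernel(n_steps, dt, c=1.0, tau=0.5,
                                   kT=1.0, m=1.0, seed=0):
            """Extended Markovian system equivalent to the GLE with an
            exponential kernel: one auxiliary variable z carries both the
            friction memory and the colored noise:
                m dv = z dt
                dz   = -(z/tau + (c/tau) v) dt + (sqrt(2 kT c)/tau) dW."""
            rng = np.random.default_rng(seed)
            v = np.zeros(n_steps)
            z = 0.0
            for i in range(1, n_steps):
                v[i] = v[i - 1] + dt * z / m
                z += dt * (-z / tau - (c / tau) * v[i - 1]) \
                     + np.sqrt(2 * kT * c * dt) / tau * rng.standard_normal()
            return v   # long-time variance of v should approach kT/m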

  6. [Application of wavelet transform-radial basis function neural network in NIRS for determination of rifampicin and isoniazide tablets].

    PubMed

    Lu, Jia-hui; Zhang, Yi-bo; Zhang, Zhuo-yong; Meng, Qing-fan; Guo, Wei-liang; Teng, Li-rong

    2008-06-01

    A calibration model (WT-RBFNN) combining the wavelet transform (WT) and a radial basis function neural network (RBFNN) was proposed for the synchronous and rapid determination of rifampicin and isoniazide in rifampicin and isoniazide tablets by near infrared reflectance spectroscopy (NIRS). The approximation coefficients were used as input data to the RBFNN. The network parameters, including the number of hidden layer neurons and the spread constant (SC), were investigated. The WT-RBFNN model compressed the original spectral data, removed noise and background interference, and reduced randomness, so its predictive capability was well optimized. The root mean square errors of prediction (RMSEP) for the determination of rifampicin and isoniazide obtained from the optimum WT-RBFNN model are 0.00639 and 0.00587, and the root mean square errors of cross-validation (RMSECV) are 0.00604 and 0.00457, respectively, which are superior to those obtained by the optimum RBFNN and PLS models. The regression coefficients (R) between NIRS-predicted values and RP-HPLC values for rifampicin and isoniazide are 0.99522 and 0.99392, respectively, and the relative error is lower than 2.300%. It was verified that the WT-RBFNN model is a suitable approach to dealing with NIRS data. The proposed WT-RBFNN model is convenient, rapid, and pollution-free for the determination of rifampicin and isoniazide tablets.

  7. Homogenization Issues in the Combustion of Heterogeneous Solid Propellants

    NASA Technical Reports Server (NTRS)

    Chen, M.; Buckmaster, J.; Jackson, T. L.; Massa, L.

    2002-01-01

    We examine random packs of discs or spheres, models for ammonium-perchlorate-in-binder propellants, and discuss their average properties. An analytical strategy is described for calculating the mean or effective heat conduction coefficient in terms of the heat conduction coefficients of the individual components, and the results are verified by comparison with those of direct numerical simulations (dns) for both 2-D (disc) and 3-D (sphere) packs across which a temperature difference is applied. Similarly, when the surface regression speed of each component is related to the surface temperature via a simple Arrhenius law, an analytical strategy is developed for calculating an effective Arrhenius law for the combination, and these results are verified using dns in which a uniform heat flux is applied to the pack surface, causing it to regress. These results are needed for homogenization strategies necessary for fully integrated 2-D or 3-D simulations of heterogeneous propellant combustion.

  8. Multilevel Dynamic Generalized Structured Component Analysis for Brain Connectivity Analysis in Functional Neuroimaging Data.

    PubMed

    Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S

    2016-06-01

    We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data of multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates the nested structure in time series data. Explicitly taking the nested structure into account, the proposed method allows investigating subject-wise variability of the loadings and path coefficients by looking at the variance estimates of the corresponding random effects, as well as fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to multi-subject functional neuroimaging data for brain connectivity analysis, where time series measurements are nested within subjects.

  9. Microwave inversion of leaf area and inclination angle distributions from backscattered data

    NASA Technical Reports Server (NTRS)

    Lang, R. H.; Saleh, H. A.

    1985-01-01

    The backscattering coefficient from a slab of thin randomly oriented dielectric disks over a flat lossy ground is used to reconstruct the inclination angle and area distributions of the disks. The disks are employed to model a leafy agricultural crop, such as soybeans, in the L-band microwave region of the spectrum. The distorted Born approximation, along with a thin disk approximation, is used to obtain a relationship between the horizontally co-polarized backscattering coefficient and the joint probability density of disk inclination angle and disk radius. Assuming large skin depth reduces the relationship to a linear Fredholm integral equation of the first kind. Due to the ill-posed nature of this equation, a Phillips-Twomey regularization method with a second difference smoothing condition is used to find the inversion. Results are obtained in the presence of 1 and 10 percent noise for both leaf inclination angle and leaf radius densities.
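
    The Phillips-Twomey scheme mentioned above is, in modern terms, Tikhonov regularization with a second-difference (curvature) penalty. A self-contained numerical sketch on a generic smooth kernel; the kernel, grid, noise level, and regularization weight are illustrative assumptions, not the paper's scattering kernel.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 80
    x = np.linspace(0.0, 1.0, n)
    # discretized first-kind Fredholm operator g = K f (generic smoothing kernel)
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02) * (x[1] - x[0])
    f_true = np.exp(-((x - 0.4) ** 2) / 0.01)          # "density" to recover
    g_noisy = K @ f_true + 0.01 * (K @ f_true).max() * rng.normal(size=n)

    # second-difference operator, shape (n-2, n): penalizes curvature of f
    D2 = np.diff(np.eye(n), n=2, axis=0)

    lam = 1e-4   # chosen by hand here; in practice via discrepancy principle/L-curve
    f_hat = np.linalg.solve(K.T @ K + lam * D2.T @ D2, K.T @ g_noisy)
    print("relative error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
    ```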

  10. Diffusion of water in the endosperm tissue of wheat grains as studied by pulsed field gradient nuclear magnetic resonance.

    PubMed

    Callaghan, P T; Jolley, K W; Lelievre, J

    1979-10-01

    Pulsed field gradient nuclear magnetic resonance has been used to measure water self-diffusion coefficients in the endosperm tissue of wheat grains as a function of the tissue water content. A model that confines the water molecules to a randomly oriented array of capillaries with both transverse dimensions less than 100 nm has been used to fit the data and give a unique diffusion coefficient at each water content. The diffusion rates vary from 1.8 × 10^-10 m^2 s^-1 at the lowest to 1.2 × 10^-9 m^2 s^-1 at the highest moisture content. This variation can be explained in terms of an increase in water film thickness from approximately 0.5 to approximately 2.5 nm over the moisture range investigated (200-360 mg g^-1).

  11. Equity venture capital platform model based on complex network

    NASA Astrophysics Data System (ADS)

    Guo, Dongwei; Zhang, Lanshu; Liu, Miao

    2018-05-01

    This paper uses a small-world network and a random network to simulate the relationships among investors, and constructs a network model of an equity venture capital platform to explore the impact of the fraud rate and the bankruptcy rate on the robustness of the model, while observing the impact of the average path length and the average clustering coefficient of the investor relationship network on the income of the model. The study found that when the fraud rate or the bankruptcy rate exceeds a certain threshold, the network collapses; the bankruptcy rate has a great influence on the income of the platform; a risk premium exists, and the average return is better under a certain range of bankruptcy risk; and the structure of the investor relationship network has no effect on the income of the investment model.
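
    A sketch of the two investor-network topologies named above and the two structural statistics studied (average path length, average clustering coefficient), using networkx. The network size and wiring parameters are illustrative assumptions.

    ```python
    import networkx as nx

    n = 500
    small_world = nx.watts_strogatz_graph(n, k=6, p=0.1, seed=42)   # ring + rewiring
    random_net = nx.gnp_random_graph(n, p=6 / (n - 1), seed=42)     # same mean degree

    for name, g in [("small-world", small_world), ("random", random_net)]:
        # path length is only defined on a connected graph: use the giant component
        giant = g.subgraph(max(nx.connected_components(g), key=len))
        print(name,
              "avg path length:", round(nx.average_shortest_path_length(giant), 2),
              "avg clustering:", round(nx.average_clustering(g), 3))
    ```

    The small-world graph typically shows a comparably short path length but a much higher clustering coefficient than the random graph with the same mean degree, which is the structural contrast the simulation exploits.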

  12. Pressure broadening and pressure shift of diatomic iodine at 675 nm

    NASA Astrophysics Data System (ADS)

    Wolf, Erich N.

    Doppler-limited, steady-state, linear absorption spectra of 127I2 (diatomic iodine) near 675 nm were recorded with an internally-referenced wavelength modulation spectrometer, built around a free-running diode laser using phase-sensitive detection, and capable of exceeding the signal-to-noise limit imposed by the 12-bit data acquisition system. Observed I2 lines were accounted for by published spectroscopic constants. Pressure broadening and pressure shift coefficients were determined respectively from the line-widths and line-center shifts as a function of buffer gas pressure, which were determined from nonlinear regression analysis of observed line shapes against a Gaussian-Lorentzian convolution line shape model. This model included a linear superposition of the I2 hyperfine structure based on changes in the nuclear electric quadrupole coupling constant. Room temperature (292 K) values of these coefficients were determined for six unblended I2 lines in the region 14,817.95 to 14,819.45 cm^-1 for each of the following buffer gases: the atoms He, Ne, Ar, Kr, and Xe; and the molecules H2, D2, N2, CO2, N2O, air, and H2O. These coefficients were also determined at one additional temperature (388 K) for He and CO2, and at two additional temperatures (348 and 388 K) for Ar. Elastic collision cross-sections were determined for all pressure broadening coefficients in this region. Room temperature values of these coefficients were also determined for several low-J I2 lines in the region 14,946.17 to 14,850.29 cm^-1 for Ar. A line shape model, obtained from a first-order perturbation solution of the time-dependent Schrödinger equation for randomly occurring interactions between a two-level system and a buffer gas treated as step-function potentials, reveals a relationship between the ratio of pressure broadening to pressure shift coefficients and a change in the wave function phase-factor, interpreted as reflecting the "cause and effect" of state-changing events in the microscopic domain. Collision cross-sections determined from this model are interpreted as reflecting the inelastic nature of collision-induced state-changing events. A steady-state kinetic model for the two-level system compatible with the Beer-Lambert law reveals thermodynamic constraints on the ensemble-average state-changing rates and collision cross-sections, and leads to the proposal of a relationship between observed asymmetric line shapes and irreversibility in the microscopic domain.

  13. [Natural selection associated with color vision defects in some population groups of Eurasia].

    PubMed

    Evsiukov, A N

    2014-01-01

    Fitness coefficients and other quantitative parameters of selection associated with the generalized color blindness gene CB+ were obtained for three ethnogeographic population groups, including Belarusians from Belarus, ethnic populations of the Volga-Ural region, and ethnic populations of Siberia and the Far East of Russia. All abnormalities encoded by the OPN1LW and OPN1MW loci were treated as deviations from normal color perception. Coefficients were estimated from an approximation of the observed CB+ frequency distributions to the theoretical stationary distribution for the Wright island model. This model takes into account the pressure of migrations, selection, and random genetic drift, while the selection parameters are represented in the form of the distribution parameters. In the populations of Siberia and the Far East, directional selection in favor of normal color vision and the corresponding allele CB- was observed. In the Belarusian populations and the ethnic populations of the Volga-Ural region, stabilizing selection was observed. The selection intensity constituted 0.03 in the Belarusians, 0.22 in the ethnic populations of the Volga-Ural region, and 0.24 in the ethnic populations of Siberia and the Far East.

  14. A dynamic system matching technique for improving the accuracy of MEMS gyroscopes

    NASA Astrophysics Data System (ADS)

    Stubberud, Peter A.; Stubberud, Stephen C.; Stubberud, Allen R.

    2014-12-01

    A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case for the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible, as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design tools for DSMT systems.
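
    A heuristic illustration of the switching idea, not the authors' exact setup: several nominally identical gyros differ by fixed bias and scale-factor errors; randomly switching among their outputs sample-by-sample turns those fixed errors into wideband noise whose out-of-band part a low-pass filter rejects, leaving roughly the ensemble-average error in band. All numerical values are assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    rng = np.random.default_rng(3)
    fs = 1000.0                                       # sample rate, Hz (assumed)
    t = np.arange(0.0, 10.0, 1.0 / fs)
    true_rate = 0.5 * np.sin(2 * np.pi * 1.0 * t)     # 1 Hz angular-rate signal

    n_gyros = 8
    scale_err = 1.0 + 0.02 * rng.normal(size=n_gyros) # per-gyro scale-factor errors
    bias_err = 0.05 * rng.normal(size=n_gyros)        # per-gyro bias errors
    outputs = scale_err[:, None] * true_rate + bias_err[:, None]

    # random switching: one gyro per sample, spreading the fixed errors in frequency
    pick = rng.integers(0, n_gyros, size=t.size)
    switched = outputs[pick, np.arange(t.size)]

    # low-pass filter passes the gyro band (10 Hz here) and rejects the rest
    b, a = butter(4, 10.0 / (fs / 2.0))
    dsmt_out = filtfilt(b, a, switched)

    rms = lambda e: np.sqrt(np.mean(e ** 2))
    print("single-gyro RMS error:", rms(outputs[0] - true_rate))
    print("DSMT output RMS error:", rms(dsmt_out - true_rate))
    ```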

  15. Designing and evaluating health systems level hypertension control interventions for African-Americans: lessons from a pooled analysis of three cluster randomized trials.

    PubMed

    Pavlik, Valory N; Chan, Wenyaw; Hyman, David J; Feldman, Penny; Ogedegbe, Gbenga; Schwartz, Joseph E; McDonald, Margaret; Einhorn, Paula; Tobin, Jonathan N

    2015-01-01

    African-Americans (AAs) have a high prevalence of hypertension and their blood pressure (BP) control on treatment still lags behind other groups. In 2004, NHLBI funded five projects that aimed to evaluate clinically feasible interventions to effect changes in medical care delivery leading to an increased proportion of AA patients with controlled BP. Three of the groups performed a pooled analysis of trial results to determine: 1) the magnitude of the combined intervention effect; and 2) how the pooled results could inform the methodology for future health-system level BP interventions. Using a cluster randomized design, the trials enrolled AAs with uncontrolled hypertension to test interventions targeting a combination of patient and clinician behaviors. The 12-month systolic BP (SBP) and diastolic BP (DBP) effects of intervention or control cluster assignment were assessed using mixed-effects longitudinal regression modeling. 2,015 patients representing 352 clusters participated across the three trials. Pooled BP slopes followed a quadratic pattern, with an initial decline followed by a rise toward baseline, and did not differ significantly between intervention and control clusters (SBP linear coefficient = -2.60 ± 0.21 mmHg per month, p < 0.001; quadratic coefficient = 0.167 ± 0.02 mmHg/month, p < 0.001; group × linear time interaction coefficient = 0.145 ± 0.293, p = 0.622; group × quadratic time interaction coefficient = -0.017 ± 0.026, p = 0.525). Results were similar for DBP. The individual sites did not have significant intervention effects when analyzed separately. Investigators planning behavioral trials to improve BP control in health systems serving AAs should plan for small effect sizes and employ a "run-in" period in which BP can be expected to improve in both experimental and control clusters.

  16. A dynamic system matching technique for improving the accuracy of MEMS gyroscopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stubberud, Peter A., E-mail: stubber@ee.unlv.edu; Stubberud, Stephen C., E-mail: scstubberud@ieee.org; Stubberud, Allen R., E-mail: stubberud@att.net

    A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case for the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible, as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design tools for DSMT systems.

  17. Capturing the Large Scale Behavior of Many Particle Systems Through Coarse-Graining

    NASA Astrophysics Data System (ADS)

    Punshon-Smith, Samuel

    This dissertation is concerned with two areas of investigation: the first is understanding the mathematical structures behind the emergence of macroscopic laws and the effects of small-scale fluctuations; the second involves the rigorous mathematical study of such laws and related questions of well-posedness. To address these areas the dissertation involves two parts: Part I concerns the theory of coarse-graining of many particle systems. We first investigate the mathematical structure behind the Mori-Zwanzig (projection operator) formalism by introducing two perturbative approaches to coarse-graining of systems that have an explicit scale separation. One concerns systems with little dissipation, while the other concerns systems with strong dissipation. In both settings we obtain an asymptotic series of 'corrections' to the limiting description which are small with respect to the scaling parameter; these corrections represent the effects of small scales. We determine that only certain approximations give rise to dissipative effects in the resulting evolution. Next we apply this framework to the problem of coarse-graining the locally conserved quantities of a classical Hamiltonian system. By lumping conserved quantities into a collection of mesoscopic cells, we obtain, through a series of approximations, a stochastic particle system that resembles a discretization of the non-linear equations of fluctuating hydrodynamics. We study this system in the case that the transport coefficients are constant and prove well-posedness of the stochastic dynamics. Part II concerns the mathematical description of models where the underlying characteristics are stochastic. Such equations can model, for instance, the dynamics of a passive scalar in a random (turbulent) velocity field or the statistical behavior of a collection of particles subject to random environmental forces. First, we study general well-posedness properties of stochastic transport equations with rough diffusion coefficients. Our main result is strong existence and uniqueness under certain regularity conditions on the coefficients, and uses the theory of renormalized solutions of transport equations adapted to the stochastic setting. Next, in work undertaken with collaborator Scott Smith, we study the Boltzmann equation with a stochastic forcing. The noise describing the forcing is white in time and colored in space and describes the effects of random environmental forces on a rarefied gas undergoing instantaneous, binary collisions. Under a cut-off assumption on the collision kernel and a coloring hypothesis for the noise coefficients, we prove the global existence of renormalized (DiPerna/Lions) martingale solutions to the Boltzmann equation for large initial data with finite mass, energy, and entropy. Our analysis includes a detailed study of weak martingale solutions to a class of linear stochastic kinetic equations. Tightness of the appropriate quantities is proved by an extension of the Skorohod theorem to non-metric spaces.

  18. Electrohydrodynamic properties of succinoglycan as probed by fluorescence correlation spectroscopy, potentiometric titration and capillary electrophoresis.

    PubMed

    Duval, Jérôme F L; Slaveykova, Vera I; Hosse, Monika; Buffle, Jacques; Wilkinson, Kevin J

    2006-10-01

    The electrostatic, hydrodynamic and conformational properties of aqueous solutions of succinoglycan have been analyzed by fluorescence correlation spectroscopy (FCS), proton titration, and capillary electrophoresis (CE) over a large range of pH values and electrolyte (NaCl) concentrations. Using the theoretical formalism developed previously for the electrokinetic properties of soft, permeable particles, a quantitative analysis for the electro-hydrodynamics of succinoglycan is performed by taking into account, in a self-consistent manner, the measured values of the diffusion coefficients, electric charge densities, and electrophoretic mobilities. For that purpose, two limiting conformations for the polysaccharide in solution are tested, i.e. succinoglycan behaves as (i) a spherical, random coil polymer or (ii) a rodlike particle with charged lateral chains. The results show that satisfactory modeling of the titration data for ionic strengths larger than 50 mM can be accomplished using both geometries over the entire range of pH values. Electrophoretic mobilities measured for sufficiently large pH values (pH > 5-6) are in line with predictions based on either model. The best manner to discriminate between these two conceptual models is briefly discussed. For low pH values (pH < 5), both models indicate aggregation, resulting in an increase of the hydrodynamic permeability and a decrease of the diffusion coefficient.

  19. Texture-preserved penalized weighted least-squares reconstruction of low-dose CT image via image segmentation and high-order MRF modeling

    NASA Astrophysics Data System (ADS)

    Han, Hao; Zhang, Hao; Wei, Xinzhou; Moore, William; Liang, Zhengrong

    2016-03-01

    In this paper, we proposed a low-dose computed tomography (LdCT) image reconstruction method that exploits prior knowledge learned from previous high-quality or normal-dose CT (NdCT) scans. The well-established statistical penalized weighted least squares (PWLS) algorithm was adopted for image reconstruction, where the penalty term was formulated by a texture-based Gaussian Markov random field (gMRF) model. The NdCT scan was first segmented into different tissue types by a feature vector quantization (FVQ) approach. Then, for each tissue type, a set of tissue-specific coefficients for the gMRF penalty was statistically learned from the NdCT image via multiple linear regression analysis. We also proposed a scheme to adaptively select the order of the gMRF model for coefficient prediction. The tissue-specific gMRF patterns learned from the NdCT image were finally used to form an adaptive MRF penalty for the PWLS reconstruction of the LdCT image. The proposed texture-adaptive PWLS image reconstruction algorithm was shown to preserve image textures more effectively than the conventional PWLS algorithm, and we further demonstrated the gain of high-order MRF modeling for texture-preserved LdCT PWLS image reconstruction.
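
    For orientation, a standard form of the PWLS objective with a quadratic MRF penalty is sketched below; the notation is assumed here, and the texture-adaptive variant described above replaces fixed neighborhood weights with the tissue-specific gMRF coefficients learned from the NdCT scan.

    \[
    \hat{\mu} \;=\; \arg\min_{\mu \ge 0}\; (y - A\mu)^{\mathsf T}\,\Sigma^{-1}\,(y - A\mu) \;+\; \beta \sum_{j} \sum_{k \in \mathcal{N}_j} w_{jk}\,(\mu_j - \mu_k)^2,
    \]

    where \(y\) is the measured sinogram, \(A\) the system matrix, \(\Sigma\) the diagonal matrix of measurement variances (the "weights"), \(\mathcal{N}_j\) the neighborhood of voxel \(j\), and \(\beta\) the penalty strength.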

  20. Modeling and Prediction of Solvent Effect on Human Skin Permeability using Support Vector Regression and Random Forest.

    PubMed

    Baba, Hiromi; Takahara, Jun-ichi; Yamashita, Fumiyoshi; Hashida, Mitsuru

    2015-11-01

    The solvent effect on skin permeability is important for assessing the effectiveness and toxicological risk of new dermatological formulations in pharmaceuticals and cosmetics development. The solvent effect occurs by diverse mechanisms, which could be elucidated by efficient and reliable prediction models. However, such prediction models have been hampered by the small variety of permeants and mixture components archived in databases and by low predictive performance. Here, we propose a solution to both problems. We first compiled a novel large database of 412 samples from 261 structurally diverse permeants and 31 solvents reported in the literature. The data were carefully screened to ensure their collection under consistent experimental conditions. To construct a high-performance predictive model, we then applied support vector regression (SVR) and random forest (RF) with greedy stepwise descriptor selection to our database. The models were internally and externally validated. The SVR achieved higher performance statistics than RF. The (externally validated) determination coefficient, root mean square error, and mean absolute error of SVR were 0.899, 0.351, and 0.268, respectively. Moreover, because all descriptors are fully computational, our method can predict as-yet unsynthesized compounds. Our high-performance prediction model offers an attractive alternative to permeability experiments for pharmaceutical and cosmetic candidate screening and optimizing skin-permeable topical formulations.
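
    A sketch of the SVR-versus-RF comparison described above on stand-in data: molecular descriptors X predicting a permeability response y. The descriptor values, hyperparameters, and cross-validation scheme are assumptions for illustration, not the authors' protocol.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    X = rng.normal(size=(412, 25))          # 412 samples x 25 computed descriptors
    y = X[:, :5].sum(axis=1) + 0.3 * rng.normal(size=412)

    svr = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1, gamma="scale"))
    rf = RandomForestRegressor(n_estimators=500, random_state=0)

    # 5-fold cross-validated coefficient of determination for each model
    for name, model in [("SVR", svr), ("RF", rf)]:
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(name, "mean CV R^2:", r2.mean().round(3))
    ```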

  1. Exploring QSARs of the interaction of flavonoids with GABA (A) receptor using MLR, ANN and SVM techniques.

    PubMed

    Deeb, Omar; Shaik, Basheerulla; Agrawal, Vijay K

    2014-10-01

    Quantitative Structure-Activity Relationship (QSAR) models for the binding affinity constants (log Ki) of 78 flavonoid ligands towards the benzodiazepine site of the GABA (A) receptor complex were calculated using the machine learning methods artificial neural network (ANN) and support vector machine (SVM). The models obtained were compared with those obtained using multiple linear regression (MLR) analysis. The descriptor selection and model building were performed with 10-fold cross-validation using the training data set. The SVM and MLR coefficient of determination values are 0.944 and 0.879, respectively, for the training set and are higher than those of the ANN models. Although the SVM model shows better fitting of the training set, the ANN model was superior to SVM and MLR in predicting the test set. A randomization test was employed to check the suitability of the models.

  2. Effects of intermode nonlinearity and intramode nonlinearity on modulation instability in randomly birefringent two-mode optical fibers

    NASA Astrophysics Data System (ADS)

    Li, Jin Hua; Xu, Hui; Sun, Ting Ting; Pei, Shi Xin; Ren, Hai Dong

    2018-05-01

    We analyze in detail the effects of intermode nonlinearity (IEMN) and intramode nonlinearity (IRMN) on modulation instability (MI) in randomly birefringent two-mode optical fibers (RB-TMFs). In the anomalous dispersion regime, the MI gain is enhanced significantly as the IEMN and IRMN coefficients increase. In the normal dispersion regime, MI can be generated without the differential mode group delay (DMGD) effect, as long as the IEMN coefficient between two distinct modes is above a critical value, or the IRMN coefficient inside a mode is below a critical value. This critical IEMN (IRMN) coefficient depends strongly on the given IRMN (IEMN) coefficient and DMGD for a given nonlinear RB-TMF structure, and is independent of the input total power, the power ratio distribution and the group velocity dispersion (GVD) ratio between the two modes. On the other hand, in contrast to the MI band arising from the pure effect of DMGD in the normal dispersion regime, where MI vanishes above a critical total power, the MI band generated under the combined effects of IEMN and IRMN without DMGD exists for any total power and is enhanced with increasing total power. The MI analysis is verified numerically by launching perturbed continuous waves (CWs) with the wave propagation method.

  3. Note: On the relation between Lifson-Jackson and Derrida formulas for effective diffusion coefficient

    NASA Astrophysics Data System (ADS)

    Kalnin, Juris R.; Berezhkovskii, Alexander M.

    2013-11-01

    The Lifson-Jackson formula provides the effective free diffusion coefficient for a particle diffusing in an arbitrary one-dimensional periodic potential. Its counterpart, when the underlying dynamics is described in terms of an unbiased nearest-neighbor Markovian random walk on a one-dimensional periodic lattice is given by the formula obtained by Derrida. It is shown that the latter formula can be considered as a discretized version of the Lifson-Jackson formula with correctly chosen position-dependent diffusion coefficient.
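
    For reference, the Lifson-Jackson formula can be written (standard notation assumed here) as

    \[
    D_{\mathrm{eff}} \;=\; \frac{D_0}{\big\langle e^{\beta U}\big\rangle_L\,\big\langle e^{-\beta U}\big\rangle_L},
    \qquad
    \langle f \rangle_L = \frac{1}{L}\int_0^L f(x)\,dx, \qquad \beta = \frac{1}{k_B T},
    \]

    where \(D_0\) is the bare diffusion coefficient and \(U(x)\) is the periodic potential with period \(L\). As the note argues, Derrida's lattice result shares this product-of-period-averages structure, with the averages taken as sums over the sites of one period and with an appropriately chosen position-dependent diffusion coefficient.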

  4. Doppler Temperature Coefficient Calculations Using Adjoint-Weighted Tallies and Continuous Energy Cross Sections in MCNP6

    NASA Astrophysics Data System (ADS)

    Gonzales, Matthew Alejandro

    The calculation of the thermal neutron Doppler temperature reactivity feedback coefficient, a key parameter in the design and safe operation of advanced reactors, using first order perturbation theory in continuous energy Monte Carlo codes is challenging as the continuous energy adjoint flux is not readily available. Traditional approaches of obtaining the adjoint flux attempt to invert the random walk process as well as require data corresponding to all temperatures and their respective temperature derivatives within the system in order to accurately calculate the Doppler temperature feedback. A new method has been developed using adjoint-weighted tallies and On-The-Fly (OTF) generated continuous energy cross sections within the Monte Carlo N-Particle (MCNP6) transport code. The adjoint-weighted tallies are generated during the continuous energy k-eigenvalue Monte Carlo calculation. The weighting is based upon the iterated fission probability interpretation of the adjoint flux, which is the steady state population in a critical nuclear reactor caused by a neutron introduced at that point in phase space. The adjoint-weighted tallies are produced in a forward calculation and do not require an inversion of the random walk. The OTF cross section database uses a high order functional expansion between points on a user-defined energy-temperature mesh in which the coefficients with respect to a polynomial fitting in temperature are stored. The coefficients of the fits are generated before run time and called upon during the simulation to produce cross sections at any given energy and temperature. The polynomial form of the OTF cross sections allows the possibility of obtaining temperature derivatives of the cross sections on-the-fly. The use of Monte Carlo sampling of adjoint-weighted tallies and the capability of computing derivatives of continuous energy cross sections with respect to temperature are used to calculate the Doppler temperature coefficient in a research version of MCNP6. Temperature feedback results from the cross sections themselves, changes in the probability density functions, as well as changes in the density of the materials. The focus of this work is specific to the Doppler temperature feedback which results from Doppler broadening of cross sections as well as changes in the probability density function within the scattering kernel. This method is compared against published results using Mosteller's numerical benchmark to show accurate evaluations of the Doppler temperature coefficient, fuel assembly calculations, and a benchmark solution based on the heavy gas model for free-gas elastic scattering. An infinite medium benchmark for neutron free gas elastic scattering for large scattering ratios and constant absorption cross section has been developed using the heavy gas model. An exact closed form solution for the neutron energy spectrum is obtained in terms of the confluent hypergeometric function and compared against spectra for the free gas scattering model in MCNP6. Results show a quick increase in convergence of the analytic energy spectrum to the MCNP6 code with increasing target size, showing absolute relative differences of less than 5% for neutrons scattering with carbon. The analytic solution has been generalized to accommodate piecewise constant in energy absorption cross section to produce temperature feedback.
Results reinforce the constraints under which heavy gas theory may be applied, as a significant target size is required to accommodate increasing cross-section structure. The energy-dependent piecewise-constant cross section heavy gas model was used to produce a benchmark calculation of the Doppler temperature coefficient, demonstrating accurate calculations when using the adjoint-weighted method. Results show that the Doppler temperature coefficient computed using adjoint weighting and cross-section derivatives obtains the correct solution within statistics while reducing computer runtimes by a factor of 50.

  5. Observed intra-cluster correlation coefficients in a cluster survey sample of patient encounters in general practice in Australia

    PubMed Central

    Knox, Stephanie A; Chondros, Patty

    2004-01-01

    Background Cluster sample study designs are cost effective, however cluster samples violate the simple random sample assumption of independence of observations. Failure to account for the intra-cluster correlation of observations when sampling through clusters may lead to an under-powered study. Researchers therefore need estimates of intra-cluster correlation for a range of outcomes to calculate sample size. We report intra-cluster correlation coefficients observed within a large-scale cross-sectional study of general practice in Australia, where the general practitioner (GP) was the primary sampling unit and the patient encounter was the unit of inference. Methods Each year the Bettering the Evaluation and Care of Health (BEACH) study recruits a random sample of approximately 1,000 GPs across Australia. Each GP completes details of 100 consecutive patient encounters. Intra-cluster correlation coefficients were estimated for patient demographics, morbidity managed and treatments received. Intra-cluster correlation coefficients were estimated for descriptive outcomes and for associations between outcomes and predictors and were compared across two independent samples of GPs drawn three years apart. Results Between April 1999 and March 2000, a random sample of 1,047 Australian general practitioners recorded details of 104,700 patient encounters. Intra-cluster correlation coefficients for patient demographics ranged from 0.055 for patient sex to 0.451 for language spoken at home. Intra-cluster correlations for morbidity variables ranged from 0.005 for the management of eye problems to 0.059 for management of psychological problems. Intra-cluster correlation for the association between two variables was smaller than the descriptive intra-cluster correlation of each variable. When compared with the April 2002 to March 2003 sample (1,008 GPs) the estimated intra-cluster correlation coefficients were found to be consistent across samples. Conclusions The demonstrated precision and reliability of the estimated intra-cluster correlations indicate that these coefficients will be useful for calculating sample sizes in future general practice surveys that use the GP as the primary sampling unit. PMID:15613248
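
    A sketch of the usual one-way ANOVA estimator of the intra-cluster correlation coefficient for a balanced design such as the one above (GPs as clusters, encounters as observations). The simulated data and variance components are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_gp, n_enc = 1000, 100                        # 1,000 GPs x 100 encounters each
    gp_effect = rng.normal(scale=0.3, size=n_gp)   # between-GP variation
    y = gp_effect[:, None] + rng.normal(scale=1.0, size=(n_gp, n_enc))

    # one-way ANOVA estimator: ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)
    grand = y.mean()
    msb = n_enc * np.sum((y.mean(axis=1) - grand) ** 2) / (n_gp - 1)
    msw = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (n_gp * (n_enc - 1))
    icc = (msb - msw) / (msb + (n_enc - 1) * msw)
    print("estimated ICC:", round(icc, 4))   # true value 0.09/(0.09+1) ~ 0.083
    ```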

  6. Backward deletion to minimize prediction errors in models from factorial experiments with zero to six center points

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1980-01-01

    Population model coefficients were chosen to simulate a saturated 2^4 fixed-effects experiment having an unfavorable distribution of relative values. Using random number studies, deletion strategies were compared that were based on the F-distribution, on an order statistics distribution of Cochran's, and on a combination of the two. The strategies were compared under the criterion of minimizing the maximum prediction error, wherever it occurred, among the two-level factorial points. The strategies were evaluated for each of the conditions of 0, 1, 2, 3, 4, 5, or 6 center points. Three classes of strategies were identified as being appropriate, depending on the extent of the experimenter's prior knowledge. In almost every case the best strategy was found to be unique according to the number of center points. Among the three classes of strategies, a security regret class of strategy was demonstrated as being widely useful in that over a range of coefficients of variation from 4 to 65%, the maximum predictive error was never increased by more than 12% over what it would have been if the best strategy had been used for the particular coefficient of variation. The relative efficiency of the experiment, when using the security regret strategy, was examined as a function of the number of center points, and was found to be best when the design used one center point.

  7. Accurate reconstruction of the optical parameter distribution in participating medium based on the frequency-domain radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Qiao, Yao-Bin; Qi, Hong; Zhao, Fang-Zhou; Ruan, Li-Ming

    2016-12-01

    Reconstructing the distribution of optical parameters in the participating medium based on the frequency-domain radiative transfer equation (FD-RTE) to probe the internal structure of the medium is investigated in the present work. The forward model of FD-RTE is solved via the finite volume method (FVM). The regularization term formulated by the generalized Gaussian Markov random field model is used in the objective function to overcome the ill-posed nature of the inverse problem. The multi-start conjugate gradient (MCG) method is employed to search for the minimum of the objective function and increase the efficiency of convergence. A modified adjoint differentiation technique using the collimated radiative intensity is developed to calculate the gradient of the objective function with respect to the optical parameters. All simulation results show that the proposed reconstruction algorithm based on FD-RTE can obtain accurate distributions of absorption and scattering coefficients. The reconstructed images of the scattering coefficient have smaller errors than those of the absorption coefficient, which indicates that the former is more suitable for probing the inner structure. Project supported by the National Natural Science Foundation of China (Grant No. 51476043), the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51121004).

  8. Racial discrimination and the stress process.

    PubMed

    Ong, Anthony D; Fuller-Rowell, Thomas; Burrow, Anthony L

    2009-06-01

    The unique and combined effects of chronic and daily racial discrimination on psychological distress were examined in a sample of 174 African American doctoral students and graduates. Using a daily process design, 5 models of the stress process were tested. Multilevel random coefficient modeling analyses revealed that chronic exposure to racial discrimination predicted greater daily discrimination and psychological distress. Further, results show that differences in daily discrimination and negative events accounted for meaningful variation in daily distress responses. Finally, findings indicate that daily discrimination and negative events mediated the relationship between chronic discrimination and psychological distress. The study provides support for the need to measure chronic strains as distinctive from daily stressors in the lives of African Americans.

  9. The leaf-shape effect on electromagnetic scattering from vegetated media

    NASA Technical Reports Server (NTRS)

    Karam, M. A.; Fung, A. K.; Blanchard, A. J.; Shen, G. X.

    1988-01-01

    Using the generalized Rayleigh Gans approximation along with the radiative transfer method, a bistatic backscattering model for a layer of randomly oriented, elliptic-shaped leaves is formulated. Following a similar procedure the bistatic scattering model for a layer of needle-shaped leaves is also developed to simulate coniferous vegetation. The differences between the scattering characteristics of the deciduous and coniferous leaves are illustrated numerically for different orientation and incidence angles. It is found that both like and cross polarizations are needed to differentiate the difference in scattering due to the shapes of the scatterers. The calculated backscattering coefficients are compared with measured values from artificial canopies with circular-shaped leaves.

  10. A Geometrical Framework for Covariance Matrices of Continuous and Categorical Variables

    ERIC Educational Resources Information Center

    Vernizzi, Graziano; Nakai, Miki

    2015-01-01

    It is well known that a categorical random variable can be represented geometrically by a simplex. Accordingly, several measures of association between categorical variables have been proposed and discussed in the literature. Moreover, the standard definitions of covariance and correlation coefficient for continuous random variables have been…

  11. Optimisation of an idealised primitive equation ocean model using stochastic parameterization

    NASA Astrophysics Data System (ADS)

    Cooper, Fenwick C.

    2017-05-01

    Using a simple parameterization, an idealised low resolution (biharmonic viscosity coefficient of 5 × 10^12 m^4 s^-1, 128 × 128 grid) primitive equation baroclinic ocean gyre model is optimised to have a much more accurate climatological mean, variance and response to forcing, in all model variables, with respect to a high resolution (biharmonic viscosity coefficient of 8 × 10^10 m^4 s^-1, 512 × 512 grid) equivalent. For example, the change in the climatological mean due to a small change in the boundary conditions is more accurate in the model with parameterization. Both the low resolution and high resolution models are strongly chaotic. We also find that long timescales in the model temperature auto-correlation at depth are controlled by the vertical temperature diffusion parameter and time mean vertical advection and are caused by short timescale random forcing near the surface. This paper extends earlier work that considered a shallow water barotropic gyre. Here the analysis is extended to a more turbulent multi-layer primitive equation model that includes temperature as a prognostic variable. The parameterization consists of a constant forcing, applied to the velocity and temperature equations at each grid point, which is optimised to obtain a model with an accurate climatological mean, and a linear stochastic forcing, that is optimised to also obtain an accurate climatological variance and 5 day lag auto-covariance. A linear relaxation (nudging) is not used. Conservation of energy and momentum is discussed in an appendix.

  12. Effectiveness of a cognitive behavioural workbook for changing beliefs about antipsychotic polypharmacy: analysis from a cluster randomized controlled trial.

    PubMed

    Thompson, Andrew; Sullivan, Sarah; Barley, Maddi; Moore, Laurence; Rogers, Paul; Sipos, Attila; Harrison, Glynn

    2010-06-01

    Educational workbooks have been used in psychiatry to influence patient but not clinician behaviour. Targeted education interventions to change prescribing practice in other areas of medicine have only looked at changes in prescribing and not attitudes or beliefs related to the prescribing. We aimed to examine whether clinicians' beliefs about a common prescribing issue in psychiatry (antipsychotic polypharmacy prescription) changed alongside behaviour as a result of a complex intervention. Medical and nursing staff were recruited from 19 general adult psychiatry units in the south-west of the UK as part of a cluster randomized controlled trial. A questionnaire was used to assess beliefs on the prescribing of antipsychotic polypharmacy as a secondary outcome before and after completion of a cognitive behavioural 'self-help' style workbook (one part of a complex intervention). A factor analysis suggested three dimensions of the questionnaire that corresponded to predetermined themes. The data were analysed using a random-effects regression model (adjusting for clustering) controlling for possible confounders. There was a significant change in beliefs on both of the factors: antipsychotic polypharmacy (coefficient = -0.89, P < 0.01) and rapid tranquilization (coefficient = -0.68, P = 0.01) specifically targeted by the workbook. There was a modest but statistically significant change in antipsychotic polypharmacy prescribing (odds ratio 0.43, 95% confidence intervals 0.21-0.90). The workbook appeared to change staff beliefs about antipsychotic polypharmacy, but achieving substantial changes in clinician behaviour may require further exploration of other factors important in complex prescribing issues.

  13. Entanglement Entropy of Eigenstates of Quantum Chaotic Hamiltonians.

    PubMed

    Vidmar, Lev; Rigol, Marcos

    2017-12-01

    In quantum statistical mechanics, it is of fundamental interest to understand how close the bipartite entanglement entropy of eigenstates of quantum chaotic Hamiltonians is to maximal. For random pure states in the Hilbert space, the average entanglement entropy is known to be nearly maximal, with a deviation that is, at most, a constant. Here we prove that, in a system that is away from half filling and divided in two equal halves, an upper bound for the average entanglement entropy of random pure states with a fixed particle number and normally distributed real coefficients exhibits a deviation from the maximal value that grows with the square root of the volume of the system. Exact numerical results for highly excited eigenstates of a particle number conserving quantum chaotic model indicate that the bound is saturated with increasing system size.

  14. Genetic parameters of legendre polynomials for first parity lactation curves.

    PubMed

    Pool, M H; Janss, L L; Meuwissen, T H

    2000-11-01

    Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 to 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment part. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
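
    A sketch of the Legendre covariates that underlie such random regression test-day models: days in milk (DIM) are mapped to [-1, 1] and evaluated against Legendre polynomials up to the chosen order; each animal's random coefficients then multiply these columns. The order and DIM range below are commonly used values, assumed here for illustration (breeding applications often use the normalized polynomials; the unnormalized ones are shown for simplicity).

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def legendre_basis(dim, order=4, dim_min=5, dim_max=305):
        # rescale DIM to [-1, 1], the natural domain of Legendre polynomials
        x = 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min) - 1.0
        # column j holds P_j(x); np.eye selects the j-th polynomial's coefficients
        return np.column_stack([legendre.legval(x, np.eye(order + 1)[j])
                                for j in range(order + 1)])

    Z = legendre_basis([5, 50, 150, 250, 305], order=4)
    print(Z.shape)   # (5, 5): one row per test day, one column per P_j
    ```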

  15. Basis adaptation and domain decomposition for steady partial differential equations with random coefficients

    DOE PAGES

    Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.

    2017-09-04

    In this paper, we present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Lastly, our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.

  16. Basis adaptation and domain decomposition for steady-state partial differential equations with random coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.

    We present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.

  17. Analysis of bacterial migration. 2: Studies with multiple attractant gradients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, I.; Frymier, P.D.; Hahn, C.M.

    1995-02-01

    Many motile bacteria exhibit chemotaxis, the ability to bias their random motion toward or away from increasing concentrations of chemical substances which benefit or inhibit their survival, respectively. Since bacteria encounter numerous chemical concentration gradients simultaneously in natural surroundings, it is necessary to know quantitatively how a bacterial population responds in the presence of more than one chemical stimulus to develop predictive mathematical models describing bacterial migration in natural systems. This work evaluates three hypothetical models describing the integration of chemical signals from multiple stimuli: high sensitivity, maximum signal, and simple additivity. An expression for the tumbling probability for individual stimuli is modified according to the proposed models and incorporated into the cell balance equation for a 1-D attractant gradient. Random motility and chemotactic sensitivity coefficients, required input parameters for the model, are measured for single stimulus responses. Theoretical predictions with the three signal integration models are compared to the net chemotactic response of Escherichia coli to co- and antidirectional gradients of D-fucose and [alpha]-methylaspartate in the stopped-flow diffusion chamber assay. Results eliminate the high-sensitivity model and favor the simple additivity over the maximum signal. None of the simple models, however, accurately predict the observed behavior, suggesting a more complex model with more steps in the signal processing mechanism is required to predict responses to multiple stimuli.

  18. Modeling short duration extreme precipitation patterns using copula and generalized maximum pseudo-likelihood estimation with censoring

    NASA Astrophysics Data System (ADS)

    Bargaoui, Zoubeida Kebaili; Bardossy, Andràs

    2015-10-01

    The paper develops research on estimating the spatial variability of heavy rainfall events using spatial copula analysis. To demonstrate the methodology, short time resolution rainfall time series from the Stuttgart region are analyzed. They consist of rainfall observations on a continuous 30-min time scale recorded by a network of 17 rain gauges for the period July 1989-July 2004. The analysis is performed aggregating the observations from 30 min up to 24 h. Two parametric bivariate extreme copula models, the Hüsler-Reiss model and the Gumbel model, are investigated. Both involve a single parameter to be estimated, so model fitting is performed for every pair of stations at a given time resolution. A rainfall threshold value representing a fixed rainfall quantile is adopted for model inference. Generalized maximum pseudo-likelihood estimation with censoring is adopted, by analogy with methods of univariate estimation that combine historical and paleoflood information with systematic data; only pairs of observations greater than the threshold are treated as systematic data. Using the estimated copula parameter, a synthetic copula field is randomly generated and helps evaluate model adequacy, which is assessed using the Kolmogorov-Smirnov distance test. In order to assess dependence or independence in the upper tail, the extremal coefficient, which characterises the tail of the joint bivariate distribution, is adopted and reported as a function of the distance between stations. If it is less than 1.7, stations are interpreted as dependent in the extremes. The analysis of the fitted extremal coefficients with respect to inter-station distance highlights two regimes with different dependence structures: a short spatial extent regime linked to short duration intervals (from 30 min to 6 h) with an extent of about 8 km, and a large spatial extent regime related to longer rainfall intervals (from 12 h to 24 h) with an extent of 34 to 38 km.
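
    For common margins, the extremal coefficient \(\theta \in [1, 2]\) used above is defined through

    \[
    \Pr(X \le z,\; Y \le z) \;=\; \Pr(X \le z)^{\theta},
    \]

    with \(\theta = 1\) for complete dependence and \(\theta = 2\) for independence, which motivates a cut-off such as 1.7 for declaring stations dependent in the extremes. As a concrete instance (notation assumed here), the Gumbel copula with parameter \(\theta_G \ge 1\) gives \(C(u, u) = u^{2^{1/\theta_G}}\), hence \(\theta = 2^{1/\theta_G}\).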

  19. Promoting motivation through mode of instruction: The relationship between use of affective teaching techniques and motivation to learn science

    NASA Astrophysics Data System (ADS)

    Sanchez Rivera, Yamil

    The purpose of this study is to add to what we know about the affective domain and to create a valid instrument for future studies. The Motivation to Learn Science (MLS) Inventory is based on Krathwohl's Taxonomy of Affective Behaviors (Krathwohl et al., 1964). The results of the Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) demonstrated that the MLS Inventory is a valid and reliable instrument: a uni-dimensional instrument composed of 9 items with convergent validity (no divergence). The instrument had a high Cronbach's alpha of .898 in the EFA analysis and .919 in the CFA analysis. Factor loadings on the 9 items ranged from .617 to .800. Standardized regression weights ranged from .639 to .835 in the CFA analysis. Various indices (RMSEA = .033; NFI = .987; GFI = .985; CFI = 1.000) demonstrated a good fit of the proposed model. Hierarchical linear modeling was used to statistically analyze data in which students' motivation to learn science scores (level-1) were nested within teachers (level-2). The analysis was geared toward identifying whether teachers' use of affective behavior (a level-2 classroom variable) was significantly related to students' MLS scores (the level-1 criterion variable). Model testing proceeded in three phases: an intercept-only model, a means-as-outcome model, and a random-regression coefficient model. The intercept-only model revealed an intra-class correlation coefficient of .224 with an estimated reliability of .726. The data thus suggested that only 22.4% of the variance in MLS scores is between classes and the remaining 77.6% is at the student level. Due to the significant variance in MLS scores, χ2(62.756, p < .0001), teachers' TAB scores were added as a level-2 predictor. The regression coefficient was non-significant (p > .05). Therefore, the teachers' self-reported use of affective behaviors was not a significant predictor of students' motivation to learn science.
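
    A sketch of the first two HLM steps described above using statsmodels' MixedLM: an intercept-only model to obtain the ICC, then the level-2 predictor (teacher TAB score) added. The data are simulated stand-ins; the true TAB effect is set to zero to mirror the reported non-significance.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n_teachers, n_students = 40, 25
    tab = rng.normal(size=n_teachers)                # level-2 predictor (TAB)
    u = rng.normal(scale=0.6, size=n_teachers)       # class-level random effects
    rows = []
    for t in range(n_teachers):
        mls = 3.0 + u[t] + rng.normal(scale=1.0, size=n_students)  # TAB effect = 0
        rows += [{"teacher": t, "tab": tab[t], "mls": m} for m in mls]
    df = pd.DataFrame(rows)

    # (1) intercept-only model: ICC = tau^2 / (tau^2 + sigma^2)
    null = smf.mixedlm("mls ~ 1", df, groups=df["teacher"]).fit()
    tau2, sigma2 = float(null.cov_re.iloc[0, 0]), null.scale
    print("ICC:", round(tau2 / (tau2 + sigma2), 3))   # true value ~ 0.36/1.36

    # (2) means-as-outcome style model with the level-2 predictor added
    m2 = smf.mixedlm("mls ~ tab", df, groups=df["teacher"]).fit()
    print(m2.params)    # the "tab" coefficient should be near zero here
    ```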

  20. A syringe-sharing model for the spread of HIV: application to Omsk, Western Siberia.

    PubMed

    Artzrouni, Marc; Leonenko, Vasiliy N; Mara, Thierry A

    2017-03-01

    A system of two differential equations is used to model the transmission dynamics of human immunodeficiency virus between 'persons who inject drugs' (PWIDs) and their syringes. Our vector-borne disease model hinges on a metaphorical urn from which PWIDs draw syringes at random; a drawn syringe may or may not be infected, and the draw may or may not result in one of the two agents becoming infected. The model's parameters are estimated with data mostly from the city of Omsk in Western Siberia. A linear trend in PWID prevalence in Omsk could only be fitted by considering a time-dependent version of the model, captured through a secular decrease in the probability that PWIDs decide to share a syringe. A global sensitivity analysis is performed with 14 parameters considered random variables in order to assess their impact on average numbers infected over a 50-year projection. With obvious implications for intervention, the drug injection rate and the probability of syringe cleansing are the only parameters whose coefficients of correlation with the numbers of infected PWIDs and infected syringes have an absolute value close to or larger than 0.40. © The authors 2015. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
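
    A generic two-compartment sketch in the spirit of the abstract, with PWID and syringe infected fractions coupled like host and vector. These equations and all parameter values are illustrative assumptions, not the authors' model.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    inj_rate, p_share, p_clean = 120.0 / 365, 0.3, 0.5   # per-day rates (assumed)
    beta_ps, beta_sp = 0.01, 0.5     # infection probabilities per shared injection
    mu, tau = 1.0 / (10 * 365), 1.0 / 30   # PWID exit rate, syringe turnover rate

    def rhs(t, y):
        p, s = y                     # infected fractions: PWIDs, syringes
        share = inj_rate * p_share   # rate of shared-injection events
        dp = share * beta_ps * (1 - p_clean) * s * (1 - p) - mu * p
        ds = share * beta_sp * p * (1 - s) - tau * s
        return [dp, ds]

    sol = solve_ivp(rhs, (0.0, 50 * 365), [0.01, 0.0], rtol=1e-8)
    print("PWID prevalence after 50 years:", round(sol.y[0, -1], 3))
    ```

    Varying inj_rate and p_clean in such a sketch is the analog of the sensitivity finding quoted above: those two inputs dominate the long-run numbers infected.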

  1. An Improved Method of Predicting Extinction Coefficients for the Determination of Protein Concentration.

    PubMed

    Hilario, Eric C; Stern, Alan; Wang, Charlie H; Vargas, Yenny W; Morgan, Charles J; Swartz, Trevor E; Patapoff, Thomas W

    2017-01-01

    Concentration determination is an important method of protein characterization required in the development of protein therapeutics. There are many known methods for determining the concentration of a protein solution, but the easiest to implement in a manufacturing setting is absorption spectroscopy in the ultraviolet region. For typical proteins composed of the standard amino acids, absorption at wavelengths near 280 nm is due to the three amino acid chromophores tryptophan, tyrosine, and phenylalanine in addition to a contribution from disulfide bonds. According to the Beer-Lambert law, absorbance is proportional to concentration and path length, with the proportionality constant being the extinction coefficient. Typically the extinction coefficient of proteins is experimentally determined by measuring a solution absorbance then experimentally determining the concentration, a measurement with some inherent variability depending on the method used. In this study, extinction coefficients were calculated based on the measured absorbance of model compounds of the four amino acid chromophores. These calculated values for an unfolded protein were then compared with an experimental concentration determination based on enzymatic digestion of proteins. The experimentally determined extinction coefficient for the native proteins was consistently found to be 1.05 times the calculated value for the unfolded proteins for a wide range of proteins with good accuracy and precision under well-controlled experimental conditions. The value of 1.05 times the calculated value was termed the predicted extinction coefficient. Statistical analysis shows that the differences between predicted and experimentally determined coefficients are scattered randomly, indicating no systematic bias between the values among the proteins measured. The predicted extinction coefficient was found to be accurate and not subject to the inherent variability of experimental methods. We propose the use of a predicted extinction coefficient for determining the protein concentration of therapeutic proteins starting from early development through the lifecycle of the product. LAY ABSTRACT: Knowing the concentration of a protein in a pharmaceutical solution is important to the drug's development and posology. There are many ways to determine the concentration, but the easiest one to use in a testing lab employs absorption spectroscopy. Absorbance of ultraviolet light by a protein solution is proportional to its concentration and path length; the proportionality constant is the extinction coefficient. The extinction coefficient of a protein therapeutic is usually determined experimentally during early product development and has some inherent method variability. In this study, extinction coefficients of several proteins were calculated based on the measured absorbance of model compounds. These calculated values for an unfolded protein were then compared with experimental concentration determinations based on enzymatic digestion of the proteins. The experimentally determined extinction coefficient for the native protein was 1.05 times the calculated value for the unfolded protein with good accuracy and precision under controlled experimental conditions, so the value of 1.05 times the calculated coefficient was called the predicted extinction coefficient. Comparison of predicted and measured extinction coefficients indicated that the predicted value was very close to the experimentally determined values for the proteins. 
The predicted extinction coefficient was accurate and removed the variability inherent in experimental methods. © PDA, Inc. 2017.
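
    As a rough illustration of the Beer-Lambert arithmetic described above (not code from the study), the sketch below converts an A280 reading into a concentration using a hypothetical calculated extinction coefficient scaled by the 1.05 factor; all numeric values are invented.

```python
# Beer-Lambert concentration from A280, using the predicted (1.05x) coefficient.
# The absorbance, coefficient, and path length below are illustrative only.

def concentration_mg_per_ml(absorbance_280, calc_ext_coeff, path_length_cm=1.0):
    """Concentration from A280 via c = A / (eps * l).

    calc_ext_coeff: calculated extinction coefficient of the *unfolded*
    protein in (mg/mL)^-1 cm^-1; the predicted coefficient for the native
    protein is 1.05 times this value, per the study's finding.
    """
    predicted = 1.05 * calc_ext_coeff                      # predicted native coefficient
    return absorbance_280 / (predicted * path_length_cm)

# Example: A280 = 0.70, calculated coefficient 1.40 (mg/mL)^-1 cm^-1
print(concentration_mg_per_ml(0.70, 1.40))                 # ~0.476 mg/mL
```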

  2. Linear mixed-effects models to describe individual tree crown width for China-fir in Fujian Province, southeast China.

    PubMed

    Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu

    2015-01-01

    A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. An ordinary linear least squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random-effects combinations for the LME models were determined by Akaike's information criterion, the Bayesian information criterion, and the -2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function, and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). The LME model was then compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R2). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
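
    A short sketch of the OLS-versus-mixed-model comparison described above, using statsmodels on synthetic data; the column names (cw, dbh, plot) and the single predictor are placeholders, not the authors' variable set.

```python
# One-level LME with a random plot effect vs. a plain OLS baseline.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_plots, n_trees = 10, 20
df = pd.DataFrame({
    "plot": np.repeat(np.arange(n_plots), n_trees),
    "dbh": rng.uniform(5, 30, n_plots * n_trees),
})
plot_effect = rng.normal(0, 0.5, n_plots)          # random intercept per plot
df["cw"] = (0.8 + 0.12 * df["dbh"] + plot_effect[df["plot"]]
            + rng.normal(0, 0.3, len(df)))

ols = smf.ols("cw ~ dbh", data=df).fit()                       # baseline linear model
lme = smf.mixedlm("cw ~ dbh", data=df, groups=df["plot"]).fit(reml=False)
print(-2 * ols.llf, -2 * lme.llf)                  # -2 log-likelihood, as in the study
```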

  3. Pharmacophore modeling and virtual screening to identify potential RET kinase inhibitors.

    PubMed

    Shih, Kuei-Chung; Shiau, Chung-Wai; Chen, Ting-Shou; Ko, Ching-Huai; Lin, Chih-Lung; Lin, Chun-Yuan; Hwang, Chrong-Shiong; Tang, Chuan-Yi; Chen, Wan-Ru; Huang, Jui-Wen

    2011-08-01

    A chemical-feature-based 3D pharmacophore model for REarranged during Transfection (RET) tyrosine kinase was developed by using a training set of 26 structurally diverse known RET inhibitors. The best pharmacophore hypothesis, which identified inhibitors with an associated correlation coefficient of 0.90 between their experimental and estimated anti-RET values, contained one hydrogen-bond acceptor, one hydrogen-bond donor, one hydrophobic feature, and one ring-aromatic feature. The model was further validated by a testing set, Fischer's randomization test, and the goodness of hit (GH) test. We applied this pharmacophore model to screen the NCI database for potential RET inhibitors. The hits were docked to RET with GOLD and CDOCKER after filtering by Lipinski's rules. Ultimately, 24 molecules were selected as potential RET inhibitors for further investigation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Anomalous diffusion in the evolution of soccer championship scores: Real data, mean-field analysis, and an agent-based model

    NASA Astrophysics Data System (ADS)

    da Silva, Roberto; Vainstein, Mendeli H.; Gonçalves, Sebastián; Paula, Felipe S. F.

    2013-08-01

    Statistics of soccer tournament scores based on the double round robin system of several countries are studied. Exploring the dynamics of team scoring during tournament seasons from recent years, we find evidence of superdiffusion. A mean-field analysis results in a drift velocity equal to that of real data but in a different diffusion coefficient. Along with the analysis of real data we present the results of simulations of soccer tournaments obtained by an agent-based model which successfully describes the final scoring distribution [da Silva et al., Comput. Phys. Commun. 184, 661 (2013)]. Such a model yields random walks of scores over time with the same anomalous diffusion as observed in real data.
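
    To make the diagnostic concrete, here is a toy sketch (not the authors' model) that estimates the exponent alpha in MSD(t) ~ t^alpha from cumulative score trajectories; i.i.d. match outcomes give alpha near 1 (normal diffusion), whereas the superdiffusion reported above corresponds to alpha > 1.

```python
# Estimate the anomalous-diffusion exponent of score trajectories.
import numpy as np

rng = np.random.default_rng(1)
n_teams, n_rounds = 200, 38           # many synthetic "teams" for a stable variance
# +3 (win), +1 (draw), 0 (loss) per round; i.i.d. here for illustration only
points = rng.choice([0, 1, 3], size=(n_teams, n_rounds), p=[0.4, 0.25, 0.35])
scores = np.cumsum(points, axis=1)    # one score trajectory per team

t = np.arange(1, n_rounds + 1)
msd = np.var(scores, axis=0)          # spread of scores across teams at each round
alpha = np.polyfit(np.log(t[5:]), np.log(msd[5:]), 1)[0]
print(f"estimated diffusion exponent alpha = {alpha:.2f}")  # ~1 for i.i.d. steps
```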

  5. The incidence of the different sources of noise on the uncertainty in radiochromic film dosimetry using single channel and multichannel methods

    NASA Astrophysics Data System (ADS)

    González-López, Antonio; Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen

    2017-11-01

    The influence of the various sources of noise on the uncertainty in radiochromic film (RCF) dosimetry using single channel and multichannel methods is investigated in this work. These sources of noise are extracted from pixel value (PV) readings and dose maps. Pieces of an RCF were each irradiated to different uniform doses, ranging from 0 to 1092 cGy. Then, the pieces were read at two resolutions (72 and 150 dpi) with two flatbed scanners: Epson 10000XL and Epson V800, representing two states of technology. Noise was extracted as described in ISO 15739 (2013), separating its distinct constituents: random noise and fixed pattern (FP) noise. Regarding the PV maps, FP noise is the main source of noise for both models of digitizer. Also, the standard deviation of the random noise in the 10000XL model is almost twice that of the V800 model. In the dose maps, the FP noise is smaller in the multichannel method than in the single channel methods. However, random noise is higher in the multichannel method throughout the dose range. In the multichannel method, FP noise is reduced as a consequence of this method's ability to eliminate channel-independent perturbations. However, the random noise increases, because the dose is calculated as a linear combination of the doses obtained by the single channel methods. The values of the coefficients of this linear combination are obtained in the present study, and the root of the sum of their squares is shown to range between 0.9 and 1.9 over the dose range studied. These results indicate that random noise plays a fundamental role in the uncertainty of RCF dosimetry: low levels of random noise are required in the digitizer to fully exploit the advantages of the multichannel dosimetry method. This is particularly important for measuring high doses at high spatial resolutions.
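
    The quadrature argument in the last point is easy to reproduce. A small numeric sketch with invented coefficients and noise levels (not the paper's fitted values):

```python
# Propagation of independent random noise through a linear channel combination:
# if D = sum_i a_i * D_i, then sigma_D = sqrt(sum_i (a_i * sigma_i)^2), and the
# amplification factor is sqrt(sum a_i^2) (reported to lie between 0.9 and 1.9).
import numpy as np

a = np.array([0.55, 0.80, 0.45])        # hypothetical combination coefficients
sigma_ch = np.array([1.0, 1.2, 2.0])    # per-channel random-noise SD (cGy), illustrative

sigma_multi = np.sqrt(np.sum((a * sigma_ch) ** 2))   # propagated random noise
amplification = np.sqrt(np.sum(a ** 2))              # root of the sum of squares
print(sigma_multi, amplification)
```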

  6. Explicating the Conditions Under Which Multilevel Multiple Imputation Mitigates Bias Resulting from Random Coefficient-Dependent Missing Longitudinal Data.

    PubMed

    Gottfredson, Nisha C; Sterba, Sonya K; Jackson, Kristina M

    2017-01-01

    Random coefficient-dependent (RCD) missingness is a non-ignorable mechanism through which missing data can arise in longitudinal designs. RCD, for which we cannot test, is a problematic form of missingness that occurs if subject-specific random effects correlate with propensity for missingness or dropout. Particularly when covariate missingness is a problem, investigators typically handle missing longitudinal data by using single-level multiple imputation procedures implemented with long-format data, which ignores within-person dependency entirely, or implemented with wide-format (i.e., multivariate) data, which ignores some aspects of within-person dependency. When either of these standard approaches to handling missing longitudinal data is used, RCD missingness leads to parameter bias and incorrect inference. We explain why multilevel multiple imputation (MMI) should alleviate bias induced by an RCD missing data mechanism under conditions that contribute to stronger determinacy of random coefficients. We evaluate our hypothesis with a simulation study. Three design factors are considered: intraclass correlation (ICC; ranging from .25 to .75), number of waves (ranging from 4 to 8), and percent of missing data (ranging from 20 to 50%). We find that MMI greatly outperforms the single-level wide-format (multivariate) method for imputation under an RCD mechanism. For the MMI analyses, bias was most alleviated when the ICC was high, when there were more waves of data, and when there was less missing data. Practical recommendations for handling longitudinal missing data are suggested.

  7. Dissecting random and systematic differences between noisy composite data sets.

    PubMed

    Diederichs, Kay

    2017-04-01

    Composite data sets measured on different objects are usually affected by random errors, but may also be influenced by systematic (genuine) differences in the objects themselves, or the experimental conditions. If the individual measurements forming each data set are quantitative and approximately normally distributed, a correlation coefficient is often used to compare data sets. However, the relations between data sets are not obvious from the matrix of pairwise correlations since the numerical value of the correlation coefficient is lowered by both random and systematic differences between the data sets. This work presents a multidimensional scaling analysis of the pairwise correlation coefficients which places data sets into a unit sphere within low-dimensional space, at a position given by their CC* values [as defined by Karplus & Diederichs (2012), Science, 336, 1030-1033] in the radial direction and by their systematic differences in one or more angular directions. This dimensionality reduction can not only be used for classification purposes, but also to derive data-set relations on a continuous scale. Projecting the arrangement of data sets onto the subspace spanned by systematic differences (the surface of a unit sphere) allows, irrespective of the random-error levels, the identification of clusters of closely related data sets. The method gains power with increasing numbers of data sets. It is illustrated with an example from low signal-to-noise ratio image processing, and an application in macromolecular crystallography is shown, but the approach is completely general and thus should be widely applicable.
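
    A minimal sketch of the embedding idea (not the paper's exact construction): treat pairwise correlations as similarities, convert them to distances, and let multidimensional scaling place the data sets in a low-dimensional space where closely related sets cluster.

```python
# Multidimensional scaling of a pairwise correlation-coefficient matrix.
import numpy as np
from sklearn.manifold import MDS

cc = np.array([[1.0, 0.9, 0.2],
               [0.9, 1.0, 0.25],
               [0.2, 0.25, 1.0]])        # illustrative pairwise CCs
dissim = np.sqrt(2.0 * (1.0 - cc))       # one common correlation-to-distance map

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)
print(coords)                            # related sets 0 and 1 land close together
```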

  8. [Application of wavelet transform and neural network in the near-infrared spectrum analysis of oil shale].

    PubMed

    Li, Su-Yi; Ji, Yan-Ju; Liu, Wei-Yu; Wang, Zhi-Hong

    2013-04-01

    In the present study, an innovative method is proposed, employing both a wavelet transform and a neural network, to analyze near-infrared spectrum data in oil shale surveys. The method entails using a db8 wavelet with three-level decomposition to process the raw data, using the transformed data as the input matrix, and creating the model through a neural network. To verify the validity of the method, this study analyzes 30 synthesized oil shale samples, in which 20 samples are randomly selected for network training and the other 10 for model prediction, and uses the full spectrum and the wavelet-transformed spectrum to build 10 network models each. Results show that the mean modeling time for the full-spectrum neural network is 570.33 seconds, and the predicted residual sum of squares (PRESS) and correlation coefficient of prediction are 0.006 012 and 0.843 75, respectively. In contrast, the mean modeling time for the wavelet network method is 3.15 seconds, and the mean PRESS and correlation coefficient of prediction are 0.002 048 and 0.953 19, respectively. These results demonstrate that the wavelet neural network modeling method is significantly superior to the full-spectrum neural network modeling method. This study not only provides a new method for more efficient and accurate detection of the oil content of oil shale, but also indicates the potential for applying wavelet transforms and neural networks in broad near-infrared spectrum analysis.
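
    A sketch of the compress-then-model pipeline on synthetic spectra, using PyWavelets and scikit-learn; the spectral shapes, sample split, and network size are placeholders, not the study's settings.

```python
# db8 three-level wavelet compression of spectra, then a small neural network.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_samples, n_points = 30, 512
oil_content = rng.uniform(2, 12, n_samples)      # hypothetical target values
spectra = (oil_content[:, None] * np.exp(-((np.arange(n_points) - 256) / 80) ** 2)
           + rng.normal(0, 0.05, (n_samples, n_points)))

# keep only the level-3 approximation coefficients as the compressed input
features = np.array([pywt.wavedec(s, "db8", level=3)[0] for s in spectra])
print(spectra.shape, "->", features.shape)       # far fewer inputs per sample

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(features[:20], oil_content[:20])       # 20 for training, 10 for testing
print(model.score(features[20:], oil_content[20:]))
```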

  9. Smsynth: AN Imagery Synthesis System for Soil Moisture Retrieval

    NASA Astrophysics Data System (ADS)

    Cao, Y.; Xu, L.; Peng, J.

    2018-04-01

    Soil moisture (SM) is an important variable in various research areas, such as weather and climate forecasting, agriculture, drought and flood monitoring and prediction, and human health. An ongoing challenge in estimating SM via synthetic aperture radar (SAR) is the development of SM retrieval methods; in particular, empirical models need as training samples a large number of measurements of SM and soil roughness parameters, which are very difficult to acquire. It is therefore difficult to develop empirical models from real SAR imagery alone, and methods for synthesizing SAR imagery are needed. To tackle this issue, a SAR imagery synthesis system based on SM, named SMSynth, is presented, which can simulate radar signals that are as realistic as possible relative to real SAR imagery. In SMSynth, SAR backscatter coefficients for each soil type are simulated via the Oh model under a Bayesian framework in which the spatial correlation is modeled by a Markov random field (MRF) model. The backscattering coefficients, simulated from the designed soil and sensor parameters, enter the Bayesian framework through the data likelihood; the soil and sensor parameters are set as close as possible to conditions on the ground and within the validity range of the Oh model. In this way, a complete and coherent Bayesian probabilistic framework is established. Experimental results show that SMSynth is capable of generating realistic SAR images that meet the need for large numbers of training samples for empirical models.

  10. Random field assessment of nanoscopic inhomogeneity of bone

    PubMed Central

    Dong, X. Neil; Luo, Qing; Sparkman, Daniel M.; Millwater, Harry R.; Wang, Xiaodu

    2010-01-01

    Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity consist of descriptive statistics such as mean, standard deviation and coefficient of variation. However, these parameters do not describe the spatial variations of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at ultrastructural levels. To do so, a random field defined by an exponential covariance function was used to represent the spatial uncertainty of elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated using simulation and compared with those obtained experimentally by a combination of atomic force microscopy and nanoindentation techniques. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, thus validating the random field approach in defining the inhomogeneity of elastic modulus in lamellae of bone. Indeed, generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic detail. PMID:20817128
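
    A sketch of sampling such a field in one dimension (invented parameter values): the exponential covariance C(d) = sigma^2 exp(-d/L) is assembled and factorized, and each draw is one spatial realization of the modulus.

```python
# 1-D random field with exponential covariance, sampled by Cholesky factorization;
# the correlation length L controls how fast the modulus fluctuates in space.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 200)                  # positions across a lamella (um)
sigma, corr_len, mean_E = 2.0, 1.5, 25.0         # illustrative GPa-scale values

d = np.abs(x[:, None] - x[None, :])              # pairwise distances
cov = sigma ** 2 * np.exp(-d / corr_len)         # exponential covariance matrix
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))  # jitter for stability
modulus = mean_E + L @ rng.standard_normal(len(x))    # one field realization
print(modulus[:5])
```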

  11. The method of micro-motion cycle feature extraction based on confidence coefficient evaluation criteria

    NASA Astrophysics Data System (ADS)

    Tang, Chuanzi; Ren, Hongmei; Bo, Li; Jing, Huang

    2017-11-01

    In radar target recognition, the micro-motion characteristics of a target are among the features studied by researchers worldwide, and the target's precession period is one of the most important of these movement characteristics. Periodic feature extraction methods have been studied for years; the complex shape of the target and the stacking of scattering centers lead to random fluctuations of the RCS. These random fluctuations also exhibit a certain periodicity, which has a great influence on target recognition results. To solve this problem, this paper proposes an extraction method for micro-motion cycle features based on confidence coefficient evaluation criteria.

  12. A statistical methodology for estimating transport parameters: Theory and applications to one-dimensional advective-dispersive systems

    USGS Publications Warehouse

    Wagner, Brian J.; Gorelick, Steven M.

    1986-01-01

    A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for the dispersion coefficient and the zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.

  13. Dose-Dependent Effects of Statins for Patients with Aneurysmal Subarachnoid Hemorrhage: Meta-Regression Analysis.

    PubMed

    To, Minh-Son; Prakash, Shivesh; Poonnoose, Santosh I; Bihari, Shailesh

    2018-05-01

    The study uses meta-regression analysis to quantify the dose-dependent effects of statin pharmacotherapy on vasospasm, delayed ischemic neurologic deficits (DIND), and mortality in aneurysmal subarachnoid hemorrhage. Prospective and retrospective observational studies and randomized controlled trials (RCTs) were retrieved by a systematic database search. Summary estimates were expressed as absolute risk (AR) for a given statin dose or control (placebo). Meta-regression using inverse variance weighting and robust variance estimation was performed to assess the effect of statin dose on transformed AR in a random effects model. Dose-dependence of predicted AR with 95% confidence interval (CI) was recovered by using Miller's Freeman-Tukey inverse. The database search and study selection criteria yielded 18 studies (2594 patients) for analysis. These included 12 RCTs, 4 retrospective observational studies, and 2 prospective observational studies. Twelve studies investigated simvastatin, whereas the remaining studies investigated atorvastatin, pravastatin, or pitavastatin, with simvastatin-equivalent doses ranging from 20 to 80 mg. Meta-regression revealed dose-dependent reductions in the Freeman-Tukey-transformed AR of vasospasm (slope coefficient -0.00404, 95% CI -0.00720 to -0.00087; P = 0.0321), DIND (slope coefficient -0.00316, 95% CI -0.00586 to -0.00047; P = 0.0392), and mortality (slope coefficient -0.00345, 95% CI -0.00623 to -0.00067; P = 0.0352). The present meta-regression provides weak evidence for dose-dependent reductions in vasospasm, DIND, and mortality associated with acute statin use after aneurysmal subarachnoid hemorrhage. However, the analysis was limited by substantial heterogeneity among individual studies. Higher-dose strategies are a potential consideration for future RCTs. Copyright © 2018 Elsevier Inc. All rights reserved.
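
    A simplified sketch of the core computation (invented study-level data; the paper's robust variance estimation and random-effects step are omitted): transform per-arm absolute risks with the Freeman-Tukey double arcsine and regress on dose with inverse-variance weights.

```python
# Dose-response meta-regression on Freeman-Tukey-transformed absolute risks.
import numpy as np
import statsmodels.api as sm

events = np.array([12, 9, 15, 6, 10])            # hypothetical vasospasm counts
n = np.array([40, 35, 60, 30, 45])               # arm sizes
dose = np.array([20, 40, 40, 80, 80])            # simvastatin-equivalent mg

# Freeman-Tukey double-arcsine transform; its variance is ~1/(n + 0.5)
ft = (np.arcsin(np.sqrt(events / (n + 1)))
      + np.arcsin(np.sqrt((events + 1) / (n + 1))))
weights = n + 0.5                                # inverse-variance weights

fit = sm.WLS(ft, sm.add_constant(dose), weights=weights).fit()
print(fit.params[1])                             # slope: change in transformed AR per mg
```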

  14. A novel prediction approach for antimalarial activities of Trimethoprim, Pyrimethamine, and Cycloguanil analogues using extremely randomized trees.

    PubMed

    Nattee, Cholwich; Khamsemanan, Nirattaya; Lawtrakul, Luckhana; Toochinda, Pisanu; Hannongbua, Supa

    2017-01-01

    Malaria is still one of the most serious diseases in tropical regions. This is due in part to the high resistance against available drugs for the inhibition of parasites, Plasmodium, the cause of the disease. New potent compounds with high clinical utility are urgently needed. In this work, we created a novel model using a regression tree to study structure-activity relationships and predict the inhibition constant, Ki, of three different antimalarial analogues (Trimethoprim, Pyrimethamine, and Cycloguanil) based on their molecular descriptors. To the best of our knowledge, this work is the first attempt to study the structure-activity relationships of all three analogues combined. The most relevant descriptors and appropriate parameters of the regression tree are harvested using extremely randomized trees. These descriptors are water accessible surface area, log of the aqueous solubility, total hydrophobic van der Waals surface area, and molecular refractivity. Out of all possible combinations of these selected parameters and descriptors, the tree with the strongest coefficient of determination is selected to be our prediction model. Predicted Ki values from the proposed model show a strong coefficient of determination, R^2 = 0.996, with experimental Ki values. From the structure of the regression tree, compounds with a high accessible surface area of all hydrophobic atoms (ASA_H) and low aqueous solubility (Log S) generally possess low Ki values. Our prediction model can also be utilized as a screening test for new antimalarial drug compounds, which may reduce the time and expense of new drug development. New compounds with high predicted Ki should be excluded from further drug development. It is also our inference that a threshold of ASA_H greater than 575.80 and Log S less than or equal to -4.36 is a sufficient condition for a new compound to possess a low Ki. Copyright © 2016 Elsevier Inc. All rights reserved.
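
    A sketch of the two-stage idea (rank descriptors with extremely randomized trees, then fit a single regression tree on the top ones) with scikit-learn; the descriptor matrix and response are synthetic placeholders, not the paper's compounds.

```python
# Extremely randomized trees for descriptor selection, then one regression tree.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 12))                    # 12 hypothetical descriptors
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.2, 60)   # log-Ki surrogate

forest = ExtraTreesRegressor(n_estimators=500, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[::-1][:4]
print("selected descriptors:", top)              # should recover columns 0 and 3

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X[:, top], y)
print("R^2 of the final tree:", tree.score(X[:, top], y))
```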

  15. Effect of Items Direction (Positive or Negative) on the Reliability in Likert Scale. Paper-11

    ERIC Educational Resources Information Center

    Gul, Showkeen Bilal Ahmad; Qasem, Mamun Ali Naji; Bhat, Mehraj Ahmad

    2015-01-01

    In this paper an attempt was made to analyze the effect of item direction (positive or negative) on the Cronbach alpha reliability coefficient and the split-half reliability coefficient in a Likert scale. The descriptive survey research method was used for the study, and a sample of 510 undergraduate students was selected by random sampling…
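
    For reference, both coefficients named above are short computations. A sketch on synthetic Likert responses (the data and scale length are invented):

```python
# Cronbach's alpha and a Spearman-Brown-corrected split-half coefficient.
import numpy as np

def cronbach_alpha(items):                       # items: respondents x items
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def split_half(items):
    odd, even = items[:, ::2].sum(1), items[:, 1::2].sum(1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)                       # Spearman-Brown correction

rng = np.random.default_rng(5)
trait = rng.normal(size=510)                     # latent attitude per respondent
items = np.clip(np.round(3 + trait[:, None] + rng.normal(0, 1, (510, 10))), 1, 5)
print(cronbach_alpha(items), split_half(items))
```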

  16. Inelastic collapse and near-wall localization of randomly accelerated particles.

    PubMed

    Belan, S; Chernykh, A; Lebedev, V; Falkovich, G

    2016-05-01

    Inelastic collapse of stochastic trajectories of a randomly accelerated particle moving in the half-space z > 0 was discovered by McKean [J. Math. Kyoto Univ. 2, 227 (1963)] and then independently rediscovered by Cornell et al. [Phys. Rev. Lett. 81, 1142 (1998)]. The essence of this phenomenon is that the particle arrives at the wall at z = 0 with zero velocity after an infinite number of inelastic collisions if the restitution coefficient β of particle velocity is smaller than the critical value β_c = exp(-π/√3). We demonstrate that inelastic collapse takes place also in a wide class of models with spatially inhomogeneous random forcing and, what is more, that the critical value β_c is universal. That class includes an important case of inertial particles in wall-bounded random flows. To establish how inelastic collapse influences the particle distribution, we derive the exact equilibrium probability density function ρ(z,v) for the particle position and velocity. The equilibrium distribution exists only at β < β_c and indicates that inelastic collapse does not necessarily imply near-wall localization.
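
    A crude Euler-Maruyama sketch of the setup (a discretized walk cannot reproduce the true infinite-collision collapse, but the qualitative trend of many more wall collisions at small β is visible):

```python
# Randomly accelerated particle in z > 0 with inelastic collisions (v -> -beta*v
# at z = 0); beta_c = exp(-pi/sqrt(3)) ~ 0.163 is the critical restitution value.
import numpy as np

def count_collisions(beta, t_max=200.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    z, v, hits = 1.0, 0.0, 0
    for _ in range(int(t_max / dt)):
        v += np.sqrt(dt) * rng.standard_normal()  # white-noise acceleration
        z += v * dt
        if z < 0:                                 # inelastic wall collision
            z, v = 0.0, -beta * v
            hits += 1
    return hits

beta_c = np.exp(-np.pi / np.sqrt(3))
for beta in (0.5 * beta_c, 2.0 * beta_c):
    print(beta, count_collisions(beta))           # far more hits at the smaller beta
```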

  17. Statistical inference for Hardy-Weinberg proportions in the presence of missing genotype information.

    PubMed

    Graffelman, Jan; Sánchez, Milagros; Cook, Samantha; Moreno, Victor

    2013-01-01

    In genetic association studies, tests for Hardy-Weinberg proportions are often employed as a quality control checking procedure. Missing genotypes are typically discarded prior to testing. In this paper we show that inference for Hardy-Weinberg proportions can be biased when missing values are discarded. We propose to use multiple imputation of missing values in order to improve inference for Hardy-Weinberg proportions. For imputation we employ a multinomial logit model that uses information from allele intensities and/or neighbouring markers. Analysis of an empirical data set of single nucleotide polymorphisms possibly related to colon cancer reveals that missing genotypes are not missing completely at random. Deviation from Hardy-Weinberg proportions is mostly due to a lack of heterozygotes. Inbreeding coefficients estimated by multiple imputation of the missing values are typically lower than inbreeding coefficients estimated by discarding the missing values. Accounting for missing values by multiple imputation qualitatively changed the results of 10 to 17% of the statistical tests performed. Estimates of inbreeding coefficients obtained by multiple imputation showed high correlation with estimates obtained by single imputation using an external reference panel. Our conclusion is that imputation of missing data leads to improved statistical inference for Hardy-Weinberg proportions.
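
    For context, the quality-control test in question is a one-degree-of-freedom chi-square comparison of observed genotype counts against Hardy-Weinberg expectations; a small sketch with invented counts:

```python
# Chi-square test of Hardy-Weinberg proportions for one biallelic marker.
import numpy as np
from scipy.stats import chi2

n_AA, n_AB, n_BB = 360, 120, 20                  # observed genotype counts
n = n_AA + n_AB + n_BB
p = (2 * n_AA + n_AB) / (2 * n)                  # allele A frequency

expected = n * np.array([p**2, 2*p*(1-p), (1-p)**2])
observed = np.array([n_AA, n_AB, n_BB])
stat = ((observed - expected) ** 2 / expected).sum()
print("p-value:", chi2.sf(stat, df=1))

f_hat = 1 - n_AB / expected[1]                   # inbreeding-coefficient estimate
print("inbreeding coefficient:", f_hat)          # > 0 signals heterozygote deficit
```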

  18. Hierarchical coefficient of a multifractal based network

    NASA Astrophysics Data System (ADS)

    Moreira, Darlan A.; Lucena, Liacir dos Santos; Corso, Gilberto

    2014-02-01

    The hierarchical property for a general class of networks stands for a power-law relation between the clustering coefficient CC and the connectivity k: CC ∝ k^β. This relation is empirically verified in several biological and social networks, as well as in random and deterministic network models, especially hierarchical networks. In this work we show that the hierarchical property is also present in a Lucena network. To create a Lucena network we use the dual of a multifractal lattice ML: the vertices are the sites of the ML, and links are established between neighbouring sites; therefore this network is space-filling and planar. In addition, a Lucena network shows a scale-free distribution of connectivity. We deduce a relation for the maximal local clustering coefficient CC_i^max of a vertex i in a planar graph. This condition expresses that the number of links among the neighbours of a vertex i, N_△, is equal to its connectivity k_i, that is: N_△ = k_i. The Lucena network fulfils the condition N_△ ≃ k_i independently of k_i and of the anisotropy of the ML. In addition, CC_max implies the threshold β = 1 for the hierarchical property of any scale-free planar network.
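
    A sketch of checking the hierarchical relation CC(k) ∝ k^β numerically; a Holme-Kim power-law cluster graph stands in here for a Lucena network, so the fitted exponent is illustrative only.

```python
# Average local clustering per degree, then a log-log power-law fit.
import numpy as np
import networkx as nx

G = nx.powerlaw_cluster_graph(3000, m=3, p=0.5, seed=0)
cc = nx.clustering(G)

by_degree = {}
for node, k in G.degree():
    if k >= 2:                                   # CC is trivial for k < 2
        by_degree.setdefault(k, []).append(cc[node])
ks = np.array(sorted(by_degree), dtype=float)
mean_cc = np.array([np.mean(by_degree[int(k)]) for k in ks])

mask = mean_cc > 0                               # avoid log(0) in sparse bins
beta = np.polyfit(np.log(ks[mask]), np.log(mean_cc[mask]), 1)[0]
print("fitted hierarchical exponent beta =", round(beta, 2))
```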

  19. A Statistical Method for Synthesizing Mediation Analyses Using the Product of Coefficient Approach Across Multiple Trials

    PubMed Central

    Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks

    2016-01-01

    Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial level variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. Values of coefficients a and b, along with their standard errors from each trial are the input for the method. This marginal likelihood based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples and make recommendations for the use of the method in different settings. PMID:28239330
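
    A sketch of the final step described above, under the simplifying assumption of zero covariance between the combined paths (the paper models their covariance via a bivariate normal): draw a and b, form the product, and read off percentile limits.

```python
# Monte Carlo confidence interval for the mediated effect a*b; numbers invented.
import numpy as np

a_hat, se_a = 0.40, 0.10                         # combined path a (X -> M)
b_hat, se_b = 0.25, 0.08                         # combined path b (M -> Y)

rng = np.random.default_rng(6)
draws = rng.normal(a_hat, se_a, 100_000) * rng.normal(b_hat, se_b, 100_000)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"mediated effect = {a_hat * b_hat:.3f}, 95% MC CI = ({lo:.3f}, {hi:.3f})")
```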

  20. The reliability of the Australasian Triage Scale: a meta-analysis

    PubMed Central

    Ebrahimi, Mohsen; Heydari, Abbas; Mazlom, Reza; Mirhaghi, Amir

    2015-01-01

    BACKGROUND: Although the Australasian Triage Scale (ATS) was developed two decades ago, its reliability has not been defined; therefore, we present a meta-analysis of the reliability of the ATS in order to reveal to what extent the ATS is reliable. DATA SOURCES: Electronic databases were searched to March 2014. The included studies were those that reported sample size, reliability coefficients, and an adequate description of the ATS reliability assessment. The guidelines for reporting reliability and agreement studies (GRRAS) were used. Two reviewers independently examined abstracts and extracted data. The effect size was obtained by the z-transformation of reliability coefficients. Data were pooled with random-effects models, and meta-regression was done based on the method-of-moments estimator. RESULTS: Six studies were ultimately included. The pooled coefficient for the ATS was substantial at 0.428 (95% CI 0.340–0.509). The rate of mis-triage was less than fifty percent. Agreement on the adult version is higher than on the pediatric version. CONCLUSION: The ATS has shown an acceptable level of overall reliability in the emergency department, but it needs more development to reach an almost perfect agreement. PMID:26056538
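
    A sketch of the pooling recipe described above (z-transform, method-of-moments between-study variance, back-transform); the coefficients and sample sizes are invented, not the six included studies.

```python
# Random-effects pooling of reliability coefficients via the Fisher z-transform.
import numpy as np

r = np.array([0.35, 0.50, 0.42, 0.38, 0.55, 0.30])   # per-study coefficients
n = np.array([120, 80, 200, 150, 60, 90])            # per-study sample sizes

z = np.arctanh(r)                                # Fisher z-transform
v = 1.0 / (n - 3)                                # approximate within-study variance

w = 1.0 / v                                      # fixed-effect weights
q = np.sum(w * (z - np.sum(w * z) / w.sum()) ** 2)
tau2 = max(0.0, (q - (len(z) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

w_re = 1.0 / (v + tau2)                          # random-effects weights
z_bar = np.sum(w_re * z) / w_re.sum()
se = np.sqrt(1.0 / w_re.sum())
print(np.tanh([z_bar - 1.96 * se, z_bar, z_bar + 1.96 * se]))  # (lo, est, hi)
```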

  1. 'Chain pooling' model selection as developed for the statistical analysis of a rotor burst protection experiment

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1977-01-01

    A statistical decision procedure called chain pooling had been developed for model selection in fitting the results of a two-level fixed-effects full or fractional factorial experiment not having replication. The basic strategy included the use of one nominal level of significance for a preliminary test and a second nominal level of significance for the final test. The subject has been reexamined from the point of view of using as many as three successive statistical model deletion procedures in fitting the results of a single experiment. The investigation consisted of random number studies intended to simulate the results of a proposed aircraft turbine-engine rotor-burst-protection experiment. As a conservative approach, population model coefficients were chosen to represent a saturated 2^4 experiment with a distribution of parameter values unfavorable to the decision procedures. Three model selection strategies were developed.

  2. Performance of the likelihood ratio difference (G2 Diff) test for detecting unidimensionality in applications of the multidimensional Rasch model.

    PubMed

    Harrell-Williams, Leigh; Wolfe, Edward W

    2014-01-01

    Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.

  3. Intergranular strain evolution in a Zircaloy-4 alloy with Widmanstätten microstructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clausen, Bjorn; Vogel, Sven C; Garlea, Eena

    2009-01-01

    A Zircaloy-4 alloy with Widmanstätten-Basketweave microstructure and random texture has been used to study the deformation systems responsible for the polycrystalline plasticity at the grain level. The evolution of internal strain and bulk texture is investigated using neutron diffraction and an elasto-plastic self-consistent (EPSC) modeling scheme. The macroscopic stress-strain behavior and intergranular (hkil-specific) strain development, parallel and perpendicular to the loading direction, were measured in-situ during uniaxial tensile loading. Then, the EPSC model was employed to simulate the experimental results. This modeling scheme accounts for the thermal anisotropy; elastic-plastic properties of the constituent grains; and activation, reorientation, and stress relaxation associated with twinning. The agreement between the experiment and the model will be discussed, as well as the critical resolved shear stresses (CRSS) and the hardening coefficients obtained from the model.

  4. Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach

    ERIC Educational Resources Information Center

    Rotondi, Michael A.; Donner, Allan

    2009-01-01

    The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
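
    For readers unfamiliar with the calculation the intraclass correlation coefficient feeds into, a small sketch of the standard design-effect inflation (illustrative numbers only, not the authors' simulation-based method):

```python
# Cluster-trial sample size via the design effect 1 + (m - 1) * ICC.
import math

def clusters_per_arm(n_individual, cluster_size, icc):
    deff = 1 + (cluster_size - 1) * icc          # variance inflation factor
    n_total = n_individual * deff                # inflated per-arm sample size
    return math.ceil(n_total / cluster_size)     # clusters needed per arm

# e.g. 250 students per arm under individual randomization,
# classrooms of 25, ICC = 0.10 (a value typical of educational outcomes)
print(clusters_per_arm(250, 25, 0.10), "classrooms per arm")
```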

  5. Hájek-Rényi inequality for m-asymptotically almost negatively associated random vectors in Hilbert space and applications.

    PubMed

    Ko, Mi-Hwa

    2018-01-01

    In this paper, we obtain the Hájek-Rényi inequality and, as an application, we study the strong law of large numbers for H-valued m-asymptotically almost negatively associated random vectors with mixing coefficients [Formula: see text] such that [Formula: see text].

  6. The genetic consequences of selection in natural populations.

    PubMed

    Thurman, Timothy J; Barrett, Rowan D H

    2016-04-01

    The selection coefficient, s, quantifies the strength of selection acting on a genetic variant. Despite this parameter's central importance to population genetic models, until recently we have known relatively little about the value of s in natural populations. With the development of molecular genetic techniques in the late 20th century and the sequencing technologies that followed, biologists are now able to identify genetic variants and directly relate them to organismal fitness. We reviewed the literature for published estimates of natural selection acting at the genetic level and found over 3000 estimates of selection coefficients from 79 studies. Selection coefficients were roughly exponentially distributed, suggesting that the impact of selection at the genetic level is generally weak but can occasionally be quite strong. We used both nonparametric statistics and formal random-effects meta-analysis to determine how selection varies across biological and methodological categories. Selection was stronger when measured over shorter timescales, with the mean magnitude of s greatest for studies that measured selection within a single generation. Our analyses found conflicting trends when considering how selection varies with the genetic scale (e.g., SNPs or haplotypes) at which it is measured, suggesting a need for further research. Besides these quantitative conclusions, we highlight key issues in the calculation, interpretation, and reporting of selection coefficients and provide recommendations for future research. © 2016 John Wiley & Sons Ltd.

  7. Stratification strategy for evaluating the influence of diabetes complication severity index on the risk of hospitalization: a record linkage data in Western Australia.

    PubMed

    Ha, Ninh Thi; Harris, Mark; Robinson, Suzanne; Preen, David; Moorin, Rachael

    2017-07-01

    This study aimed to develop a risk stratification strategy for evaluating the relationship between complications of diabetes and the risk of diabetes-related hospitalization, to accurately classify diabetes severity. The study used administrative health records for 40,624 individuals with diabetes aged ≥18 years in Western Australia. The adapted Diabetes Complication Severity Index (DCSI) and socio-demographic and clinical characteristics were used in random effects negative binomial and threshold effect models to determine the optimal stratification strategy for diabetes severity based on the homogeneity of the risk of hospitalization in response to variation of the DCSI. The optimal stratification of people with diabetes was specified by four sub-populations. The first sub-population, those with no complications, showed an inverse association with the risk of hospitalization (coefficient -0.247, SE 0.03). Three further sub-populations with DCSI at one (coefficient 0.289, SE 0.01), two (coefficient 0.339, SE 0.01) and three or more (coefficient 0.381, SE 0.01) were used to accurately describe the impact of the DCSI on the risk of hospitalization. A stratification into four sub-populations based on the homogeneous impact of the DCSI on the risk of hospitalization may be more suitable for evaluating health care interventions and planning health care provision. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Improving the prospects of cleavage-based nanopore sequencing engines

    NASA Astrophysics Data System (ADS)

    Brady, Kyle T.; Reiner, Joseph E.

    2015-08-01

    Recently proposed methods for DNA sequencing involve the use of cleavage-based enzymes attached to the opening of a nanopore. The idea is that DNA interacting with either an exonuclease or polymerase protein will lead to a small molecule being cleaved near the mouth of the nanopore, and subsequent entry into the pore will yield information about the DNA sequence. The prospects for this approach seem promising, but it has been shown that diffusion related effects impose a limit on the capture probability of molecules by the pore, which limits the efficacy of the technique. Here, we revisit the problem with the goal of optimizing the capture probability via a step decrease in the nucleotide diffusion coefficient between the pore and bulk solutions. It is shown through random walk simulations and a simplified analytical model that decreasing the molecule's diffusion coefficient in the bulk relative to its value in the pore increases the nucleotide capture probability. Specifically, we show that at sufficiently high applied transmembrane potentials (≥100 mV), increasing the potential by a factor f is equivalent to decreasing the diffusion coefficient ratio Dbulk/Dpore by the same factor f. This suggests a promising route toward implementation of cleavage-based sequencing protocols. We also discuss the feasibility of forming a step function in the diffusion coefficient across the pore-bulk interface.
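
    A toy 1-D random-walk sketch of the capture effect (not the paper's simulation, which treats the full geometry and interface physics): inside the pore the nucleotide drifts under the applied field; in the bulk it only diffuses, and a smaller bulk diffusion coefficient lowers its chance of wandering away before re-entry.

```python
# Capture fraction of a 1-D walker with a step change in diffusion coefficient
# at the pore mouth (x = 0); D_pore = 1, D_bulk = d_ratio, all units arbitrary.
import numpy as np

def capture_fraction(d_ratio, n_walkers=300, n_steps=4000, dt=1e-3, drift=2.0):
    rng = np.random.default_rng(0)
    captured = 0
    for _ in range(n_walkers):
        x = 0.0                                  # start at the pore mouth
        for _ in range(n_steps):
            d = 1.0 if x > 0 else d_ratio        # step change in diffusivity
            mu = drift if x > 0 else 0.0         # field acts only inside the pore
            x += mu * dt + np.sqrt(2 * d * dt) * rng.standard_normal()
            if x > 1.0:                          # reached the pore bottom
                captured += 1
                break
            if x < -2.0:                         # lost to the bulk
                break
    return captured / n_walkers

for ratio in (1.0, 0.5, 0.1):                    # capture rises as the ratio falls
    print(ratio, capture_fraction(ratio))
```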

  9. A two-level stochastic collocation method for semilinear elliptic equations with random coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Luoping; Zheng, Bin; Lin, Guang

    In this work, we propose a novel two-level discretization for solving semilinear elliptic equations with random coefficients. Motivated by the two-grid method for deterministic partial differential equations (PDEs) introduced by Xu, our two-level stochastic collocation method utilizes a two-grid finite element discretization in the physical space and a two-level collocation method in the random domain. In particular, we solve semilinear equations on a coarse mesh T_H with a low-level stochastic collocation (corresponding to the polynomial space P_P) and solve linearized equations on a fine mesh T_h using high-level stochastic collocation (corresponding to the polynomial space P_p). We prove that the approximate solution obtained from this method achieves the same order of accuracy as that from solving the original semilinear problem directly by the stochastic collocation method with T_h and P_p. The two-level method is computationally more efficient, especially for nonlinear problems with high random dimensions. Numerical experiments are also provided to verify the theoretical results.

  10. Statistical analysis for improving data precision in the SPME GC-MS analysis of blackberry (Rubus ulmifolius Schott) volatiles.

    PubMed

    D'Agostino, M F; Sanz, J; Martínez-Castro, I; Giuffrè, A M; Sicari, V; Soria, A C

    2014-07-01

    Statistical analysis has been used for the first time to evaluate the dispersion of quantitative data in the solid-phase microextraction (SPME) followed by gas chromatography-mass spectrometry (GC-MS) analysis of blackberry (Rubus ulmifolius Schott) volatiles, with the aim of improving their precision. Experimental and randomly simulated data were compared using different statistical parameters (correlation coefficients, Principal Component Analysis loadings and eigenvalues). Non-random factors were shown to significantly contribute to total dispersion; groups of volatile compounds could be associated with these factors. A significant improvement of precision was achieved when considering percent concentration ratios, rather than percent values, among those blackberry volatiles with a similar dispersion behavior. As a novelty over previous references, and to complement this main objective, the presence of non-random dispersion trends in data from simple blackberry model systems was demonstrated. Although the influence of the type of matrix on data precision was proved, a better understanding of the dispersion patterns in real samples could not be obtained from the model systems. The approach used here was validated for the first time through the multicomponent characterization of Italian blackberries from different harvest years. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Effects of inbreeding and other systematic effects on fertility of Black Forest Draught horses in Germany.

    PubMed

    Müller-Unterberg, Maarit; Wallmann, Sandra; Distl, Ottmar

    2017-10-18

    The Black Forest Draught horse (BFDH) is an endangered German coldblood breed with its origin in the area of the Black Forest in South Germany. In this retrospective study, the influence of the inbreeding coefficient on foaling rates was investigated using records from ten breeding seasons. Due to the small population size of BFDH, the level of inbreeding is increasing and may have an effect on foaling rates. The data of the present study included all coverings reported for 1024 BFDH mares in the years 2001-2009. These mares were covered by 32 BFDH stallions from the State Stud Marbach. Data from 4534 estrus cycles were used to calculate the per-cycle foaling rate (CFR). Pedigree data contained all studbook data back to the foundation of the breed as early as 1836. The level of inbreeding of the mare, stallion and expected foal, along with other systematic effects on CFR, was analysed using a generalized linear mixed model approach. Stallion was employed as a random effect. Systematic fixed effects were month of mating, mating type, age of the mare and stallion, reproductive status of the mare and stallion line of the mare. Inbreeding coefficients of the stallion, mare and expected foal were modelled as linear covariates. The average CFR was 40.9%. The mean inbreeding coefficients of the mares, stallions and expected foals were 7.46, 7.70 and 9.66%. Mating type, age of the mare, reproductive status of the mare and stallion line of the mare had a significant effect. The results showed that the mating type, stallion line of the mare, sire, age and reproductive status of the mare exerted the largest influences on CFR in BFDH. Inbreeding coefficients of the stallion, mare and expected foal were not significantly related to CFR.

  12. A simple expression for quantifying bacterial chemotaxis using capillary assay data: application to the analysis of enhanced chemotactic responses from growth-limited cultures.

    PubMed

    Ford, R M; Lauffenburger, D A

    1992-05-01

    An individual cell-based mathematical model of Rivero et al. provides a framework for determining values of the chemotactic sensitivity coefficient χ0, an intrinsic cell population parameter that characterizes the chemotactic response of bacterial populations. This coefficient can theoretically relate the swimming behavior of individual cells to the resulting migration of a bacterial population. When this model is applied to the commonly used capillary assay, an approximate solution can be obtained for a particular range of chemotactic strengths, yielding a very simple analytical expression for estimating the value of χ0, [formula: see text], from measurements of cell accumulation in the capillary, N, when attractant uptake is negligible. A_0 and A_∞ are the dimensionless attractant concentrations initially present at the mouth of the capillary and far into the capillary, respectively, which are scaled by Kd, the effective dissociation constant for receptor-attractant binding. D is the attractant diffusivity, and μ is the cell random motility coefficient. N_RM is the cell accumulation in the capillary in the absence of an attractant gradient, from which μ can be determined independently as μ = (π/4t)(N_RM/(π r^2 b_c))^2, with r the capillary tube radius and b_c the bacterial density initially in the chamber. When attractant uptake is significant, a slightly more involved procedure requiring a simple numerical integration becomes necessary. As an example, we apply this approach to quantitatively characterize, in terms of the chemotactic sensitivity coefficient χ0, data from Terracciano indicating enhanced chemotactic responses of Escherichia coli to galactose when cultured under growth-limiting galactose levels in a chemostat.
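
    A small numeric sketch of the random-motility expression quoted above, with invented assay values (the χ0 formula itself is elided in the record, so only μ is computed):

```python
# mu = (pi / 4t) * (N_RM / (pi * r^2 * b_c))^2, with illustrative assay numbers.
import math

t = 3600.0                  # assay duration (s)
r = 0.01                    # capillary inner radius (cm)
b_c = 1e8                   # cell density in the chamber (cells/cm^3)
N_RM = 2.0e4                # cells accumulated with no attractant gradient

mu = (math.pi / (4 * t)) * (N_RM / (math.pi * r**2 * b_c)) ** 2
print(f"random motility coefficient mu = {mu:.2e} cm^2/s")
```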

  13. Mixture-mixture design for the fingerprint optimization of chromatographic mobile phases and extraction solutions for Camellia sinensis.

    PubMed

    Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S

    2007-07-09

    A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.

  14. Global CO2 flux inversions from remote-sensing data with systematic errors using hierarchical statistical models

    NASA Astrophysics Data System (ADS)

    Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel

    2017-04-01

    The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Column Carbon Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different than others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. We compare it to more classical approaches through an Observing System Simulation Experiment (OSSE) on a global scale. By changing the size of the random and systematic errors in the OSSE, we can determine the corresponding spatial and temporal resolutions at which useful flux signals could be detected from the OCO-2 data.

  15. [Reliability and validity of the modified Perceived Health Competence Scale (PHCS) Japanese version].

    PubMed

    Togari, Taisuke; Yamazaki, Yoshihiko; Koide, Syotaro; Miyata, Ayako

    2006-01-01

    In community and workplace health plans, the Perceived Health Competence Scale (PHCS) is employed as an index of health competency. The purpose of this research was to examine the reliability and validity of a modified Japanese PHCS. Interviews were sought with 3,000 randomly selected Japanese individuals using a two-step stratified method. Valid PHCS responses were obtained from 1,910 individuals, yielding a 63.7% response rate. Reliability was assessed using Cronbach's alpha coefficient (henceforth, alpha) to evaluate internal consistency, and by employing item-total correlation and alpha coefficient analyses to assess the effect of removal of variables from the model. To examine content validity, we assessed the correlation between the PHCS score and four respondent attribute characteristics, that is, sex, age, the presence of chronic disease, and the existence of chronic disease at age 18. The correlation between PHCS score and commonly employed healthy lifestyle indices was examined to assess construct validity. General linear model statistical analysis was employed. The modified Japanese PHCS demonstrated a satisfactory alpha coefficient of 0.869. Moreover, reliability was confirmed by item-total correlation and alpha coefficient analyses after removal of variables from the model. Differences in PHCS scores were seen between individuals 60 years and older and younger individuals. Those with current chronic disease, or who had had a chronic disease at age 18, tended to have lower PHCS scores. After controlling for the presence of current or age-18 chronic disease, age, and sex, significant correlations were seen between PHCS scores and tobacco use, dietary habits, and exercise, but not alcohol use or frequency of medical consultation. This study supports the reliability and validity, and hence the use, of the modified Japanese PHCS. Future longitudinal research is needed to evaluate the predictive power of modified Japanese PHCS scores, to examine factors influencing the development of perceived health competence, and to assess the effects of interventions on perceived health competence.

  16. Prediction of octanol-air partition coefficients for polychlorinated biphenyls (PCBs) using 3D-QSAR models.

    PubMed

    Chen, Ying; Cai, Xiaoyu; Jiang, Long; Li, Yu

    2016-02-01

    Based on the experimental data of octanol-air partition coefficients (KOA) for 19 polychlorinated biphenyl (PCB) congeners, two types of QSAR methods, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA), are used to establish 3D-QSAR models, using the structural parameters as independent variables and the logKOA values as the dependent variable, with the Sybyl software to predict the KOA values of the remaining 190 PCB congeners. The whole data set (19 compounds) was divided into a training set (15 compounds) for model generation and a test set (4 compounds) for model validation. As a result, the cross-validation correlation coefficient (q^2) obtained by the CoMFA and CoMSIA models (shuffled 12 times) was in the range of 0.825-0.969 (>0.5), the correlation coefficient (r^2) obtained was in the range of 0.957-1.000 (>0.9), and the SEP (standard error of prediction) of the test set was within the range of 0.070-0.617, indicating that the models were robust and predictive. Randomly selected from a set of models, CoMFA analysis revealed that the corresponding percentages of the variance explained by steric and electrostatic fields were 23.9% and 76.1%, respectively, while for CoMSIA analysis the steric, electrostatic and hydrophobic fields explained 0.6%, 92.6%, and 6.8%, respectively. The electrostatic field was determined to be the primary factor governing logKOA. The correlation analysis of the relationship between the number of Cl atoms and the average logKOA values of PCBs indicated that logKOA values gradually increase as the number of Cl atoms increases. Simultaneously, related studies on PCB detection in the Arctic and Antarctic areas revealed that higher logKOA values indicate a stronger PCB migration ability. From the CoMFA and CoMSIA contour maps, logKOA decreased when substituents possessed electropositive groups at the 2-, 3-, 3'-, 5- and 6- positions, which could reduce the PCB migration ability. These results are expected to be beneficial in predicting the logKOA values of PCB homologues and derivatives and in providing a theoretical foundation for further elucidation of the global migration behaviour of PCBs. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Molecular Dynamics Simulations of Grain Boundary and Bulk Diffusion in Metals.

    NASA Astrophysics Data System (ADS)

    Plimpton, Steven James

    Diffusion is a microscopic mass transport mechanism that underlies many important macroscopic phenomena affecting the structural, electrical, and mechanical properties of metals. This thesis presents results from atomistic simulation studies of diffusion both in bulk and in the fast diffusion paths known as grain boundaries. Using the principles of molecular dynamics single boundaries are studied and their structure and dynamic properties characterized. In particular, tilt boundary bicrystal and bulk models of fcc Al and bcc alpha-Fe are simulated. Diffusion coefficients and activation energies for atomic motion are calculated for both models and compared to experimental data. The influence of the interatomic pair potential on the diffusion is studied in detail. A universal relation between the melting temperature that a pair potential induces in a simulated bulk model and the potential energy barrier height for atomic hopping is derived and used to correlate results for a wide variety of pair potentials. Using these techniques grain boundary and bulk diffusion coefficients for any fcc material can be estimated from simple static calculations without the need to perform more time-consuming dynamic simulations. The influences of two other factors on grain boundary diffusion are also studied because of the interest of the microelectronics industry in the diffusion related reliability problem known as electromigration. The first factor, known to affect the self diffusion rate of Al, is the presence of Cu impurity atoms in Al tilt boundaries. The bicrystal model for Al is seeded randomly with Cu atoms and a simple hybrid Morse potential used to model the Al-Cu interaction. While some effect due to the Cu is noted, it is concluded that pair potentials are likely an inadequate approximation for the alloy system. The second factor studied is the effect of the boundary orientation angle on the diffusion rate. Symmetric bcc Fe boundaries are relaxed to find optimal structures and their diffusion coefficients calculated. Good agreement is found with the dislocation pipe model for tilt boundary diffusion.
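
    As a pointer to how such diffusion coefficients are typically reduced to an activation energy, a short Arrhenius-fit sketch with invented D(T) values (not thesis data):

```python
# Activation energy from the Arrhenius relation D = D0 * exp(-Ea / (kB * T)),
# by a linear fit of ln(D) against 1/T.
import numpy as np

kB = 8.617e-5                                    # Boltzmann constant (eV/K)
T = np.array([700.0, 800.0, 900.0, 1000.0])      # simulation temperatures (K)
D = np.array([2.1e-9, 2.4e-8, 1.6e-7, 7.0e-7])   # diffusion coefficients (cm^2/s)

slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
print(f"Ea = {-slope * kB:.2f} eV, D0 = {np.exp(intercept):.2e} cm^2/s")
```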

  18. Prediction of Short-Distance Aerial Movement of Phakopsora pachyrhizi Urediniospores Using Machine Learning.

    PubMed

    Wen, L; Bowen, C R; Hartman, G L

    2017-10-01

    Dispersal of urediniospores by wind is the primary means of spread for Phakopsora pachyrhizi, the cause of soybean rust. Our research focused on the short-distance movement of urediniospores from within the soybean canopy and up to 61 m from field-grown rust-infected soybean plants. Environmental variables were used to develop and compare models including the least absolute shrinkage and selection operator regression, zero-inflated Poisson/regular Poisson regression, random forest, and neural network to describe deposition of urediniospores collected in passive and active traps. All four models identified distance of trap from source, humidity, temperature, wind direction, and wind speed as the five most important variables influencing short-distance movement of urediniospores. The random forest model provided the best predictions, explaining 76.1 and 86.8% of the total variation in the passive- and active-trap datasets, respectively. The prediction accuracies, based on the correlation coefficient (r) between predicted and true values, were 0.83 (P < 0.0001) and 0.94 (P < 0.0001) for the passive- and active-trap datasets, respectively. Overall, multiple machine learning techniques identified the most important variables and yielded accurate predictions of short-distance movement of P. pachyrhizi urediniospores.
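
    A hedged sketch of a random-forest workflow of the kind described, on synthetic data: the feature names mirror the five predictors reported as most important, but the data-generating process and all parameter choices here are purely illustrative.

    ```python
    # Hedged sketch: random-forest regression with correlation-based accuracy
    # and variable importances, in the spirit of the study (synthetic data).
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n = 500
    X = np.column_stack([
        rng.uniform(0, 61, n),      # distance from source (m)
        rng.uniform(20, 100, n),    # relative humidity (%)
        rng.uniform(10, 35, n),     # temperature (C)
        rng.uniform(0, 360, n),     # wind direction (deg)
        rng.uniform(0, 10, n),      # wind speed (m/s)
    ])
    # invented deposition response: decays with distance, grows with wind speed
    y = 100 * np.exp(-X[:, 0] / 20) * (1 + 0.05 * X[:, 4]) + rng.normal(0, 2, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    r, p = pearsonr(y_te, rf.predict(X_te))
    print("r between predicted and observed:", round(r, 3))
    print("variable importances:", rf.feature_importances_.round(3))
    ```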

  19. Multilevel structural equation models for assessing moderation within and across levels of analysis.

    PubMed

    Preacher, Kristopher J; Zhang, Zhen; Zyphur, Michael J

    2016-06-01

    Social scientists are increasingly interested in multilevel hypotheses, data, and statistical models as well as moderation or interactions among predictors. The result is a focus on hypotheses and tests of multilevel moderation within and across levels of analysis. Unfortunately, existing approaches to multilevel moderation have a variety of shortcomings, including conflated effects across levels of analysis and bias due to using observed cluster averages instead of latent variables (i.e., "random intercepts") to represent higher-level constructs. To overcome these problems and elucidate the nature of multilevel moderation effects, we introduce a multilevel structural equation modeling (MSEM) logic that clarifies the nature of the problems with existing practices and remedies them with latent variable interactions. This remedy uses random coefficients and/or latent moderated structural equations (LMS) for unbiased tests of multilevel moderation. We describe our approach and provide an example using the publicly available High School and Beyond data, with Mplus syntax in the Appendix. Our MSEM method eliminates problems of conflated multilevel effects and reduces bias in parameter estimates while offering a coherent framework for conceptualizing and testing multilevel moderation effects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
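
    The MSEM/LMS machinery the abstract describes is implemented in Mplus and has no drop-in Python equivalent. As a rough analogue only, the sketch below fits a two-level random-coefficient model (random intercepts and slopes) with statsmodels; the column names (y, x, cluster) and data are hypothetical.

    ```python
    # Hedged sketch: a random-intercept + random-slope ("random coefficients")
    # mixed model, the building block that multilevel moderation tests act on.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    clusters = np.repeat(np.arange(30), 20)         # 30 clusters, 20 obs each
    u0 = rng.normal(0, 1.0, 30)[clusters]           # latent random intercepts
    u1 = rng.normal(0, 0.5, 30)[clusters]           # latent random slopes
    x = rng.normal(size=clusters.size)
    y = 1.0 + u0 + (0.8 + u1) * x + rng.normal(0, 1, clusters.size)
    df = pd.DataFrame({"y": y, "x": x, "cluster": clusters})

    # re_formula="~x" gives each cluster its own intercept and slope; the slope
    # variance is what a cross-level moderator would try to explain.
    fit = smf.mixedlm("y ~ x", df, groups=df["cluster"], re_formula="~x").fit()
    print(fit.summary())
    ```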

  20. Simulation of Oil Slick Transport in Great Lakes Connecting Channels. Volume 1. Theory and Model Formulation

    DTIC Science & Technology

    1986-03-01

    ...attempted to analyze in detail the hydrodynamic problem defined above (Kerr and Babu, 1970; DePietio and Cox, 1979; and Foda and Cox, 1980). ... coefficient E_T can be related to the magnitude of V' by the random walk analysis (Fischer, et al., 1979), where V' = (4E_T/δt)^{1/2} (Eq. 27), in which δt = time ... Water Management and Planning Branch. Foda, M. and R.G. Cox (1980). "The spreading of thin liquid films on a water-air interface," Journal of Fluid...

  1. Data-driven parameterization of the generalized Langevin equation

    DOE PAGES

    Lei, Huan; Baker, Nathan A.; Li, Xiantao

    2016-11-29

    We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grain variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.

  2. Physics of ultra-high bioproductivity in algal photobioreactors

    NASA Astrophysics Data System (ADS)

    Greenwald, Efrat; Gordon, Jeffrey M.; Zarmi, Yair

    2012-04-01

    Cultivating algae at high densities in thin photobioreactors engenders time scales for random cell motion that approach photosynthetic rate-limiting time scales. This synchronization allows bioproductivity above that achieved with conventional strategies. We show that a diffusion model for cell motion (1) accounts for high bioproductivity at irradiance values previously deemed restricted by photoinhibition, (2) predicts the existence of optimal culture densities and their dependence on irradiance, consistent with available data, (3) accounts for the observed degree to which mixing improves bioproductivity, and (4) provides an estimate of effective cell diffusion coefficients, in accord with independent hydrodynamic estimates.

  3. Freak waves in random oceanic sea states.

    PubMed

    Onorato, M; Osborne, A R; Serio, M; Bertone, S

    2001-06-18

    Freak waves are very large, rare events in a random ocean wave train. Here we study their generation in a random sea state characterized by the Joint North Sea Wave Project spectrum. We assume, to cubic order in nonlinearity, that the wave dynamics are governed by the nonlinear Schrödinger (NLS) equation. We show from extensive numerical simulations of the NLS equation how freak waves in a random sea state are more likely to occur for large values of the Phillips parameter alpha and the enhancement coefficient gamma. Comparison with linear simulations is also reported.
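
    The Phillips parameter alpha and the enhancement coefficient gamma enter through the JONSWAP spectrum that defines the random sea state. Below is a minimal sketch of the standard JONSWAP spectral density, the usual starting point for initializing NLS simulations of this kind; the parameter values chosen are illustrative only.

    ```python
    # Hedged sketch: standard JONSWAP spectral density S(omega). Larger alpha
    # and gamma give a more energetic, more peaked spectrum, which strengthens
    # the modulational instability in the NLS dynamics (hence more freak waves).
    import numpy as np

    def jonswap(omega, alpha=0.01, gamma=3.3, omega_p=1.0, g=9.81):
        """JONSWAP spectrum: Phillips-type tail times peak-enhancement factor."""
        sigma = np.where(omega <= omega_p, 0.07, 0.09)
        peak_factor = gamma ** np.exp(-((omega - omega_p) ** 2)
                                      / (2 * sigma**2 * omega_p**2))
        return (alpha * g**2 / omega**5
                * np.exp(-1.25 * (omega_p / omega) ** 4) * peak_factor)

    omega = np.linspace(0.3, 3.0, 500)
    S = jonswap(omega)
    print("spectral peak located near omega_p:", omega[np.argmax(S)])
    ```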

  4. Predicting Daily Insolation with Hourly Cloud Height and Coverage.

    NASA Astrophysics Data System (ADS)

    Meyers, T. P.; Dale, R. F.

    1983-04-01

    Solar radiation information is used in crop growth, boundary layer, entomological and plant pathological models, and in determining the potential use of active and passive solar energy systems. Yet solar radiation is among the least measured meteorological variables. A semi-physical model based on standard meteorological data was developed to estimate solar radiation received at the earth's surface. The radiation model includes the effects of Rayleigh scattering, absorption by water vapor and permanent gases, and absorption and scattering by aerosols and clouds. Cloud attenuation is accounted for by assigning transmission coefficients based on cloud height and amount. The cloud transmission coefficients for various heights and coverages were derived empirically from hourly observations of solar radiation in conjunction with corresponding cloud observations at West Lafayette, Indiana. The model was tested with independent data from West Lafayette and Indianapolis, IN; Madison, WI; Omaha, NE; Columbia, MO; Nashville, TN; Seattle, WA; Los Angeles, CA; Phoenix, AZ; Lake Charles, LA; Miami, FL; and Sterling, VA. For each of these locations a 16% random sample of days was drawn within each of the 12 months in a year for testing the model. Excellent agreement between predicted and observed radiation values was obtained for all stations tested. Mean absolute errors ranged from 1.05 to 1.80 MJ m⁻² day⁻¹ and root-mean-square errors ranged from 1.31 to 2.32 MJ m⁻² day⁻¹. The model's performance judged by relative error was found to be independent of season and cloud amount for all locations tested.

  5. Photon diffusion coefficient in scattering and absorbing media.

    PubMed

    Pierrat, Romain; Greffet, Jean-Jacques; Carminati, Rémi

    2006-05-01

    We present a unified derivation of the photon diffusion coefficient for both steady-state and time-dependent transport in disordered absorbing media. The derivation is based on a modal analysis of the time-dependent radiative transfer equation. This approach confirms that the dynamic diffusion coefficient is given by the random-walk result D = cl*/3, where l* is the transport mean free path and c is the energy velocity, independent of the level of absorption. It also shows that the diffusion coefficient for steady-state transport, often used in biomedical optics, depends on absorption, in agreement with recent theoretical and experimental works. These two results resolve a recurrent controversy in light propagation and imaging in scattering media.

  6. Time-varying SMART design and data analysis methods for evaluating adaptive intervention effects.

    PubMed

    Dai, Tianjiao; Shete, Sanjay

    2016-08-30

    In a standard two-stage SMART design, the intermediate response to the first-stage intervention is measured at a fixed time point for all participants. Subsequently, responders and non-responders are re-randomized and the final outcome of interest is measured at the end of the study. To reduce the side effects and costs associated with first-stage interventions in a SMART design, we proposed a novel time-varying SMART design in which individuals are re-randomized to the second-stage interventions as soon as a pre-fixed intermediate response is observed. With this strategy, the duration of the first-stage intervention will vary. We developed a time-varying mixed effects model and a joint model that allows for modeling the outcomes of interest (intermediate and final) and the random durations of the first-stage interventions simultaneously. The joint model borrows strength from the survival sub-model in which the duration of the first-stage intervention (i.e., time to response to the first-stage intervention) is modeled. We performed a simulation study to evaluate the statistical properties of these models. Our simulation results showed that the two modeling approaches were both able to provide good estimations of the means of the final outcomes of all the embedded interventions in a SMART. However, the joint modeling approach was more accurate for estimating the coefficients of first-stage interventions and time of the intervention. We conclude that the joint modeling approach provides more accurate parameter estimates and a higher estimated coverage probability than the single time-varying mixed effects model, and we recommend the joint model for analyzing data generated from time-varying SMART designs. In addition, we showed that the proposed time-varying SMART design is cost-efficient and equally effective in selecting the optimal embedded adaptive intervention as the standard SMART design.

  7. Sequentially switching cell assemblies in random inhibitory networks of spiking neurons in the striatum.

    PubMed

    Ponzi, Adam; Wickens, Jeff

    2010-04-28

    The striatum is composed of GABAergic medium spiny neurons with inhibitory collaterals forming a sparse random asymmetric network and receiving an excitatory glutamatergic cortical projection. Because the inhibitory collaterals are sparse and weak, their role in striatal network dynamics is puzzling. However, here we show by simulation of a striatal inhibitory network model composed of spiking neurons that cells form assemblies that fire in sequential coherent episodes and display complex identity-temporal spiking patterns even when cortical excitation is simply constant or fluctuating noisily. Strongly correlated large-scale firing rate fluctuations on slow behaviorally relevant timescales of hundreds of milliseconds are shown by members of the same assembly whereas members of different assemblies show strong negative correlation, and we show how randomly connected spiking networks can generate this activity. Cells display highly irregular spiking with high coefficients of variation, broadly distributed low firing rates, and interspike interval distributions that are consistent with exponentially tailed power laws. Although firing rates vary coherently on slow timescales, precise spiking synchronization is absent in general. Our model only requires the minimal but striatally realistic assumptions of sparse to intermediate random connectivity, weak inhibitory synapses, and sufficient cortical excitation so that some cells are depolarized above the firing threshold during up states. Our results are in good qualitative agreement with experimental studies, consistent with recently determined striatal anatomy and physiology, and support a new view of endogenously generated metastable state switching dynamics of the striatal network underlying its information processing operations.
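
    As a hedged illustration of one statistic used above, the sketch below computes the interspike-interval coefficient of variation (CV) for a toy spike train whose rate switches slowly between two states; the rates and switching structure are invented for illustration and are not the model's parameters.

    ```python
    # Hedged sketch: ISI coefficient of variation. CV = 1 for a homogeneous
    # Poisson process; slow rate switching ("assembly" episodes) inflates it.
    import numpy as np

    rng = np.random.default_rng(4)
    # toy firing: rate alternates between low and high states, 50 ISIs per state
    rates = np.repeat(rng.choice([0.5, 8.0], size=40), 50)   # Hz
    isis = rng.exponential(1.0 / rates)                      # interspike intervals

    cv = isis.std() / isis.mean()
    print(f"ISI CV = {cv:.2f}  (CV = 1 for a homogeneous Poisson process)")
    ```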

  8. The value of comparative research in major day surgery.

    PubMed

    Llop-Gironés, Alba; Vergara-Duarte, Montse; Sánchez, Josep Anton; Tarafa, Gemma; Benach, Joan

    2017-05-19

    To measure time trends in major day surgery rates according to hospital ownership and other hospital characteristics among the providers of the public healthcare network of Catalonia, Spain. Data came from the Statistics of Health Establishments providing Inpatient Care and were analyzed with a generalized linear mixed model with Gaussian response and random intercept and random slopes. The greatest growth in the rate of major day surgery was observed among private for-profit hospitals: 42.9 (SD: 22.5) in 2009 versus 2.7 (SD: 6.7) in 1996. These hospitals exhibited a significant increase in major day surgery compared to public hospitals (coefficient 2; p-value <0.01). CONCLUSIONS: The comparative evaluation of hospital performance is a decisive tool to ensure that public resources are used as rationally and efficiently as possible. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  9. Statistical signatures of a targeted search by bacteria

    NASA Astrophysics Data System (ADS)

    Jashnsaz, Hossein; Anderson, Gregory G.; Pressé, Steve

    2017-12-01

    Chemoattractant gradients are rarely well-controlled in nature and recent attention has turned to bacterial chemotaxis toward typical bacterial food sources such as food patches or even bacterial prey. In environments with localized food sources reminiscent of a bacterium’s natural habitat, striking phenomena—such as the volcano effect or banding—have been predicted or expected to emerge from chemotactic models. However, in practice, from limited bacterial trajectory data it is difficult to distinguish targeted searches from an untargeted search strategy for food sources. Here we use a theoretical model to identify statistical signatures of a targeted search toward point food sources, such as prey. Our model is constructed on the basis that bacteria use temporal comparisons to bias their random walk, exhibit finite memory and are subject to random (Brownian) motion as well as signaling noise. The advantage with using a stochastic model-based approach is that a stochastic model may be parametrized from individual stochastic bacterial trajectories but may then be used to generate a very large number of simulated trajectories to explore average behaviors obtained from stochastic search strategies. For example, our model predicts that a bacterium’s diffusion coefficient increases as it approaches the point source and that, in the presence of multiple sources, bacteria may take substantially longer to locate their first source giving the impression of an untargeted search strategy.
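
    A minimal sketch in the spirit of the model described, with all parameters and functional forms invented for illustration: a run-and-tumble walker biases its tumble rate by temporal comparison of a noisy attractant signal and is additionally subject to Brownian motion. This is not the authors' parametrization.

    ```python
    # Hedged sketch: biased run-and-tumble search toward a point source, with
    # temporal comparison, finite (one-step) memory, and signaling noise.
    import numpy as np

    rng = np.random.default_rng(5)
    dt, n_steps, v = 0.1, 5000, 1.0
    source = np.array([50.0, 0.0])
    pos = np.zeros(2)
    heading = rng.uniform(0, 2 * np.pi)
    c_prev = -np.linalg.norm(pos - source)    # toy "concentration" ~ -distance

    for _ in range(n_steps):
        c = -np.linalg.norm(pos - source) + rng.normal(0, 0.05)  # signaling noise
        # temporal comparison: tumble less often when the signal is improving
        p_tumble = dt * (0.5 if c > c_prev else 2.0)
        if rng.random() < p_tumble:
            heading = rng.uniform(0, 2 * np.pi)
        pos = (pos + v * dt * np.array([np.cos(heading), np.sin(heading)])
               + rng.normal(0, np.sqrt(2 * 0.01 * dt), 2))       # Brownian motion
        c_prev = c

    print("final distance to source:", np.linalg.norm(pos - source))
    ```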

  10. The number of measurements needed to obtain high reliability for traits related to enzymatic activities and photosynthetic compounds in soybean plants infected with Phakopsora pachyrhizi.

    PubMed

    Oliveira, Tássia Boeno de; Azevedo Peixoto, Leonardo de; Teodoro, Paulo Eduardo; Alvarenga, Amauri Alves de; Bhering, Leonardo Lopes; Campo, Clara Beatriz Hoffmann

    2018-01-01

    Asian rust affects the physiology of soybean plants and causes losses in yield. Repeatability coefficients may help breeders to know how many measurements are needed to obtain a suitable reliability for a target trait. Therefore, the objectives of this study were to determine the repeatability coefficients of 14 traits in soybean plants inoculated with Phakopsora pachyrhizi and to establish the minimum number of measurements needed to predict the breeding value with high accuracy. Experiments were performed in a 3×2 factorial arrangement with three treatments and two inoculations in a random block design. Repeatability coefficients, coefficients of determination and number of measurements needed to obtain a certain reliability were estimated using ANOVA, principal component analysis based on the covariance matrix and the correlation matrix, structural analysis and mixed model. It was observed that the principal component analysis based on the covariance matrix outperformed other methods for almost all traits. For the treatment effects, significant differences were observed for all traits except internal CO2 concentration. For the measurement effects, all traits were significantly different. In addition, significant differences were found for all Treatment × Measurement interaction traits except coumestrol, chitinase and chlorophyll content. Six measurements were suitable to obtain a coefficient of determination higher than 0.7 for all traits based on principal component analysis. The information obtained from this research will help breeders and physiologists determine exactly how many measurements are needed to evaluate each trait in soybean plants infected by P. pachyrhizi with a desirable reliability.

  11. The number of measurements needed to obtain high reliability for traits related to enzymatic activities and photosynthetic compounds in soybean plants infected with Phakopsora pachyrhizi

    PubMed Central

    de Oliveira, Tássia Boeno; Teodoro, Paulo Eduardo; de Alvarenga, Amauri Alves; Bhering, Leonardo Lopes; Campo, Clara Beatriz Hoffmann

    2018-01-01

    Asian rust affects the physiology of soybean plants and causes losses in yield. Repeatability coefficients may help breeders to know how many measurements are needed to obtain a suitable reliability for a target trait. Therefore, the objectives of this study were to determine the repeatability coefficients of 14 traits in soybean plants inoculated with Phakopsora pachyrhizi and to establish the minimum number of measurements needed to predict the breeding value with high accuracy. Experiments were performed in a 3×2 factorial arrangement with three treatments and two inoculations in a random block design. Repeatability coefficients, coefficients of determination and number of measurements needed to obtain a certain reliability were estimated using ANOVA, principal component analysis based on the covariance matrix and the correlation matrix, structural analysis and mixed model. It was observed that the principal component analysis based on the covariance matrix outperformed other methods for almost all traits. For the treatment effects, significant differences were observed for all traits except internal CO2 concentration. For the measurement effects, all traits were significantly different. In addition, significant differences were found for all Treatment × Measurement interaction traits except coumestrol, chitinase and chlorophyll content. Six measurements were suitable to obtain a coefficient of determination higher than 0.7 for all traits based on principal component analysis. The information obtained from this research will help breeders and physiologists determine exactly how many measurements are needed to evaluate each trait in soybean plants infected by P. pachyrhizi with a desirable reliability. PMID:29438380

  12. Interval estimation and optimal design for the within-subject coefficient of variation for continuous and binary variables

    PubMed Central

    Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D

    2006-01-01

    Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables and based on its maximum likelihood estimation we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable, we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of the confidence interval for the within-subject coefficient of variation. The maximum likelihood estimation and sample size estimation based on a pre-specified width of confidence interval are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
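
    For the continuous case, the within-subject coefficient of variation under the one-way random effects model y_ij = μ + b_i + e_ij is σ_e/μ. Below is a minimal sketch of the simple moment-based estimate sqrt(MSW)/ȳ on synthetic data (the paper itself works with maximum likelihood; the subject and replicate counts here are illustrative).

    ```python
    # Hedged sketch: within-subject CV under y_ij = mu + b_i + e_ij, estimated
    # by sqrt(mean square within) divided by the grand mean.
    import numpy as np

    rng = np.random.default_rng(6)
    k, n = 30, 3                                  # 30 subjects, 3 replicates each
    mu, sigma_b, sigma_e = 100.0, 8.0, 5.0
    y = mu + rng.normal(0, sigma_b, (k, 1)) + rng.normal(0, sigma_e, (k, n))

    msw = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (k * (n - 1))
    wscv = np.sqrt(msw) / y.mean()
    print(f"estimated WSCV = {wscv:.3f} (true value {sigma_e / mu:.3f})")
    ```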

  13. A panel multinomial logit analysis of elderly living arrangements: evidence from Aging In Manitoba longitudinal data, Canada.

    PubMed

    Sarma, Sisira; Simpson, Wayne

    2007-12-01

    Utilizing a unique longitudinal survey linked with home care use data, this paper analyzes the determinants of elderly living arrangements in Manitoba, Canada using a random effects multinomial logit model that accounts for unobserved individual heterogeneity. Because current home ownership is potentially endogenous in a living arrangements choice model, we use prior home ownership as an instrument. We also use prior home care use as an instrument for home care and use a random coefficient framework to account for unobserved health status. After controlling for relevant socio-demographic factors and accounting for unobserved individual heterogeneity, we find that home care and home ownership reduce the probability of living in a nursing home. Consistent with previous studies, we find that age is a strong predictor of nursing home entry. We also find that married people, those who have lived longer in the same community, and those who are healthy are more likely to live independently and less likely to be institutionalized or to cohabit with individuals other than their spouse.

  14. Do high-commitment work systems affect creativity? A multilevel combinational approach to employee creativity.

    PubMed

    Chang, Song; Jia, Liangding; Takeuchi, Riki; Cai, Yahua

    2014-07-01

    [Correction notice: In the original article, some information about the data used and a citation were not included; the details of the corrections are provided.] This study uses 3-level, 2-wave time-lagged data from a random sample of 55 high-technology firms, 238 teams, and 1,059 individuals in China to investigate a multilevel combinational model of employee creativity. First, we hypothesize that firm (macrolevel) high-commitment work systems are conducive to individual (microlevel) creativity. Furthermore, we hypothesize that this positive cross-level main impact may be combined with middle-level (mesolevel) factors, including team cohesion and team task complexity, such that the positive impact of firm high-commitment work systems on individual creativity is stronger when team cohesion is high and the team task more complex. The findings from random coefficient modeling analyses provide support for our hypotheses. These sets of results offer novel insight into how firms can use macrolevel and mesolevel contextual variables in a systematic manner to promote employee creativity in the workplace, despite its complex nature.

  15. Node Survival in Networks under Correlated Attacks

    PubMed Central

    Hao, Yan; Armbruster, Dieter; Hütt, Marc-Thorsten

    2015-01-01

    We study the interplay between correlations, dynamics, and networks for repeated attacks on a socio-economic network. As a model system we consider an insurance scheme against disasters that randomly hit nodes, where a node in need receives support from its network neighbors. The model is motivated by gift giving among the Maasai called Osotua. Survival of nodes under different disaster scenarios (uncorrelated, spatially, temporally and spatio-temporally correlated) and for different network architectures is studied with agent-based numerical simulations. We find that the survival rate of a node depends dramatically on the type of correlation of the disasters: Spatially and spatio-temporally correlated disasters increase the survival rate; purely temporally correlated disasters decrease it. The type of correlation also leads to strong inequality among the surviving nodes. We introduce the concept of disaster masking to explain some of the results of our simulations. We also analyze the subsets of the networks that were activated to provide support after fifty years of random disasters. They show qualitative differences for the different disaster scenarios measured by path length, degree, clustering coefficient, and number of cycles. PMID:25932635

  16. Fossils out of sequence: Computer simulations and strategies for dealing with stratigraphic disorder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cutler, A.H.; Flessa, K.W.

    Microstratigraphic resolution is limited by vertical mixing and reworking of fossils. Stratigraphic disorder is the degree to which fossils within a stratigraphic sequence are not in proper chronological order. Stratigraphic disorder arises through in situ vertical mixing of fossils and reworking of older fossils into younger deposits. The authors simulated the effects of mixing and reworking by simple computer models, and measured stratigraphic disorder using rank correlation between age and stratigraphic position (Spearman and Kendall coefficients). Mixing was simulated by randomly transposing pairs of adjacent fossils in a sequence. Reworking was simulated by randomly inserting older fossils into a younger sequence. Mixing is an inefficient means of producing disorder; after 500 mixing steps stratigraphic order is still significant at the 99% to 95% level, depending on the coefficient used. Reworking disorders sequences very efficiently: significant order begins to be lost when reworked shells make up 35% of the sequence. Thus a sequence can be dominated by undisturbed, autochthonous shells and still be disordered. The effects of mixing-produced disorder can be minimized by increasing sample size at each horizon. Increased spacing between samples is of limited utility in dealing with disordered sequences: while widely separated samples are more likely to be stratigraphically ordered, the smaller number of samples makes the detection of trends problematic.
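
    The mixing simulation lends itself to a few lines of code. The sketch below transposes random adjacent pairs and tracks the Spearman rank correlation between age and stratigraphic position; the sequence length and checkpoint steps are illustrative, not the authors' settings.

    ```python
    # Hedged sketch: in situ mixing as random adjacent transpositions, with
    # stratigraphic order measured by Spearman rank correlation.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(7)
    ages = np.arange(100, dtype=float)       # perfectly ordered sequence of ages

    for step in range(1, 501):
        i = rng.integers(0, len(ages) - 1)   # pick a random adjacent pair
        ages[i], ages[i + 1] = ages[i + 1], ages[i]
        if step in (100, 250, 500):
            rho, p = spearmanr(np.arange(len(ages)), ages)
            print(f"after {step} mixing steps: Spearman rho = {rho:.3f}, p = {p:.2g}")
    ```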

  17. EVOLUTION OF THE MAGNETIC FIELD LINE DIFFUSION COEFFICIENT AND NON-GAUSSIAN STATISTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snodin, A. P.; Ruffolo, D.; Matthaeus, W. H.

    The magnetic field line random walk (FLRW) plays an important role in the transport of energy and particles in turbulent plasmas. For magnetic fluctuations that are transverse or almost transverse to a large-scale mean magnetic field, theories describing the FLRW usually predict asymptotic diffusion of magnetic field lines perpendicular to the mean field. Such theories often depend on the assumption that one can relate the Lagrangian and Eulerian statistics of the magnetic field via Corrsin’s hypothesis, and additionally take the distribution of magnetic field line displacements to be Gaussian. Here we take an ordinary differential equation (ODE) model with these underlying assumptions and test how well it describes the evolution of the magnetic field line diffusion coefficient in 2D+slab magnetic turbulence, by comparisons to computer simulations that do not involve such assumptions. In addition, we directly test the accuracy of the Corrsin approximation to the Lagrangian correlation. Over much of the studied parameter space we find that the ODE model is in fairly good agreement with computer simulations, in terms of both the evolution and asymptotic values of the diffusion coefficient. When there is poor agreement, we show that this can be largely attributed to the failure of Corrsin’s hypothesis rather than the assumption of Gaussian statistics of field line displacements. The degree of non-Gaussianity, which we measure in terms of the kurtosis, appears to be an indicator of how well Corrsin’s approximation works.

  18. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol lowering drugs

    PubMed Central

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin

    2013-01-01

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data (IPD) in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the Deviance Information Criterion (DIC) is used to select the best transformation model. Since the model is quite complex, a novel Markov chain Monte Carlo (MCMC) sampling scheme is developed to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol lowering drugs where the goal is to jointly model the three-dimensional response consisting of Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG). Since the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately; however, a multivariate approach would be more appropriate since these variables are correlated with each other. A detailed analysis of these data is carried out using the proposed methodology. PMID:23580436

  19. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol-lowering drugs.

    PubMed

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G; Shah, Arvind K; Lin, Jianxin

    2013-10-15

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the deviance information criterion is used to select the best transformation model. Because the model is quite complex, we develop a novel Markov chain Monte Carlo sampling scheme to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol-lowering drugs where the goal is to jointly model the three-dimensional response consisting of low density lipoprotein cholesterol (LDL-C), high density lipoprotein cholesterol (HDL-C), and triglycerides (TG). Because the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately; however, a multivariate approach would be more appropriate because these variables are correlated with each other. We carry out a detailed analysis of these data by using the proposed methodology. Copyright © 2013 John Wiley & Sons, Ltd.

  20. Measuring monotony in two-dimensional samples

    NASA Astrophysics Data System (ADS)

    Kachapova, Farida; Kachapov, Ilias

    2010-04-01

    This note introduces a monotony coefficient as a new measure of the monotone dependence in a two-dimensional sample. Some properties of this measure are derived. In particular, it is shown that the absolute value of the monotony coefficient for a two-dimensional sample is between |r| and 1, where r is Pearson's correlation coefficient for the sample; and that the monotony coefficient equals 1 for any monotone increasing sample and equals -1 for any monotone decreasing sample. This article contains a few examples demonstrating that the monotony coefficient is a more accurate measure of the degree of monotone dependence for a non-linear relationship than the Pearson, Spearman, and Kendall correlation coefficients. The monotony coefficient is a tool that can be applied to samples in order to find dependencies between random variables; it is especially useful in finding pairs of dependent variables in a large dataset with many variables. Undergraduate students in mathematics and science would benefit from learning and applying this measure of monotone dependence.
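
    The monotony coefficient's formula is given in the article and is not reproduced here. The sketch below only illustrates the limitation it addresses: Pearson's r understates a perfect nonlinear monotone trend, while rank-based measures saturate at 1.

    ```python
    # Hedged sketch: classical coefficients on a strictly increasing but highly
    # nonlinear sample (the monotony coefficient itself is not computed here).
    import numpy as np
    from scipy.stats import kendalltau, pearsonr, spearmanr

    rng = np.random.default_rng(8)
    x = rng.uniform(0, 3, 200)
    y = np.exp(2 * x)                        # strictly monotone, nonlinear

    print("Pearson r :", round(pearsonr(x, y)[0], 3))    # well below 1
    print("Spearman  :", round(spearmanr(x, y)[0], 3))   # exactly 1 (rank-based)
    print("Kendall   :", round(kendalltau(x, y)[0], 3))  # exactly 1 (rank-based)
    # Per the article's stated properties, the monotony coefficient is also 1
    # here and lies between |r| and 1 in general.
    ```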

  1. Multi-element least square HDMR methods and their applications for stochastic multiscale model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com

    Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging because complex uncertainty and multiple physical scales coexist in the models. To handle this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy in certain conditions. To effectively treat heterogeneity properties and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • Random domain is adaptively decomposed into a few subdomains to obtain adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computation efficiency and approximation accuracy in certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computation complexity.

  2. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta‐analysis and group level studies

    PubMed Central

    Bakbergenuly, Ilyas; Morgenthaler, Stephan

    2016-01-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and when combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, the intracluster correlation coefficient, for small values of ρ. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose a bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. PMID:27192062
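
    A hedged sketch of the kind of simulation described: beta-binomial (overdispersed) counts with intracluster correlation ρ, showing that the arcsine-transformed estimate is biased. All parameter values are illustrative.

    ```python
    # Hedged sketch: bias of arcsin(sqrt(phat)) under beta-binomial
    # overdispersion; for Beta(a, b) mixing, rho = 1/(a + b + 1).
    import numpy as np

    rng = np.random.default_rng(9)
    p, rho, n, reps = 0.3, 0.1, 50, 200_000
    a, b = p * (1 - rho) / rho, (1 - p) * (1 - rho) / rho  # matching Beta(a, b)

    p_i = rng.beta(a, b, reps)                 # cluster-level probabilities
    x = rng.binomial(n, p_i)                   # overdispersed counts
    phat = x / n

    bias = np.mean(np.arcsin(np.sqrt(phat))) - np.arcsin(np.sqrt(p))
    print(f"arcsine-scale bias = {bias:.4f} (nonzero, roughly linear in rho)")
    ```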

  3. Evaluating the Effectiveness of an Antimicrobial Stewardship Program on Reducing the Incidence Rate of Healthcare-Associated Clostridium difficile Infection: A Non-Randomized, Stepped Wedge, Single-Site, Observational Study.

    PubMed

    DiDiodato, Giulio; McArthur, Leslie

    2016-01-01

    The incidence rate of healthcare-associated Clostridium difficile infection (HA-CDI) is estimated at 1 in 100 patients. Antibiotic exposure is the most consistently reported risk factor for HA-CDI. Strategies to reduce the risk of HA-CDI have focused on reducing antibiotic utilization. Prospective audit and feedback is a commonly used antimicrobial stewardship intervention (ASi). The impact of this ASi on risk of HA-CDI is equivocal. This study examines the effectiveness of a prospective audit and feedback ASi on reducing the risk of HA-CDI. Single-site, 339-bed community hospital in Barrie, Ontario, Canada. The primary outcome is the HA-CDI incidence rate. A daily prospective audit and feedback ASi is the exposure variable. The ASi was implemented across 6 wards in a non-randomized, stepped wedge design. Criteria for the ASi: any intravenous antibiotic use for ≥ 48 hrs, any oral fluoroquinolone or oral second generation cephalosporin use for ≥ 48 hrs, or any antimicrobial use for ≥ 5 days. HA-CDI cases and model covariates were aggregated by ward, year and month starting September 2008 and ending February 2016. Multi-level mixed effects negative binomial regression analysis was used to model the primary outcome, with intercept and slope coefficients for ward-level random effects estimated. Other covariates tested for inclusion in the final model were derived from previously published risk factors. Deviance residuals were used to assess the model's goodness-of-fit. The dataset included 486 observation periods, of which 350 were control periods and 136 were intervention periods. After accounting for all other model covariates, the estimated overall ASi incidence rate ratio (IRR) was 0.48 (95% CI 0.30, 0.79). The ASi effect was independent of antimicrobial utilization. The ASi did not seem to reduce the risk of Clostridium difficile infection on the surgery wards (IRR 0.87, 95% CI 0.45, 1.69) compared to the medicine wards (IRR 0.42, 95% CI 0.28, 0.63). The ward-level burden of Clostridium difficile, as measured by the ward's previous month's total CDI cases (CDI Lag) and the ward's current month's community-associated CDI cases (CA-CDI), was significantly associated with an increased risk of HA-CDI, with an estimated CDI Lag IRR of 1.21 (95% CI 1.15, 1.28) and an estimated CA-CDI IRR of 1.10 (95% CI 1.01, 1.20). The ward-level random intercept and slope coefficients were not significant. The final model demonstrated good fit. In this study, a daily prospective audit and feedback ASi resulted in a significant reduction in the risk of HA-CDI on the medicine wards; however, this effect was independent of an overall reduction in antibiotic utilization. In addition, the ward-level burden of Clostridium difficile was shown to significantly increase the risk of HA-CDI, reinforcing the importance of the environment as a source of HA-CDI.

  4. Tortuosity Computations of Porous Materials using the Direct Simulation Monte Carlo

    NASA Technical Reports Server (NTRS)

    Borner, A.; Ferguson, C.; Panerai, F.; Mansour, Nagi N.

    2017-01-01

    Low-density carbon fiber preforms, used as thermal protection systems (TPS) materials for planetary entry systems, have permeable, highly porous microstructures consisting of interlaced fibers. Internal gas transport in TPS is important in modeling the penetration of hot boundary-layer gases and the in-depth transport of pyrolysis and ablation products. The gas effective diffusion coefficient of a porous material must be known before the gas transport can be modeled in material response solvers; however, there are very few available data for the rigid fibrous insulators used in heritage TPS. The tortuosity factor, which reflects the efficiency of the percolation paths, can be computed from the effective diffusion coefficient of a gas inside a porous material and is based on the micro-structure of the material. It is well known that the tortuosity factor is a strong function of the Knudsen number. Due to the small characteristic scales of porous media used in TPS applications (typical pore size of the order of 50 micron), the transport of gases can occur in the rarefied and transitional regimes, at Knudsen numbers above 1. A proper way to model the gas dynamics at these conditions consists of solving the Boltzmann equation using particle-based methods that account for the movement and collisions of atoms and molecules. In this work we adopt, for the first time, the Direct Simulation Monte Carlo (DSMC) method to compute the tortuosity factor of fibrous media in the rarefied regime. To enable realistic simulations of the actual transport of gases in the porous medium, digitized computational grids are obtained from X-ray micro-tomography imaging of real TPS materials. The SPARTA DSMC solver is used for simulations. Effective diffusion coefficients and tortuosity factors are obtained by computing the mean-square displacement of diffusing particles. We first apply the method to compute the tortuosity factors as a function of the Knudsen number for computationally designed materials such as random cylindrical fibers and packed beds of spheres with prescribed porosity. Results are compared to literature values obtained using random walk methods in the rarefied and transitional regimes and a finite-volume method for the continuum regime. We then compute tortuosity factors for a real carbon fiber material with a transverse isotropic structure (FiberForm), quantifying differences between through-thickness and in-plane tortuosities at various Knudsen regimes.

  5. Accounting for autocorrelation in multi-drug resistant tuberculosis predictors using a set of parsimonious orthogonal eigenvectors aggregated in geographic space.

    PubMed

    Jacob, Benjamin J; Krapp, Fiorella; Ponce, Mario; Gottuzzo, Eduardo; Griffith, Daniel A; Novak, Robert J

    2010-05-01

    Spatial autocorrelation is problematic for classical hierarchical cluster detection tests commonly used in multi-drug resistant tuberculosis (MDR-TB) analyses, as considerable random error can occur. Therefore, when MDR-TB clusters are spatially autocorrelated the assumption that the clusters are independently random is invalid. In this research, a product moment correlation coefficient (i.e., the Moran's coefficient) was used to quantify local spatial variation in multiple clinical and environmental predictor variables sampled in San Juan de Lurigancho, Lima, Peru. Initially, QuickBird 0.61 m data, encompassing the visible bands and the near infra-red bands, were selected to synthesize images of land cover attributes of the study site. Data of residential addresses of individual patients with smear-positive MDR-TB were geocoded, prevalence rates calculated and then digitally overlaid onto the satellite data within a 2 km buffer of 31 georeferenced health centers, using a 10 m² grid-based algorithm. Geographical information system (GIS)-gridded measurements of each health center were generated based on preliminary base maps of the georeferenced data aggregated to block groups and census tracts within each buffered area. A three-dimensional model of the study site was constructed based on a digital elevation model (DEM) to determine terrain covariates associated with the sampled MDR-TB covariates. Pearson's correlation was used to evaluate the linear relationship between the DEM and the sampled MDR-TB data. A SAS/GIS® module was then used to calculate univariate statistics and to perform linear and non-linear regression analyses using the sampled predictor variables. The estimates generated from a global autocorrelation analysis were then spatially decomposed into empirical orthogonal bases using a negative binomial regression with a non-homogeneous mean. Results of the DEM analyses indicated a statistically non-significant linear relationship between georeferenced health centers and the sampled covariate elevation. The data exhibited positive spatial autocorrelation, and the decomposition of the Moran's coefficient into uncorrelated, orthogonal map pattern components revealed global spatial heterogeneities necessary to capture latent autocorrelation in the MDR-TB model. It was thus shown that Poisson regression analyses and spatial eigenvector mapping can elucidate the mechanics of MDR-TB transmission by prioritizing clinical and environmental sampled predictor variables for identifying high-risk populations.
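
    Moran's coefficient used above can be computed directly from its definition for any spatial weight matrix. A minimal sketch on toy data (not the study's data or weights):

    ```python
    # Hedged sketch: Moran's I from its definition,
    # I = (n / sum(W)) * sum_ij W_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2
    import numpy as np

    def morans_i(x, W):
        """Moran's coefficient of spatial autocorrelation for weight matrix W."""
        x = np.asarray(x, dtype=float)
        z = x - x.mean()
        return len(x) / W.sum() * (z @ W @ z) / (z @ z)

    # toy example: 5 locations on a line with binary adjacency weights
    W = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
    print(morans_i([1.0, 2.0, 3.0, 4.0, 5.0], W))  # positive: smooth trend
    print(morans_i([1.0, 5.0, 2.0, 4.0, 3.0], W))  # negative: alternating pattern
    ```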

  6. Proliferative changes in the bronchial epithelium of former smokers treated with retinoids.

    PubMed

    Hittelman, Walter N; Liu, Diane D; Kurie, Jonathan M; Lotan, Reuben; Lee, Jin Soo; Khuri, Fadlo; Ibarguen, Heladio; Morice, Rodolfo C; Walsh, Garrett; Roth, Jack A; Minna, John; Ro, Jae Y; Broxson, Anita; Hong, Waun Ki; Lee, J Jack

    2007-11-07

    Retinoids have shown antiproliferative and chemopreventive activity. We analyzed data from a randomized, placebo-controlled chemoprevention trial to determine whether a 3-month treatment with either 9-cis-retinoic acid (RA) or 13-cis-RA and alpha-tocopherol reduced Ki-67, a proliferation biomarker, in the bronchial epithelium. Former smokers (n = 225) were randomly assigned to receive 3 months of daily oral 9-cis-RA (100 mg), 13-cis-RA (1 mg/kg) and alpha-tocopherol (1200 IU), or placebo. Bronchoscopic biopsy specimens obtained before and after treatment were immunohistochemically assessed for changes in the Ki-67 proliferative index (i.e., percentage of cells with Ki-67-positive nuclear staining) in the basal and parabasal layers of the bronchial epithelium. Per-subject and per-biopsy site analyses were conducted. Multicovariable analyses, including a mixed-effects model and a generalized estimating equations model, were used to investigate the treatment effect (Ki-67 labeling index and percentage of bronchial epithelial biopsy sites with a Ki-67 index > or = 5%) with adjustment for multiple covariates, such as smoking history and metaplasia. Coefficient estimates and 95% confidence intervals (CIs) were obtained from the models. All statistical tests were two-sided. In per-subject analyses, Ki-67 labeling in the basal layer was not changed by any treatment; the percentage of subjects with a high Ki-67 labeling in the parabasal layer dropped statistically significantly after treatment with 13-cis-RA and alpha-tocopherol (P = .04) compared with placebo, but the drop was not statistically significant after 9-cis-RA treatment (P = .17). A similar effect was observed in the parabasal layer in a per-site analysis; the percentage of sites with high Ki-67 labeling dropped statistically significantly after 9-cis-RA treatment (coefficient estimate = -0.72, 95% CI = -1.24 to -0.20; P = .007) compared with placebo, and after 13-cis-RA and alpha-tocopherol treatment (coefficient estimate = -0.66, 95% CI = -1.15 to -0.17; P = .008). In per-subject analyses, treatment with 13-cis-RA and alpha-tocopherol, compared with placebo, was statistically significantly associated with reduced bronchial epithelial cell proliferation; treatment with 9-cis-RA was not. In per-site analyses, statistically significant associations were obtained with both treatments.

  7. Proliferative Changes in the Bronchial Epithelium of Former Smokers Treated With Retinoids

    PubMed Central

    Hittelman, Walter N.; Liu, Diane D.; Kurie, Jonathan M.; Lotan, Reuben; Lee, Jin Soo; Khuri, Fadlo; Ibarguen, Heladio; Morice, Rodolfo C.; Walsh, Garrett; Roth, Jack A.; Minna, John; Ro, Jae Y.; Broxson, Anita; Hong, Waun Ki; Lee, J. Jack

    2012-01-01

    Background Retinoids have shown antiproliferative and chemopreventive activity. We analyzed data from a randomized, placebo-controlled chemoprevention trial to determine whether a 3-month treatment with either 9-cis-retinoic acid (RA) or 13-cis-RA and α-tocopherol reduced Ki-67, a proliferation biomarker, in the bronchial epithelium. Methods Former smokers (n = 225) were randomly assigned to receive 3 months of daily oral 9-cis-RA (100 mg), 13-cis-RA (1 mg/kg) and α-tocopherol (1200 IU), or placebo. Bronchoscopic biopsy specimens obtained before and after treatment were immunohistochemically assessed for changes in the Ki-67 proliferative index (i.e., percentage of cells with Ki-67–positive nuclear staining) in the basal and parabasal layers of the bronchial epithelium. Per-subject and per–biopsy site analyses were conducted. Multicovariable analyses, including a mixed-effects model and a generalized estimating equations model, were used to investigate the treatment effect (Ki-67 labeling index and percentage of bronchial epithelial biopsy sites with a Ki-67 index ≥ 5%) with adjustment for multiple covariates, such as smoking history and metaplasia. Coefficient estimates and 95% confidence intervals (CIs) were obtained from the models. All statistical tests were two-sided. Results In per-subject analyses, Ki-67 labeling in the basal layer was not changed by any treatment; the percentage of subjects with a high Ki-67 labeling in the parabasal layer dropped statistically significantly after treatment with 13-cis-RA and α-tocopherol (P = .04) compared with placebo, but the drop was not statistically significant after 9-cis-RA treatment (P = .17). A similar effect was observed in the parabasal layer in a per-site analysis; the percentage of sites with high Ki-67 labeling dropped statistically significantly after 9-cis-RA treatment (coefficient estimate = −0.72, 95% CI = −1.24 to −0.20; P = .007) compared with placebo, and after 13-cis-RA and α-tocopherol treatment (coefficient estimate = −0.66, 95% CI = −1.15 to −0.17; P = .008). Conclusions In per-subject analyses, treatment with 13-cis-RA and α-tocopherol, compared with placebo, was statistically significantly associated with reduced bronchial epithelial cell proliferation; treatment with 9-cis-RA was not. In per-site analyses, statistically significant associations were obtained with both treatments. PMID:17971525

  8. Giant mesoscopic fluctuations of the elastic cotunneling thermopower of a single-electron transistor

    NASA Astrophysics Data System (ADS)

    Vasenko, A. S.; Basko, D. M.; Hekking, F. W. J.

    2015-02-01

    We study the thermoelectric transport of a small metallic island weakly coupled to two electrodes by tunnel junctions. In the Coulomb blockade regime, in the case when the ground state of the system corresponds to an even number of electrons on the island, the main mechanism of electron transport at the lowest temperatures is elastic cotunneling. In this regime, the transport coefficients strongly depend on the realization of the random impurity potential or the shape of the island. Using random-matrix theory, we calculate the thermopower and the thermoelectric kinetic coefficient and study the statistics of their mesoscopic fluctuations in the elastic cotunneling regime. The fluctuations of the thermopower turn out to be much larger than the average value.

  9. Time domain simulation of the response of geometrically nonlinear panels subjected to random loading

    NASA Technical Reports Server (NTRS)

    Moyer, E. Thomas, Jr.

    1988-01-01

    The response of composite panels subjected to random pressure loads large enough to cause geometrically nonlinear responses is studied. A time domain simulation is employed to solve the equations of motion. An adaptive time stepping algorithm is employed to minimize intermittent transients. A modified algorithm for the prediction of response spectral density is presented which predicts smooth spectral peaks for discrete time histories. Results are presented for a number of input pressure levels and damping coefficients. Response distributions are calculated and compared with the analytical solution of the Fokker-Planck equations. RMS response is reported as a function of input pressure level and damping coefficient. Spectral densities are calculated for a number of examples.
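
    A standard way to obtain smooth spectral-density estimates from a discrete response time history is Welch's averaged-periodogram method. The sketch below shows that generic approach on a synthetic signal; it is not necessarily the modified algorithm the abstract presents, and the sample rate and mode frequency are illustrative.

    ```python
    # Hedged sketch: smooth PSD estimate of a discrete time history via Welch
    # averaging (segment-averaged periodograms reduce peak jaggedness).
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(10)
    fs, T = 1000.0, 60.0                      # sample rate (Hz), duration (s)
    t = np.arange(0, T, 1 / fs)
    # toy "panel response": one dominant 50 Hz mode plus broadband noise
    x = np.sin(2 * np.pi * 50 * t) + rng.normal(0, 0.5, t.size)

    f, Pxx = welch(x, fs=fs, nperseg=4096)    # averaging smooths the spectrum
    print("estimated peak frequency:", f[np.argmax(Pxx)])
    ```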

  10. Comparing spatially varying coefficient models: a case study examining violent crime rates and their relationships to alcohol outlets and illegal drug arrests

    NASA Astrophysics Data System (ADS)

    Wheeler, David C.; Waller, Lance A.

    2009-03-01

    In this paper, we compare and contrast a Bayesian spatially varying coefficient process (SVCP) model with a geographically weighted regression (GWR) model for the estimation of the potentially spatially varying regression effects of alcohol outlets and illegal drug activity on violent crime in Houston, Texas. In addition, we focus on the inherent coefficient shrinkage properties of the Bayesian SVCP model as a way to address increased coefficient variance that follows from collinearity in GWR models. We outline the advantages of the Bayesian model in terms of reducing inflated coefficient variance, enhanced model flexibility, and more formal measuring of model uncertainty for prediction. We find spatially varying effects for alcohol outlets and drug violations, but the amount of variation depends on the type of model used. For the Bayesian model, this variation is controllable through the amount of prior influence placed on the variance of the coefficients. For example, the spatial pattern of coefficients is similar for the GWR and Bayesian models when a relatively large prior variance is used in the Bayesian model.

  11. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard; Felus, Yaron A.

    2008-06-01

    The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y − E_Y = (X − E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach where only the observation matrix, Y, is perturbed by random errors and, on the other hand, the data least-squares approach where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
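
    A 'closed form' TLS solution of the kind mentioned above follows the standard singular-value decomposition construction: take the SVD of the augmented matrix [X Y] and read Ξ off a partition of the right singular vectors. A minimal sketch on synthetic data (not the paper's algorithm or datasets):

    ```python
    # Hedged sketch: SVD-based multivariate total least squares for
    # (Y - E_Y) = (X - E_X) @ Xi, with random errors in both X and Y.
    import numpy as np

    rng = np.random.default_rng(11)
    m, n, d = 200, 3, 2                       # observations, parameters, responses
    Xi_true = rng.normal(size=(n, d))
    X_clean = rng.normal(size=(m, n))
    X = X_clean + rng.normal(scale=0.05, size=(m, n))             # errors in X
    Y = X_clean @ Xi_true + rng.normal(scale=0.05, size=(m, d))   # errors in Y

    _, _, Vt = np.linalg.svd(np.hstack([X, Y]), full_matrices=False)
    V = Vt.T
    V12, V22 = V[:n, n:], V[n:, n:]           # partition of right singular vectors
    Xi_tls = -V12 @ np.linalg.inv(V22)        # standard TLS closed form
    print("max abs error vs truth:", np.abs(Xi_tls - Xi_true).max())
    ```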

  12. Investigation of advanced UQ for CRUD prediction with VIPRE.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldred, Michael Scott

    2011-09-01

    This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD-induced power shift). Stochastic expansion methods are attractive for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher-dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest, QOIs). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two plant models of differing random dimension, anisotropy, and smoothness.
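
    A minimal sketch of the nonintrusive PCE ingredient is given below for a single standard normal input: the model output is projected onto probabilists' Hermite polynomials by Gauss quadrature, and the mean and variance are read directly from the expansion coefficients. The toy model function and expansion order are illustrative; the dimension-adaptive p-refinement used in the study is not reproduced here.

    ```python
    import numpy as np
    from math import factorial, sqrt, pi
    from numpy.polynomial import hermite_e as He

    def model(xi):
        return np.exp(0.3 * xi)                 # smooth "simulator" stand-in

    order = 8
    pts, wts = He.hermegauss(order + 4)         # Gauss points for weight exp(-x^2/2)
    wts = wts / sqrt(2.0 * pi)                  # renormalize to the standard normal pdf

    # Spectral coefficients c_k = E[f(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
    coeffs = np.array([
        np.sum(wts * model(pts) * He.hermeval(pts, [0.0] * k + [1.0])) / factorial(k)
        for k in range(order + 1)
    ])

    # Mean and variance follow directly from the expansion coefficients.
    mean = coeffs[0]
    var = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
    print("PCE mean:", mean, " exact:", np.exp(0.3 ** 2 / 2))
    print("PCE var :", var, " exact:", np.exp(0.09) * (np.exp(0.09) - 1.0))
    ```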

  13. A framework to quantify uncertainties of seafloor backscatter from swath mapping echosounders

    NASA Astrophysics Data System (ADS)

    Malik, Mashkoor; Lurton, Xavier; Mayer, Larry

    2018-06-01

    Multibeam echosounders (MBES) have become a widely used acoustic remote sensing tool to map and study the seafloor, providing co-located bathymetry and seafloor backscatter. Although the uncertainty associated with MBES-derived bathymetric data has been studied extensively, the question of backscatter uncertainty has been addressed only minimally, which hinders the quantitative use of MBES seafloor backscatter. This paper explores approaches to identifying uncertainty sources associated with MBES-derived backscatter measurements. The major sources of uncertainty are catalogued and the magnitudes of their relative contributions to the backscatter uncertainty budget are evaluated. These major uncertainty sources include seafloor insonified area (1-3 dB), absorption coefficient (up to >6 dB), random fluctuations in echo level (5.5 dB for a Rayleigh distribution), and sonar calibration (device dependent). The magnitudes of these uncertainty sources vary based on how these effects are compensated for during data acquisition and processing. Various cases (no compensation, partial compensation, and full compensation) for seafloor insonified area, transmission losses, and random fluctuations were modeled to estimate their uncertainties in different scenarios. Uncertainty related to the seafloor insonified area can be reduced significantly by accounting for seafloor slope during backscatter processing, while transmission losses can be constrained by collecting full water-column absorption coefficient profiles (temperature and salinity profiles). To reduce random fluctuations to below 1 dB, computing mean values from at least 20 samples is recommended. The estimation of uncertainty in backscatter measurements is constrained by the fact that not all instrumental components are characterized and documented sufficiently for commercially available MBES. Further involvement from manufacturers in providing this essential information is critically needed.
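
    The "at least 20 samples" guideline can be checked with a quick Monte Carlo experiment: under a Rayleigh amplitude model the echo intensity is exponentially distributed, and averaging N samples shrinks the dB-domain scatter from roughly 5.6 dB toward 1 dB around N = 20. The sketch below assumes unit mean intensity and involves no sonar-specific parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    trials = 200_000
    for n_samples in (1, 5, 20, 50):
        # Mean of n exponentially distributed intensities (Rayleigh amplitude)
        intensity = rng.exponential(1.0, size=(trials, n_samples)).mean(axis=1)
        db = 10.0 * np.log10(intensity)
        print(f"N = {n_samples:2d}: std = {db.std():.2f} dB")
    ```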

  14. Effects of 3 dimensional crystal geometry and orientation on 1D and 2D time-scale determinations of magmatic processes using olivine and orthopyroxene

    NASA Astrophysics Data System (ADS)

    Shea, Thomas; Krimer, Daniel; Costa, Fidel; Hammer, Julia

    2014-05-01

    One of the achievements of recent years in volcanology is the determination of time-scales of magmatic processes via diffusion modeling in minerals, now part of the petrologists' and volcanologists' toolbox. The method typically requires one-dimensional modeling of randomly cut crystals from two-dimensional thin sections. Here we address the question of whether using 1D (traverse) or 2D (surface) datasets extracted from randomly cut 3D crystals introduces bias or dispersion in the estimated time-scales, and how this error can be reduced or eliminated. Computational simulations were performed using a concentration-dependent, finite-difference solution to the diffusion equation in 3D. The starting numerical models involved simple geometries (spheres, parallelepipeds), Mg/Fe zoning patterns (either normal or reverse), and isotropic diffusion coefficients. Subsequent models progressively incorporated more complexity: 3D olivines possessing representative polyhedral morphologies, diffusion anisotropy along the different crystallographic axes, and more intricate core-rim zoning patterns. Sections and profiles used to compare 1D, 2D, and 3D diffusion models were selected to be (1) parallel to the crystal axes, (2) randomly oriented but passing through the olivine center, or (3) randomly oriented and sectioned. Results show that time-scales estimated from randomly cut traverses (1D) or surfaces (2D) can be widely distributed around the actual 3D diffusion durations (~0.2 to 10 times the true diffusion time). The magnitude of over- or underestimation of duration is a complex combination of the geometry of the crystal, the zoning pattern, the orientation of the cuts with respect to the crystallographic axes, and the degree of diffusion anisotropy. Errors on time-scales retrieved from such models may thus be significant. Drastic reductions in the uncertainty of calculated diffusion times can be obtained by following some simple guidelines during data collection (i.e. selection of crystals and concentration profiles, acquisition of crystallographic orientation data), thus allowing derivation of robust time-scales.
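
    The building block of such simulations is a finite-difference solution of the diffusion equation with a concentration-dependent coefficient. A 1D sketch is given below; the grid spacing, D(C) law, and zoning profile are illustrative values, not olivine-specific parameters.

    ```python
    import numpy as np

    nx, dx = 201, 1.0e-6                                   # grid points, spacing (m)
    C = np.zeros(nx)
    C[nx // 2 - 20 : nx // 2 + 20] = 1.0                   # zoned core (normalized concentration)

    def D(c):
        return 1.0e-17 * (1.0 + 4.0 * c)                   # diffusivity grows with concentration (m^2/s)

    dt = 0.2 * dx * dx / D(1.0)                            # well inside the explicit stability limit
    for _ in range(5000):
        Dface = 0.5 * (D(C[1:]) + D(C[:-1]))               # face-centred diffusivities
        flux = -Dface * np.diff(C) / dx
        C[1:-1] -= dt * np.diff(flux) / dx                 # conservative update, fixed-value ends
    print("total mass (approx. conserved):", C.sum() * dx)
    ```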

  15. Molecular radiotherapy: the NUKFIT software for calculating the time-integrated activity coefficient.

    PubMed

    Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G

    2013-10-01

    Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the data used. To estimate the values of the adjustable parameters, an objective function is minimized that depends on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information. To increase reproducibility and user-friendliness, the starting values are determined automatically using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit parameters and their standard errors estimated using SAAM numerical and NUKFIT showed differences of <1%. The differences for the time-integrated activity coefficients were also <1% (standard error between 0.4% and 3%). In general, the application of the software is user-friendly and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool with its underlying methodology can be employed to objectively and reproducibly estimate the time-integrated activity coefficient and its standard error for most time-activity data in molecular radiotherapy.
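
    The core fitting-and-integration step can be sketched as follows: fit a sum of exponentials to time-activity data and integrate the fitted function analytically from zero to infinity. Here scipy's curve_fit stands in for NUKFIT's objective function, starting-value search, and model selection, and the time-activity data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([1.0, 4.0, 24.0, 48.0, 96.0])         # h after administration (synthetic)
    a = np.array([0.35, 0.30, 0.18, 0.11, 0.04])       # fraction of administered activity

    def biexp(t, a1, l1, a2, l2):
        return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)

    p, _ = curve_fit(biexp, t, a, p0=[0.2, 0.5, 0.2, 0.02])
    a1, l1, a2, l2 = p

    # Analytic integral of the fitted sum of exponentials from 0 to infinity:
    tiac = a1 / l1 + a2 / l2                           # time-integrated activity coefficient (h)
    print("TIAC =", tiac, "h")
    ```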

  16. Modelling Nonlinear Dynamic Textures using Hybrid DWT-DCT and Kernel PCA with GPU

    NASA Astrophysics Data System (ADS)

    Ghadekar, Premanand Pralhad; Chopade, Nilkanth Bhikaji

    2016-12-01

    Most real-world dynamic textures are nonlinear, non-stationary, and irregular. Nonlinear motion contains some repetition, but exhibits high variation, stochasticity, and randomness. Hybrid DWT-DCT and Kernel Principal Component Analysis (KPCA) with YCbCr/YIQ colour coding using the Dynamic Texture Unit (DTU) approach is proposed to model a nonlinear dynamic texture; it provides better results than state-of-the-art methods in terms of PSNR, compression ratio, model coefficients, and model size. The dynamic texture is decomposed into DTUs, as they help to extract temporal self-similarity. Hybrid DWT-DCT is used to extract spatial redundancy. YCbCr/YIQ colour encoding is performed to capture chromatic correlation. KPCA is applied to capture nonlinear motion. Further, the proposed algorithm is implemented on a Graphics Processing Unit (GPU), which comprises hundreds of small processors, to decrease time complexity and achieve parallelism.
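
    The hybrid transform step can be illustrated on a single frame: a one-level Haar wavelet transform separates the frame into sub-bands, and a DCT on the approximation band compacts most of the remaining energy into few coefficients. The sketch below is a toy stand-in for the full pipeline (no KPCA, colour coding, DTU decomposition, or GPU path), with a synthetic smooth frame and an arbitrary threshold.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    x = np.linspace(0.0, np.pi, 64)
    frame = np.outer(np.sin(x), np.cos(2.0 * x))   # smooth toy texture frame

    def haar2d(img):
        """One-level 2D Haar DWT: approximation band plus 3 detail bands."""
        a = (img[0::2, :] + img[1::2, :]) / 2.0
        d = (img[0::2, :] - img[1::2, :]) / 2.0
        ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
        lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
        hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
        hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
        return ll, lh, hl, hh

    ll, lh, hl, hh = haar2d(frame)
    coeffs = dctn(ll, norm='ortho')                # energy-compacting transform
    kept = np.abs(coeffs) > 0.01                   # keep only significant coefficients
    ll_rec = idctn(np.where(kept, coeffs, 0.0), norm='ortho')
    print("kept", int(kept.sum()), "of", coeffs.size, "DCT coefficients")
    print("reconstruction RMSE:", np.sqrt(np.mean((ll - ll_rec) ** 2)))
    ```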

  17. Bayesian hierarchical models for smoothing in two-phase studies, with application to small area estimation.

    PubMed

    Ross, Michelle; Wakefield, Jon

    2015-10-01

    Two-phase study designs are appealing since they allow for the oversampling of rare sub-populations, which improves efficiency. In this paper we describe a Bayesian hierarchical model for the analysis of two-phase data. Such a model is particularly appealing in a spatial setting in which random effects are introduced to model between-area variability. In such a situation, one may be interested in estimating regression coefficients or, in the context of small area estimation, in reconstructing the population totals by strata. The efficiency gains of the two-phase sampling scheme are compared to standard approaches using 2011 birth data from the Research Triangle area of North Carolina. We show that the proposed method can overcome small-sample difficulties and improve on existing techniques. We conclude that the two-phase design is an attractive approach for small area estimation.
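
    The shrinkage behaviour that makes such hierarchical models attractive for small areas can be sketched with a simple empirical-Bayes normal-normal example: each area's mean is pulled toward the overall mean in proportion to its sampling variance. The code below uses synthetic areas and crude moment-based variance estimates, not the paper's full two-phase Bayesian machinery.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_areas = 12
    true_area_means = rng.normal(10.0, 1.0, n_areas)
    sizes = rng.integers(5, 60, n_areas)               # uneven phase-two sample sizes
    ybar = np.array([rng.normal(m, 4.0, k).mean() for m, k in zip(true_area_means, sizes)])

    se2 = 4.0 ** 2 / sizes                             # sampling variance of each area mean
    tau2 = max(ybar.var(ddof=1) - se2.mean(), 0.01)    # crude between-area variance estimate
    mu = ybar.mean()
    shrink = tau2 / (tau2 + se2)                       # weight on the area's own data
    est = shrink * ybar + (1.0 - shrink) * mu          # shrunken small-area estimates

    def rmse(e):
        return np.sqrt(np.mean((e - true_area_means) ** 2))

    print("raw area means RMSE :", rmse(ybar))
    print("shrunken means RMSE :", rmse(est))
    ```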

  18. Can we get some cooperation around here? The mediating role of group norms on the relationship between team personality and individual helping behaviors.

    PubMed

    Gonzalez-Mulé, Erik; DeGeest, David S; McCormick, Brian W; Seong, Jee Young; Brown, Kenneth G

    2014-09-01

    Drawing on the group-norms theory of organizational citizenship behaviors and person-environment fit theory, we introduce and test a multilevel model of the effects of additive and dispersion composition models of team members' personality characteristics on group norms and individual helping behaviors. Our model was tested using regression and random coefficients modeling on 102 research and development teams. Results indicated that high mean levels of extraversion are positively related to individual helping behaviors through the mediating effect of cooperative group norms. Further, low variance on agreeableness (supplementary fit) and high variance on extraversion (complementary fit) promote the enactment of individual helping behaviors, but only the effects of extraversion were mediated by cooperative group norms. Implications of these findings for theories of helping behaviors in teams are discussed.
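
    A minimal sketch of the random coefficients setup, individuals nested in teams with a team-level predictor and a random intercept per team, is shown below using statsmodels' MixedLM on synthetic data; the variable names and effect sizes are invented for illustration, not taken from the study.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    rows = []
    for team in range(102):                        # 102 teams, as in the study
        team_extraversion = rng.normal(0.0, 1.0)   # team mean extraversion (standardized)
        team_effect = rng.normal(0.0, 0.5)         # random intercept for the team
        for _ in range(int(rng.integers(4, 9))):   # members per team
            helping = 0.4 * team_extraversion + team_effect + rng.normal(0.0, 1.0)
            rows.append((team, team_extraversion, helping))
    df = pd.DataFrame(rows, columns=["team", "extraversion_mean", "helping"])

    # Random intercept per team; the fixed effect estimates the team-level
    # extraversion-to-helping relationship.
    model = smf.mixedlm("helping ~ extraversion_mean", df, groups=df["team"])
    print(model.fit().summary())
    ```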

  19. Modelling students' knowledge organisation: Genealogical conceptual networks

    NASA Astrophysics Data System (ADS)

    Koponen, Ismo T.; Nousiainen, Maija

    2018-04-01

    Learning scientific knowledge is largely based on understanding what its key concepts are and how they are related. The relational structure of concepts also affects how concepts are introduced in teaching scientific knowledge. We model here how students organise their knowledge when they represent their understanding of how physics concepts are related. The model is based on the assumptions that students use simple basic linking motifs in introducing new concepts and mostly relate them to concepts that were introduced a few steps earlier, i.e. following a genealogical ordering. The resulting genealogical networks have relatively high local clustering coefficients of nodes but otherwise resemble networks obtained with an identical degree distribution of nodes but with random linking between them (i.e. the configuration model). However, a few key nodes with a special structural role emerge, and these nodes have higher-than-average communicability betweenness centralities. These features agree with the empirically found properties of students' concept networks.
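
    The comparison described above can be reproduced in miniature: grow a small network by attaching each new node to a recently added node and closing a triangle, then compare its average clustering with a configuration-model graph on the same degree sequence. The growth rule below is a toy stand-in for the authors' linking motifs.

    ```python
    import random
    import networkx as nx

    random.seed(7)
    G = nx.Graph([(0, 1)])
    for new in range(2, 200):
        target = random.choice(range(max(0, new - 5), new))   # a recently added concept
        G.add_edge(new, target)
        nbrs = [v for v in G.neighbors(target) if v != new]
        if nbrs:
            G.add_edge(new, random.choice(nbrs))              # triangle-closing motif

    degrees = [d for _, d in G.degree()]
    CM = nx.Graph(nx.configuration_model(degrees, seed=7))    # same degrees, random links
    CM.remove_edges_from(nx.selfloop_edges(CM))

    print("genealogical clustering :", nx.average_clustering(G))
    print("configuration clustering:", nx.average_clustering(CM))
    ```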

  20. Predicting enteric methane emission of dairy cows with milk Fourier-transform infrared spectra and gas chromatography-based milk fatty acid profiles.

    PubMed

    van Gastelen, S; Mollenhorst, H; Antunes-Fernandes, E C; Hettinga, K A; van Burgsteden, G G; Dijkstra, J; Rademaker, J L W

    2018-06-01

    The objective of the present study was to compare the prediction potential of milk Fourier-transform infrared spectroscopy (FTIR) for CH4 emissions of dairy cows with that of gas chromatography (GC)-based milk fatty acids (MFA). Data from 9 experiments with lactating Holstein-Friesian cows, with a total of 30 dietary treatments and 218 observations, were used. Methane emissions were measured for 3 consecutive days in climate respiration chambers and expressed as production (g/d), yield (g/kg of dry matter intake; DMI), and intensity (g/kg of fat- and protein-corrected milk; FPCM). Dry matter intake was 16.3 ± 2.18 kg/d (mean ± standard deviation), FPCM yield was 25.9 ± 5.06 kg/d, CH4 production was 366 ± 53.9 g/d, CH4 yield was 22.5 ± 2.10 g/kg of DMI, and CH4 intensity was 14.4 ± 2.58 g/kg of FPCM. Milk was sampled during the same days and analyzed by GC and by FTIR. Multivariate GC-determined MFA-based and FTIR-based CH4 prediction models were developed, and subsequently, the final CH4 prediction models were evaluated with root mean squared error of prediction and concordance correlation coefficient analysis. Further, we performed a random 10-fold cross-validation to calculate the performance parameters of the models (e.g., the coefficient of determination of cross-validation). The final GC-determined MFA-based CH4 prediction models estimate CH4 production, yield, and intensity with a root mean squared error of prediction of 35.7 g/d, 1.6 g/kg of DMI, and 1.6 g/kg of FPCM and with a concordance correlation coefficient of 0.72, 0.59, and 0.77, respectively. The final FTIR-based CH4 prediction models estimate CH4 production, yield, and intensity with a root mean squared error of prediction of 43.2 g/d, 1.9 g/kg of DMI, and 1.7 g/kg of FPCM and with a concordance correlation coefficient of 0.52, 0.40, and 0.72, respectively. The GC-determined MFA-based prediction models described a greater part of the observed variation in CH4 emission than did the FTIR-based models. The cross-validation results indicate that all CH4 prediction models (both GC-determined MFA-based and FTIR-based) are robust; the difference between the coefficient of determination and the coefficient of determination of cross-validation ranged from 0.01 to 0.07. The results indicate that GC-determined MFA have greater potential than FTIR spectra to estimate CH4 production, yield, and intensity. Both techniques hold potential but may not yet be ready to predict CH4 emission of dairy cows in practice. Additional CH4 measurements are needed to improve the accuracy and robustness of GC-determined MFA and FTIR spectra for CH4 prediction.
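
    The two headline metrics, root mean squared error of prediction and the concordance correlation coefficient, together with a random 10-fold cross-validation loop, can be sketched as below; the linear model and synthetic features stand in for the MFA- or FTIR-based calibrations.

    ```python
    import numpy as np

    def ccc(y, yhat):
        """Lin's concordance correlation coefficient."""
        my, mp = y.mean(), yhat.mean()
        cov = ((y - my) * (yhat - mp)).mean()
        return 2.0 * cov / (y.var() + yhat.var() + (my - mp) ** 2)

    rng = np.random.default_rng(8)
    n, p = 218, 10                                  # observation count mirrors the data set above
    X = rng.normal(size=(n, p))
    beta = rng.normal(size=p)
    y = 366.0 + 20.0 * (X @ beta) + rng.normal(0.0, 30.0, n)   # CH4-production-like scale (g/d)

    folds = np.array_split(rng.permutation(n), 10)  # random 10-fold split
    preds = np.empty(n)
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        A = np.column_stack([np.ones(train.size), X[train]])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        preds[test] = np.column_stack([np.ones(test.size), X[test]]) @ coef

    rmsep = np.sqrt(np.mean((y - preds) ** 2))
    print(f"RMSEP = {rmsep:.1f} g/d, CCC = {ccc(y, preds):.2f}")
    ```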
