NASA Astrophysics Data System (ADS)
Indarsih, Indrati, Ch. Rini
2016-02-01
In this paper, we define the variance of fuzzy random variables through alpha-levels. We prove a theorem showing that the variance of a fuzzy random variable is itself a fuzzy number. We then consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients and solve it by a variance approach. The approach transforms the MOLP problem with fuzzy random objective function coefficients into an MOLP problem with fuzzy objective function coefficients. By the weighting method, we obtain a linear programming problem with fuzzy coefficients, which we solve with a simplex method for fuzzy linear programming.
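A minimal numeric sketch of the alpha-level idea, assuming triangular fuzzy numbers and a crude endpoint rule for the interval-valued variance; the paper's actual variance operator, and the theorem that the stacked alpha-cuts form a fuzzy number, are not reproduced here:

```python
import numpy as np

def alpha_cut(tri, alpha):
    """Alpha-cut [lower, upper] of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return a + alpha * (b - a), c - alpha * (c - b)

def variance_cut(sample, alpha):
    """Interval-valued sample variance at one alpha level (crude endpoint rule)."""
    lows, ups = zip(*(alpha_cut(t, alpha) for t in sample))
    v_lo, v_up = np.var(lows, ddof=1), np.var(ups, ddof=1)
    return min(v_lo, v_up), max(v_lo, v_up)

# A fuzzy random sample: triangular numbers "around" random crisp centers.
rng = np.random.default_rng(0)
sample = [(b - 1.0, b, b + 1.0) for b in rng.normal(10.0, 2.0, 50)]

for alpha in (0.0, 0.5, 1.0):
    print(alpha, variance_cut(sample, alpha))  # collapses to a point at alpha=1
```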
A Structural Modeling Approach to a Multilevel Random Coefficients Model.
ERIC Educational Resources Information Center
Rovine, Michael J.; Molenaar, Peter C. M.
2000-01-01
Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)
ERIC Educational Resources Information Center
Bloom, Howard S.; Raudenbush, Stephen W.; Weiss, Michael J.; Porter, Kristin
2017-01-01
The present article considers a fundamental question in evaluation research: "By how much do program effects vary across sites?" The article first presents a theoretical model of cross-site impact variation and a related estimation model with a random treatment coefficient and fixed site-specific intercepts. This approach eliminates…
An instrumental variable random-coefficients model for binary outcomes
Chesher, Andrew; Rosen, Adam M
2014-01-01
In this paper, we study a random-coefficients model for a binary outcome. We allow for the possibility that some or even all of the explanatory variables are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent of the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalized instrumental variable models, and we thus apply identification results from our previous studies of such models to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration. PMID:25798048
Measuring multivariate association and beyond
Josse, Julie; Holmes, Susan
2017-01-01
Simple correlation coefficients between two variables have been generalized to measure association between two matrices in many ways. Coefficients such as the RV coefficient, the distance covariance (dCov) coefficient and kernel based coefficients are being used by different research communities. Scientists use these coefficients to test whether two random vectors are linked. Once it has been ascertained that there is such association through testing, then a next step, often ignored, is to explore and uncover the association’s underlying patterns. This article provides a survey of various measures of dependence between random vectors and tests of independence and emphasizes the connections and differences between the various approaches. After providing definitions of the coefficients and associated tests, we present the recent improvements that enhance their statistical properties and ease of interpretation. We summarize multi-table approaches and provide scenarios where the indices can provide useful summaries of heterogeneous multi-block data. We illustrate these different strategies on several examples of real data and suggest directions for future research. PMID:29081877
Spielman, Stephanie J; Wilke, Claus O
2016-11-01
The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development.
Random effects coefficient of determination for mixed and meta-analysis models
Demidenko, Eugene; Sargent, James; Onega, Tracy
2011-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, R_r^2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If R_r^2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of R_r^2 apart from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of random effects is very large and random effects turn into free fixed effects, and the model can be estimated using the dummy variable approach. We derive explicit formulas for R_r^2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine. PMID:23750070
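A small simulation sketch, assuming the paper's R_r^2 reduces in the random intercept special case to the familiar variance ratio below; the variance components are estimated with statsmodels' MixedLM:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_groups, n_per = 40, 10
g = np.repeat(np.arange(n_groups), n_per)
u = rng.normal(0.0, 1.0, n_groups)           # random intercepts, variance 1
x = rng.normal(size=n_groups * n_per)
y = 2.0 + 0.5 * x + u[g] + rng.normal(0.0, 1.0, size=g.size)  # residual var 1

df = pd.DataFrame({"y": y, "x": x, "g": g})
fit = smf.mixedlm("y ~ x", df, groups=df["g"]).fit()

var_u = float(fit.cov_re.iloc[0, 0])   # estimated random-effect variance
var_e = float(fit.scale)               # estimated residual variance
r2_r = var_u / (var_u + var_e)         # ~0.5 for this simulation
print(f"R_r^2 estimate: {r2_r:.2f}")
```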
Random effects coefficient of determination for mixed and meta-analysis models.
Demidenko, Eugene; Sargent, James; Onega, Tracy
2012-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, R_r^2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If R_r^2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of R_r^2 apart from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of random effects is very large and random effects turn into free fixed effects, and the model can be estimated using the dummy variable approach. We derive explicit formulas for R_r^2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine.
Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach
ERIC Educational Resources Information Center
Rotondi, Michael A.; Donner, Allan
2009-01-01
The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
Random attractor of non-autonomous stochastic Boussinesq lattice system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Min, E-mail: zhaomin1223@126.com; Zhou, Shengfan, E-mail: zhoushengfan@yahoo.com
2015-09-15
In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations affected by time-dependent coupling coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of the random attractors as the intensity of the noise approaches zero.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
2017-09-04
We present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan
2016-04-01
Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial.
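A sketch of the sub-clustering idea: split each large cluster into sub-clusters of 5, apply the one-way ANOVA estimator, and form a Wald interval. The variance approximation below is one large-sample form often attributed to Smith (1956) and should be checked against the original reference; the binary data generator is illustrative:

```python
import numpy as np

def anova_icc(y, cluster):
    """One-way ANOVA estimator of the ICC for equal-size clusters."""
    ids = np.unique(cluster)
    k = ids.size
    m = y.size // k
    means = np.array([y[cluster == i].mean() for i in ids])
    msb = m * np.sum((means - y.mean()) ** 2) / (k - 1)
    msw = sum(np.sum((y[cluster == i] - mu) ** 2)
              for i, mu in zip(ids, means)) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw), k, m

def wald_ci(rho, k, m, z=1.96):
    # Assumed large-sample variance approximation (verify against Smith 1956).
    var = 2 * (1 - rho) ** 2 * (1 + (m - 1) * rho) ** 2 / (k * m * (m - 1))
    return rho - z * np.sqrt(var), rho + z * np.sqrt(var)

# 8 large clusters of 200 binary outcomes with a small ICC (~0.03).
rng = np.random.default_rng(2)
b = rng.normal(0.0, 0.07, 8)                       # cluster-level effects
p = np.clip(0.2 + b, 0.01, 0.99)
y = (rng.random((8, 200)) < p[:, None]).astype(float).ravel()

sub = np.arange(y.size) // 5   # relabel into sub-clusters of 5 (200 % 5 == 0)
rho, k, m = anova_icc(y, sub)
print(rho, wald_ci(rho, k, m))
```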
Interaction between photons and leaf canopies
NASA Technical Reports Server (NTRS)
Knyazikhin, Yuri V.; Marshak, Alexander L.; Myneni, Ranga B.
1991-01-01
The physics of neutral-particle interaction for photons traveling in media consisting of finite-dimensional scattering centers that cross-shade mutually is investigated. A leaf canopy is a typical example of such media. The leaf canopy is idealized as a binary medium consisting of randomly distributed gaps (voids) and regions with phytoelements (turbid phytomedium). In this approach, the leaf canopy is represented by a combination of all possible open oriented spheres. The mathematical approach for characterizing the structure of the host medium is considered. The extinction coefficient at any phase-space location in a leaf canopy is the product of the extinction coefficient in the turbid phytomedium and the probability of the absence of gaps at that location. Using a similar approach, an expression for the differential scattering coefficient is derived.
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
Tests of Hypotheses Arising In the Correlated Random Coefficient Model*
Heckman, James J.; Schmierer, Daniel
2010-01-01
This paper examines the correlated random coefficient model. It extends the analysis of Swamy (1971), who pioneered the uncorrelated random coefficient model in economics. We develop the properties of the correlated random coefficient model and derive a new representation of the variance of the instrumental variable estimator for that model. We develop tests of the validity of the correlated random coefficient model against the null hypothesis of the uncorrelated random coefficient model. PMID:21170148
NASA Technical Reports Server (NTRS)
Odonnell, M.; Miller, J. G.
1981-01-01
The use of a broadband backscatter technique to obtain the frequency dependence of the longitudinal-wave ultrasonic backscatter coefficient from a collection of scatterers in a solid is investigated. Measurements of the backscatter coefficient were obtained from model systems consisting of dilute suspensions of randomly distributed crown glass spheres in hardened polyester resin, over the range 0.1 to 3.0 of the product of the ultrasonic wave-vector magnitude and the glass-sphere radius. The results of these measurements were in good agreement with theoretical prediction. Consequently, broadband measurements of the ultrasonic backscatter coefficient may represent a useful approach toward characterizing the physical properties of scatterers in intrinsically inhomogeneous materials such as composites, metals, and ceramics, and may represent an approach toward nondestructive evaluation of these materials.
NASA Astrophysics Data System (ADS)
Suciu, N.; Vamos, C.; Vereecken, H.; Vanderborght, J.; Hardelauf, H.
2003-04-01
When the small-scale transport is modeled by a Wiener process and the large-scale heterogeneity by a random velocity field, the effective coefficients, D_eff, can be decomposed as a sum of the local coefficient, D, a contribution of the random advection, D_adv, and a contribution of the randomness of the trajectory of the plume center of mass, D_cm: D_eff = D + D_adv - D_cm. The coefficient D_adv is similar to that introduced by Taylor in 1921, and more recent works associate it with thermodynamic equilibrium. The "ergodic hypothesis" says that over large time intervals D_cm vanishes and the effect of the heterogeneity is described by D_adv = D_eff - D. In this work we investigate numerically the long-time behavior of the effective coefficients as well as the validity of the ergodic hypothesis. The transport in every realization of the velocity field is modeled with the Global Random Walk Algorithm, which is able to track as many particles as necessary to achieve a statistically reliable simulation of the process. Averages over realizations are further used to estimate mean coefficients and standard deviations. In order to remain in the frame of most of the theoretical approaches, the velocity field was generated in a linear approximation and the logarithm of the hydraulic conductivity was taken to have an exponentially decaying correlation with variance equal to 0.1. Our results show that even in these idealized conditions, the effective coefficients tend to asymptotic constant values only when the plume travels thousands of correlation lengths (while the first-order theories usually predict Fickian behavior after tens of correlation lengths) and that the ergodicity conditions are still far from being met.
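A numerical check of the decomposition: by the law of total variance, the spreading about the ensemble mean splits into within-realization spreading plus center-of-mass fluctuations, i.e. D_eff = (D + D_adv) - D_cm. The sketch below uses a toy one-mode sinusoidal velocity per realization (an assumption standing in for the correlated log-conductivity fields of the study):

```python
import numpy as np

rng = np.random.default_rng(3)
D, dt, n_steps = 0.01, 0.01, 2000
n_real, n_part, L = 200, 500, 1.0
t = n_steps * dt

amp = rng.normal(0.0, 1.0, n_real)            # random amplitude per realization
phase = rng.uniform(0.0, 2.0 * np.pi, n_real)  # random phase per realization

x = np.zeros((n_real, n_part))
for _ in range(n_steps):
    v = amp[:, None] * np.sin(2.0 * np.pi * x / L + phase[:, None])
    x += v * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=x.shape)

d_eff = x.var(axis=1).mean() / (2.0 * t)   # within-realization spreading
d_cm = x.mean(axis=1).var() / (2.0 * t)    # center-of-mass fluctuations
d_ens = x.var() / (2.0 * t)                # spreading about the ensemble mean
print(d_eff, d_cm, d_ens, d_eff + d_cm)    # d_ens equals d_eff + d_cm
```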
Analyzing degradation data with a random effects spline regression model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip
2017-03-17
This study proposes using a random effects spline regression model to analyze degradation data. Spline regression avoids having to specify a parametric function for the true degradation of an item. A distribution for the spline regression coefficients captures the variation of the true degradation curves from item to item. We illustrate the proposed methodology with a real example using a Bayesian approach. The Bayesian approach allows prediction of the degradation of a population over time and makes estimation of reliability straightforward.
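A forward-simulation sketch of the model structure: a spline basis for the degradation trend, a distribution on the coefficients for item-to-item variation, and a population prediction band. The truncated-power basis, knots, and variances are illustrative assumptions; the paper fits this model with a Bayesian approach (e.g. MCMC), which is not reproduced here:

```python
import numpy as np

def spline_basis(t, knots):
    """Truncated-power linear spline basis: [1, t, (t - k)+ for each knot]."""
    cols = [np.ones_like(t), t] + [np.clip(t - k, 0.0, None) for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 25)
X = spline_basis(t, knots=(2.5, 5.0, 7.5))

beta = np.array([0.0, -0.5, -0.3, -0.2, -0.1])    # population-mean coefficients
Sigma = np.diag([0.05, 0.01, 0.01, 0.01, 0.01])   # item-to-item coefficient spread

coefs = rng.multivariate_normal(beta, Sigma, size=30)      # 30 items
curves = coefs @ X.T + rng.normal(0.0, 0.1, (30, t.size))  # noisy degradation data

# Population prediction band implied by the coefficient distribution.
lo, mid, hi = np.percentile(coefs @ X.T, [5, 50, 95], axis=0)
print(curves.shape, round(mid[-1], 2), (round(lo[-1], 2), round(hi[-1], 2)))
```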
Random walk study of electron motion in helium in crossed electromagnetic fields
NASA Technical Reports Server (NTRS)
Englert, G. W.
1972-01-01
Random walk theory, previously adapted to electron motion in the presence of an electric field, is extended to include a transverse magnetic field. In principle, the random walk approach avoids mathematical complexity and concomitant simplifying assumptions and permits determination of energy distributions and transport coefficients within the accuracy of available collisional cross section data. Application is made to a weakly ionized helium gas. Time of relaxation of electron energy distribution, determined by the random walk, is described by simple expressions based on energy exchange between the electron and an effective electric field. The restrictive effect of the magnetic field on electron motion, which increases the required number of collisions per walk to reach a terminal steady state condition, as well as the effect of the magnetic field on electron transport coefficients and mean energy can be quite adequately described by expressions involving only the Hall parameter.
Craig, Benjamin M; Busschbach, Jan JV
2009-01-01
Background: To present an episodic random utility model that unifies time trade-off and discrete choice approaches in health state valuation. Methods: First, we introduce two alternative random utility models (RUMs) for health preferences: the episodic RUM and the more common instant RUM. For the interpretation of time trade-off (TTO) responses, we show that the episodic model implies a coefficient estimator, and the instant model implies a mean slope estimator. Secondly, we demonstrate these estimators and the differences between the estimates for 42 health states using TTO responses from the seminal Measurement and Valuation in Health (MVH) study conducted in the United Kingdom. Mean slopes are estimated with and without Dolan's transformation of worse-than-death (WTD) responses. Finally, we demonstrate an exploded probit estimator, an extension of the coefficient estimator for discrete choice data that accommodates both TTO and rank responses. Results: By construction, mean slopes are less than or equal to coefficients, because slopes are fractions and, therefore, magnify downward errors in WTD responses. The Dolan transformation of WTD responses causes mean slopes to increase in similarity to coefficient estimates, yet they are not equivalent (i.e., absolute mean difference = 0.179). Unlike mean slopes, coefficient estimates demonstrate strong concordance with rank-based predictions (Lin's rho = 0.91). Combining TTO and rank responses under the exploded probit model improves the identification of health state values, decreasing the average width of confidence intervals from 0.057 to 0.041 compared to TTO-only results. Conclusion: The episodic RUM expands upon the theoretical framework underlying health state valuation and contributes to health econometrics by motivating the selection of coefficient and exploded probit estimators for the analysis of TTO and rank responses. In future MVH surveys, sample size requirements may be reduced through the incorporation of multiple responses under a single estimator. PMID:19144115
A data-driven wavelet-based approach for generating jumping loads
NASA Astrophysics Data System (ADS)
Chen, Jun; Li, Guo; Racic, Vitomir
2018-06-01
This paper suggests an approach to generate human jumping loads using the wavelet transform and a database of individual jumping force records. A total of 970 individual jumping force records of various frequencies were first collected in three experiments from 147 test subjects. For each record, every jumping pulse was extracted and decomposed into seven levels by the wavelet transform. All the decomposition coefficients were stored in an information database. Probability distributions of the jumping cycle period, contact ratio, and energy of the jumping pulse were statistically analyzed. Inspired by the theory of DNA recombination, an approach was developed by interchanging the wavelet coefficients between different jumping pulses. To generate a jumping force time history with N pulses, wavelet coefficients were first selected randomly from the database at each level. They were then used to reconstruct N pulses by the inverse wavelet transform. Jumping cycle periods and contact ratios were then generated randomly based on their probability functions. These parameters were assigned to each of the N pulses, which were in turn scaled by amplitude factors βi to account for the energy relationship between successive pulses. The final jumping force time history was obtained by linking all the N cycles end to end. This simulation approach preserves the non-stationary features of the jumping force in the time-frequency domain. Application indicates that this approach can be used to generate the jumping force time history due to a single person jumping and can be extended further to stochastic jumping loads due to groups and crowds.
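A sketch of the recombination step with PyWavelets, using synthetic half-sine pulses as a stand-in for the measured records; five decomposition levels (the paper uses seven) fit these short toy pulses, and the period/contact-ratio resampling and beta_i energy scaling are omitted:

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)

def pulse(n=512, peak=2.0):
    """Toy half-sine jumping pulse with mild random shape variation."""
    s = peak * np.sin(np.linspace(0.0, np.pi, n)) ** 1.5
    return s * (1.0 + 0.05 * rng.normal(size=n))

# "Coefficient database": wavelet decompositions of many recorded pulses.
level = 5
database = [pywt.wavedec(pulse(), "db4", level=level) for _ in range(50)]

def synthetic_pulse():
    """DNA-recombination idea: draw each level's coefficients from a random donor."""
    donors = rng.integers(0, len(database), size=level + 1)
    coeffs = [database[d][i] for i, d in enumerate(donors)]
    return pywt.waverec(coeffs, "db4")

# Link N reconstructed pulses end to end to form a force time history.
force = np.concatenate([synthetic_pulse() for _ in range(10)])
print(force.shape, round(float(force.max()), 2))
```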
Recovering DC coefficients in block-based DCT.
Uehara, Takeyuki; Safavi-Naini, Reihaneh; Ogunbona, Philip
2006-11-01
It is a common approach for JPEG and MPEG encryption systems to provide higher protection for dc coefficients and less protection for ac coefficients. Some authors have employed a cryptographic encryption algorithm for the dc coefficients and left the ac coefficients to techniques based on random permutation lists which are known to be weak against known-plaintext and chosen-ciphertext attacks. In this paper we show that in block-based DCT, it is possible to recover dc coefficients from ac coefficients with reasonable image quality and show the insecurity of image encryption methods which rely on the encryption of dc values using a cryptoalgorithm. The method proposed in this paper combines dc recovery from ac coefficients and the fact that ac coefficients can be recovered using a chosen ciphertext attack. We demonstrate that a method proposed by Tang to encrypt and decrypt MPEG video can be completely broken.
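The underlying idea is that inter-block smoothness pins down the missing DC terms. The sketch below is a simple boundary-matching heuristic, not the paper's exact recovery procedure; the first block's mean is assumed known so the result can be compared to the original (recovery is otherwise defined only up to a global constant):

```python
import numpy as np
from scipy.fft import dctn, idctn

def blocks_from(img, b=8):
    """Split an image into a (rows, cols, b, b) array of b-by-b blocks."""
    h, w = img.shape
    return img.reshape(h // b, b, w // b, b).swapaxes(1, 2)

# Smooth synthetic image: block-to-block continuity is what DC recovery exploits.
yy, xx = np.mgrid[0:64, 0:64]
img = 128.0 + 40.0 * np.sin(xx / 9.0) * np.cos(yy / 11.0)

B = blocks_from(img)
coeffs = dctn(B, axes=(2, 3), norm="ortho")
coeffs[:, :, 0, 0] = 0.0                      # drop every DC coefficient

rec = np.zeros(B.shape)
for i in range(B.shape[0]):
    for j in range(B.shape[1]):
        pix = idctn(coeffs[i, j], norm="ortho")   # zero-mean block from AC only
        refs = []
        if j > 0:   # match the shared vertical boundary with the left block
            refs.append(rec[i, j - 1][:, -1].mean() - pix[:, 0].mean())
        if i > 0:   # match the shared horizontal boundary with the top block
            refs.append(rec[i - 1, j][-1, :].mean() - pix[0, :].mean())
        # Anchor block (0, 0) with its true mean, assumed known for comparison.
        rec[i, j] = pix + (np.mean(refs) if refs else B[0, 0].mean())

print("mean abs error:", np.abs(rec - B).mean())
```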
NASA Astrophysics Data System (ADS)
Yuste, S. B.; Abad, E.; Baumgaertner, A.
2016-07-01
We address the problem of diffusion on a comb whose teeth display varying lengths. Specifically, the length ℓ of each tooth is drawn from a probability distribution displaying power-law behavior at large ℓ, P(ℓ) ∼ ℓ^{-(1+α)} (α > 0). To start with, we focus on the computation of the anomalous diffusion coefficient for the subdiffusive motion along the backbone. This quantity is subsequently used as an input to compute concentration recovery curves mimicking fluorescence recovery after photobleaching experiments in comblike geometries such as spiny dendrites. Our method is based on the mean-field description provided by the well-tested continuous-time random-walk approach for the random-comb model, and the obtained analytical result for the diffusion coefficient is confirmed by numerical simulations of a random walk with finite steps in time and space along the backbone and the teeth. We subsequently incorporate retardation effects arising from binding-unbinding kinetics into our model and obtain a scaling law characterizing the corresponding change in the diffusion coefficient. Finally, we show that recovery curves obtained with the help of the analytical expression for the anomalous diffusion coefficient cannot be fitted perfectly by a model based on scaled Brownian motion, i.e., a standard diffusion equation with a time-dependent diffusion coefficient. However, differences between the exact curves and such fits are small, thereby providing justification for the practical use of models relying on scaled Brownian motion as a fitting procedure for recovery curves arising from particle diffusion in comblike systems.
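A toy random-comb simulation of the backbone subdiffusion, not the paper's CTRW analytics. The stepping probabilities (1/4 left, 1/4 right, 1/2 into the tooth), reflecting tooth tips, and Pareto-tailed tooth lengths are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, n_teeth, n_walkers, n_steps = 0.5, 2001, 400, 20000
# Tooth lengths with a power-law tail, P(l) ~ l^-(1+alpha).
lengths = np.maximum(1, rng.pareto(alpha, n_teeth).astype(int))

x = np.full(n_walkers, n_teeth // 2)    # backbone (tooth index) coordinate
y = np.zeros(n_walkers, dtype=int)      # coordinate along the current tooth
times = np.unique(np.geomspace(10, n_steps, 30).astype(int))
msd = []

for step in range(1, n_steps + 1):
    on_bb = y == 0
    r = rng.random(n_walkers)
    # On the backbone: 1/4 step left, 1/4 right, 1/2 step into the tooth.
    x = np.where(on_bb & (r < 0.25), x - 1, x)
    x = np.where(on_bb & (r >= 0.25) & (r < 0.5), x + 1, x)
    x = np.clip(x, 0, n_teeth - 1)
    y = np.where(on_bb & (r >= 0.5), y + 1, y)
    # Inside a tooth: unbiased +/-1 moves, reflected at the tip.
    dy = np.where(rng.random(n_walkers) < 0.5, -1, 1)
    y = np.where(~on_bb, np.clip(y + dy, 0, lengths[x]), y)
    if step in times:
        msd.append(((x - n_teeth // 2) ** 2).mean())

gamma = np.polyfit(np.log(times), np.log(msd), 1)[0]
print(f"MSD exponent along the backbone: {gamma:.2f} (subdiffusive, below 1)")
```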
ERIC Educational Resources Information Center
Choi, Kilchan; Seltzer, Michael
2010-01-01
In studies of change in education and numerous other fields, interest often centers on how differences in the status of individuals at the start of a period of substantive interest relate to differences in subsequent change. In this article, the authors present a fully Bayesian approach to estimating three-level Hierarchical Models in which latent…
Long-range correlations and charge transport properties of DNA sequences
NASA Astrophysics Data System (ADS)
Liu, Xiao-liang; Ren, Yi; Xie, Qiong-tao; Deng, Chao-sheng; Xu, Hui
2010-04-01
By using Hurst's analysis and a transfer matrix approach, the rescaled-range functions and Hurst exponents of human chromosome 22 and enterobacteria phage lambda DNA sequences are investigated, and the transmission coefficients, Landauer resistances and Lyapunov coefficients of finite segments based on the above genomic DNA sequences are calculated. In a comparison with quasiperiodic and random artificial DNA sequences, we find that λ-DNA exhibits anticorrelation behavior characterized by a Hurst exponent below 0.5.
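The rescaled-range (R/S) step is straightforward to reproduce. A sketch, assuming the usual purine/pyrimidine ±1 mapping of bases (the paper's exact mapping may differ); note the classic R/S estimator is slightly biased upward at small window sizes:

```python
import numpy as np

def hurst_rs(increments, window_sizes):
    """Rescaled-range (R/S) estimate of the Hurst exponent of an increment series."""
    seq = np.asarray(increments, float)
    rs = []
    for n in window_sizes:
        chunks = seq[: len(seq) // n * n].reshape(-1, n)
        dev = chunks - chunks.mean(axis=1, keepdims=True)
        z = np.cumsum(dev, axis=1)                 # profile within each window
        r = z.max(axis=1) - z.min(axis=1)          # range
        s = chunks.std(axis=1)                     # standard deviation
        ok = s > 0
        rs.append((r[ok] / s[ok]).mean())
    return np.polyfit(np.log(window_sizes), np.log(rs), 1)[0]

# Map a random "DNA sequence" to a numeric purine/pyrimidine walk.
rng = np.random.default_rng(8)
bases = rng.choice(list("ACGT"), size=20000)
walk = np.where(np.isin(bases, ["A", "G"]), 1.0, -1.0)
print(hurst_rs(walk, [16, 32, 64, 128, 256, 512]))  # near 0.5 for random bases
```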
Single-image super-resolution based on Markov random field and contourlet transform
NASA Astrophysics Data System (ADS)
Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai
2011-04-01
Learning-based methods are well adopted in image super-resolution. In this paper, we propose a new learning-based approach using contourlet transform and Markov random field. The proposed algorithm employs contourlet transform rather than the conventional wavelet to represent image features and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated with the experiments on facial, vehicle plate, and real scene images. A better visual quality is achieved in terms of peak signal to noise ratio and the image structural similarity measurement.
NASA Technical Reports Server (NTRS)
Kushner, H. J.
1972-01-01
The field of stochastic stability is surveyed, with emphasis on the invariance theorems and their potential application to systems with randomly varying coefficients. Some of the basic ideas underlying the stochastic Liapunov function approach to stochastic stability are reviewed. The invariance theorems are discussed in detail.
Photon diffusion coefficient in scattering and absorbing media.
Pierrat, Romain; Greffet, Jean-Jacques; Carminati, Rémi
2006-05-01
We present a unified derivation of the photon diffusion coefficient for both steady-state and time-dependent transport in disordered absorbing media. The derivation is based on a modal analysis of the time-dependent radiative transfer equation. This approach confirms that the dynamic diffusion coefficient is given by the random-walk result D = c l*/3, where l* is the transport mean free path and c is the energy velocity, independent of the level of absorption. It also shows that the diffusion coefficient for steady-state transport, often used in biomedical optics, depends on absorption, in agreement with recent theoretical and experimental works. These two results resolve a recurrent controversy in light propagation and imaging in scattering media.
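A numerical illustration of the distinction, comparing the absorption-independent dynamic form (D = c l*/3, here divided by c to express it as a length) with the classic absorption-dependent steady-state form; the optical coefficients are tissue-like placeholder values:

```python
# Dynamic vs. steady-state photon "diffusion lengths" (mm) in a tissue-like medium.
mus_p = 1.0                  # reduced scattering coefficient, 1/mm
mua = 0.05                   # absorption coefficient, 1/mm

l_star = 1.0 / mus_p                      # transport mean free path
D_dynamic = l_star / 3.0                  # from D = c l*/3, divided by c
D_steady = 1.0 / (3.0 * (mua + mus_p))    # classic absorption-dependent form
print(D_dynamic, D_steady)                # 0.333 vs. 0.317: absorption matters
```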
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard; Felus, Yaron A.
2008-06-01
The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y - E_Y = (X - E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach, where only the observation matrix, Y, is perturbed by random errors, and, on the other hand, the data least-squares approach, where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
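A sketch of the standard SVD-based ('closed form') solution of the multivariate TLS problem, assuming equal error variances in X and Y and omitting the paper's refinements (column fixing, data scaling, variance-component estimation):

```python
import numpy as np

def mtls(X, Y):
    """Multivariate TLS via SVD for Y - E_Y = (X - E_X) @ Xi (equal error variances)."""
    p = X.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([X, Y]), full_matrices=False)
    V = Vt.T
    V12, V22 = V[:p, p:], V[p:, p:]   # partition of the right singular vectors
    return -V12 @ np.linalg.inv(V22)

rng = np.random.default_rng(9)
Xi_true = np.array([[1.0, 0.5], [-0.3, 2.0], [0.7, -1.2]])
X0 = rng.normal(size=(500, 3))
X = X0 + 0.05 * rng.normal(size=X0.shape)              # errors in X
Y = X0 @ Xi_true + 0.05 * rng.normal(size=(500, 2))    # errors in Y
print(np.round(mtls(X, Y), 2))                         # recovers Xi_true
```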
Stochastic uncertainty analysis for unconfined flow systems
Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming
2006-01-01
A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field (ln K_S) is first expanded into a series in terms of orthogonal Gaussian standard random variables, with coefficients obtained from the eigenvalues and eigenfunctions of the covariance function of ln K_S. Next, the head h is decomposed as a perturbation expansion series h = Σ h^(m), where h^(m) represents the mth-order head term with respect to the standard deviation of ln K_S. Then h^(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients h^(m)_{i1,i2,...,im} are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on h^(m)_{i1,i2,...,im}. A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort as compared to the traditional Monte Carlo simulation technique.
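A sketch of only the first step, the discrete Karhunen-Loeve expansion of ln K_S; the grid, variance, and correlation length are illustrative placeholders, and the perturbation hierarchy and MODFLOW-2000 solves are not reproduced:

```python
import numpy as np

# Discrete KL expansion of ln K_S on a 1-D grid with exponential covariance.
n, var, corr_len = 200, 0.5, 0.1
s = np.linspace(0.0, 1.0, n)
C = var * np.exp(-np.abs(s[:, None] - s[None, :]) / corr_len)

vals, vecs = np.linalg.eigh(C)             # ascending eigenpairs
vals, vecs = vals[::-1], vecs[:, ::-1]     # reorder to descending

m = int(np.searchsorted(np.cumsum(vals) / vals.sum(), 0.95)) + 1
xi = np.random.default_rng(10).standard_normal(m)   # orthogonal Gaussian variables
ln_K = (vecs[:, :m] * np.sqrt(vals[:m])) @ xi       # one truncated realization
print(m, "KL terms retain 95% of the variance; field std ~", round(float(ln_K.std()), 2))
```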
Revisiting crash spatial heterogeneity: A Bayesian spatially varying coefficients approach.
Xu, Pengpeng; Huang, Helai; Dong, Ni; Wong, S C
2017-01-01
This study was performed to investigate the spatially varying relationships between crash frequency and related risk factors. A Bayesian spatially varying coefficients model was introduced as a methodological alternative to simultaneously account for the unstructured and spatially structured heterogeneity of the regression coefficients in predicting crash frequencies. The proposed method was appealing in that the parameters were modeled via a conditional autoregressive prior distribution, which involved a single set of random effects and a spatial correlation parameter with extreme values corresponding to pure unstructured or pure spatially correlated random effects. A case study using a three-year crash dataset from Hillsborough County, Florida, was conducted to illustrate the proposed model. Empirical analysis confirmed the presence of both unstructured and spatially correlated variations in the effects of contributory factors on severe crash occurrences. The findings also suggested that ignoring spatially structured heterogeneity may result in biased parameter estimates and incorrect inferences, while assuming the regression coefficients to be spatially clustered only is probably subject to the issue of over-smoothness.
NASA Astrophysics Data System (ADS)
Jude Hemanth, Duraisamy; Umamaheswari, Subramaniyan; Popescu, Daniela Elena; Naaji, Antoanela
2016-01-01
Image steganography is one of the ever-growing computational approaches and has found application in many fields. Frequency-domain techniques are highly preferred for image steganography applications. However, there are significant drawbacks associated with these techniques. In transform-based approaches, the secret data are embedded in a random manner in the transform coefficients of the cover image. These transform coefficients may not be optimal in terms of stego image quality and embedding capacity. In this work, the application of Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) has been explored in the context of determining the optimal coefficients in these transforms. Frequency-domain transforms such as the Bandelet Transform (BT) and the Finite Ridgelet Transform (FRIT) are used in combination with GA and PSO to improve the efficiency of the image steganography system.
Neither fixed nor random: weighted least squares meta-regression.
Stanley, T D; Doucouliagos, Hristos
2017-03-01
Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis, and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
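A sketch of the unrestricted WLS-MRA point estimator under standard inverse-variance weights; the effect sizes, standard errors, and moderator are simulated, and statsmodels' WLS estimates the multiplicative dispersion rather than restricting it to 1 (the "unrestricted" feature):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
k = 60
se = rng.uniform(0.05, 0.4, k)               # reported standard errors
x = rng.normal(size=k)                       # a study-level moderator
effect = 0.3 + 0.2 * x + rng.normal(0, se)   # true coefficients: 0.3 and 0.2

X = sm.add_constant(x)
wls = sm.WLS(effect, X, weights=1.0 / se**2).fit()
print(wls.params)   # WLS-MRA point estimates of the meta-regression coefficients
print(wls.bse)      # SEs scaled by the estimated multiplicative dispersion
```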
NASA Technical Reports Server (NTRS)
Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong
2011-01-01
Optical sensors aboard Earth-orbiting satellites such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
Individualizing drug dosage with longitudinal data.
Zhu, Xiaolu; Qu, Annie
2016-10-30
We propose a two-step procedure to personalize drug dosage over time under the framework of a log-linear mixed-effect model. We model patients' heterogeneity using subject-specific random effects, which are treated as the realizations of an unspecified stochastic process. We extend the conditional quadratic inference function to estimate both fixed-effect coefficients and individual random effects on a longitudinal training data sample in the first step and propose an adaptive procedure to estimate new patients' random effects and provide dosage recommendations for new patients in the second step. An advantage of our approach is that we do not impose any distribution assumption on estimating random effects. Moreover, the new approach can accommodate more general time-varying covariates corresponding to random effects. We show in theory and numerical studies that the proposed method is more efficient compared with existing approaches, especially when covariates are time varying. In addition, a real data example of a clozapine study confirms that our two-step procedure leads to more accurate drug dosage recommendations. Copyright © 2016 John Wiley & Sons, Ltd.
Least-squares Minimization Approaches to Interpret Total Magnetic Anomalies Due to Spheres
NASA Astrophysics Data System (ADS)
Abdelrahman, E. M.; El-Araby, T. M.; Soliman, K. S.; Essa, K. S.; Abo-Ezz, E. R.
2007-05-01
We have developed three different least-squares approaches to determine successively: the depth, magnetic angle, and amplitude coefficient of a buried sphere from a total magnetic anomaly. By defining the anomaly value at the origin and the nearest zero-anomaly distance from the origin on the profile, the problem of depth determination is transformed into the problem of finding a solution of a nonlinear equation of the form f(z)=0. Knowing the depth and applying the least-squares method, the magnetic angle and amplitude coefficient are determined using two simple linear equations. In this way, the depth, magnetic angle, and amplitude coefficient are determined individually from all observed total magnetic data. The method is applied to synthetic examples with and without random errors and tested on a field example from Senegal, West Africa. In all cases, the depth solutions are in good agreement with the actual ones.
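The depth step reduces to a one-dimensional root-finding problem. The sketch below shows only that numerical step with a hypothetical stand-in for f(z); in the paper, f is constructed from the anomaly value at the origin and the nearest zero-anomaly distance, and its actual form differs:

```python
from scipy.optimize import brentq

# Hypothetical shape function standing in for the paper's f(z); x0 and ratio
# play the role of the zero-crossing distance and the origin-anomaly datum.
x0, ratio = 4.0, 2.5

def f(z):
    return (z**2 + x0**2) ** 1.5 / z**3 - ratio

depth = brentq(f, 0.5, 50.0)   # bracket the root of f(z) = 0, then solve
print(round(depth, 2))
# With z fixed, the magnetic angle and amplitude coefficient then follow from
# two simple linear least-squares equations, per the abstract.
```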
Development of an Uncertainty Model for the National Transonic Facility
NASA Technical Reports Server (NTRS)
Walter, Joel A.; Lawrence, William R.; Elder, David W.; Treece, Michael D.
2010-01-01
This paper introduces an uncertainty model being developed for the National Transonic Facility (NTF). The model uses a Monte Carlo technique to propagate standard uncertainties of measured values through the NTF data reduction equations to calculate the combined uncertainties of the key aerodynamic force and moment coefficients and freestream properties. The uncertainty propagation approach to assessing data variability is compared with ongoing data quality assessment activities at the NTF, notably check standard testing using statistical process control (SPC) techniques. It is shown that the two approaches are complementary and both are necessary tools for data quality assessment and improvement activities. The SPC approach is the final arbiter of variability in a facility. Its result encompasses variation due to people, processes, test equipment, and test article. The uncertainty propagation approach is limited mainly to the data reduction process. However, it is useful because it helps to assess the causes of variability seen in the data and consequently provides a basis for improvement. For example, it is shown that Mach number random uncertainty is dominated by static pressure variation over most of the dynamic pressure range tested. However, the random uncertainty in the drag coefficient is generally dominated by axial and normal force uncertainty with much less contribution from freestream conditions.
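A minimal Monte Carlo propagation sketch in the spirit of the paper's approach: perturb measured pressures by their standard uncertainties and push the samples through a data reduction equation (here the standard isentropic Mach relation). The numbers are illustrative assumptions, not the NTF's actual equations or uncertainty budget:

```python
import numpy as np

rng = np.random.default_rng(12)
n, gamma = 100_000, 1.4

# Hypothetical measured pressures (psf) with illustrative standard uncertainties.
p0 = rng.normal(2000.0, 2.0, n)    # total pressure
p = rng.normal(1400.0, 2.0, n)     # static pressure

# Data-reduction step: isentropic relation for freestream Mach number.
mach = np.sqrt(2.0 / (gamma - 1.0) * ((p0 / p) ** ((gamma - 1.0) / gamma) - 1.0))
print(mach.mean(), mach.std())     # propagated combined standard uncertainty
```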
Migration of lymphocytes on fibronectin-coated surfaces: temporal evolution of migratory parameters
NASA Technical Reports Server (NTRS)
Bergman, A. J.; Zygourakis, K.; McIntire, L. V. (Principal Investigator)
1999-01-01
Lymphocytes typically interact with implanted biomaterials through adsorbed exogenous proteins. To provide a more complete characterization of these interactions, analysis of lymphocyte migration on adsorbed extracellular matrix proteins must accompany the commonly performed adhesion studies. We report here a comparison of the migratory and adhesion behavior of Jurkat cells (a T lymphoblastoid cell line) on tissue culture treated and untreated polystyrene surfaces coated with various concentrations of fibronectin. The average speed of cell locomotion showed a biphasic response to substrate adhesiveness for cells migrating on untreated polystyrene and a monotonic decrease for cells migrating on tissue culture-treated polystyrene. A modified approach to the persistent random walk model was implemented to determine the time dependence of cell migration parameters. The random motility coefficient showed significant increases with time when cells migrated on tissue culture-treated polystyrene surfaces, while it remained relatively constant for experiments with untreated polystyrene plates. Finally, a cell migration computer model was developed to verify our modified persistent random walk analysis. Simulation results suggest that our experimental data were consistent with temporally increasing random motility coefficients.
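A sketch of the persistent-random-walk fit that yields speed, persistence time, and the random motility coefficient, using the classic Furth/Dunn mean-squared-displacement form on simulated tracking data; refitting in sliding time windows would give the temporal evolution the paper reports:

```python
import numpy as np
from scipy.optimize import curve_fit

def prw_msd(t, S, P):
    """Persistent-random-walk mean-squared displacement (Furth/Dunn form)."""
    return 2.0 * S**2 * P * (t - P * (1.0 - np.exp(-t / P)))

rng = np.random.default_rng(13)
t = np.arange(2.0, 122.0, 2.0)    # minutes
msd = prw_msd(t, S=2.0, P=10.0) * (1.0 + 0.05 * rng.normal(size=t.size))

(S_hat, P_hat), _ = curve_fit(prw_msd, t, msd, p0=(1.0, 5.0))
mu = S_hat**2 * P_hat / 2.0       # 2-D random motility coefficient
print(S_hat, P_hat, mu)
```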
Genetic parameters for stayability to consecutive calvings in Zebu cattle.
Silva, D O; Santana, M L; Ayres, D R; Menezes, G R O; Silva, L O C; Nobre, P R C; Pereira, R J
2017-12-22
Longer-lived cows tend to be more profitable and the stayability trait is a selection criterion correlated to longevity. An alternative to the traditional approach to evaluate stayability is its definition based on consecutive calvings, whose main advantage is the more accurate evaluation of young bulls. However, no study using this alternative approach has been conducted for Zebu breeds. Therefore, the objective of this study was to compare linear random regression models to fit stayability to consecutive calvings of Guzerá, Nelore and Tabapuã cows and to estimate genetic parameters for this trait in the respective breeds. Data up to the eighth calving were used. The models included the fixed effects of age at first calving and year-season of birth of the cow and the random effects of contemporary group, additive genetic, permanent environmental and residual. Random regressions were modeled by orthogonal Legendre polynomials of order 1 to 4 (2 to 5 coefficients) for contemporary group, additive genetic and permanent environmental effects. Using Deviance Information Criterion as the selection criterion, the model with 4 regression coefficients for each effect was the most adequate for the Nelore and Tabapuã breeds and the model with 5 coefficients is recommended for the Guzerá breed. For Guzerá, heritabilities ranged from 0.05 to 0.08, showing a quadratic trend with a peak between the fourth and sixth calving. For the Nelore and Tabapuã breeds, the estimates ranged from 0.03 to 0.07 and from 0.03 to 0.08, respectively, and increased with increasing calving number. The additive genetic correlations exhibited a similar trend among breeds and were higher for stayability between closer calvings. Even between more distant calvings (second v. eighth), stayability showed a moderate to high genetic correlation, which was 0.77, 0.57 and 0.79 for the Guzerá, Nelore and Tabapuã breeds, respectively. For Guzerá, when the models with 4 or 5 regression coefficients were compared, the rank correlations between predicted breeding values for the intercept were always higher than 0.99, indicating the possibility of practical application of the least parameterized model. In conclusion, the model with 4 random regression coefficients is recommended for the genetic evaluation of stayability to consecutive calvings in Zebu cattle.
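A sketch of the random-regression design for the recommended 4-coefficient model: the calving number is standardized to [-1, 1] and expanded in Legendre polynomials of order 0 to 3, and an animal's coefficient vector maps to its effect at each calving. The coefficient values are toy numbers:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

calving = np.arange(2, 9)    # stayability to calvings 2..8
x = 2.0 * (calving - calving.min()) / (calving.max() - calving.min()) - 1.0

# Design matrix with 4 random-regression coefficients (Legendre order 0..3),
# the order selected for the Nelore and Tabapua breeds.
Phi = np.column_stack([Legendre.basis(j)(x) for j in range(4)])

a = np.array([0.10, 0.03, -0.02, 0.01])   # one animal's coefficient vector (toy)
print(np.round(Phi @ a, 3))               # its genetic effect at each calving
```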
Evolution of the concentration PDF in random environments modeled by global random walk
NASA Astrophysics Data System (ADS)
Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter
2013-04-01
The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and speeds up the computation by orders of magnitude. The approach is illustrated for the transport of passive scalars in heterogeneous aquifers, with hydraulic conductivity modeled as a random field.
Loskutov, V V; Sevriugin, V A
2013-05-01
This article presents a new approximation describing fluid diffusion in porous media. The time dependence of the self-diffusion coefficient D(t) in a permeable porous medium is studied based on the assumption that diffusant molecules move randomly. An analytical expression for the time dependence of the self-diffusion coefficient was obtained in the following form: D(t) = (D_0 - D_∞) exp(-D_0 t / λ) + D_∞, where D_0 is the self-diffusion coefficient of the bulk fluid, D_∞ is the asymptotic value of the self-diffusion coefficient in the limit of long times (t → ∞), and λ is a characteristic parameter of the porous medium with dimensionality of length. Applicability of the obtained solution to the analysis of experimental data is shown. The possibility of passing to short-time and long-time regimes is discussed.
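A sketch of fitting the stated D(t) form to diffusion-time data with a standard nonlinear least-squares routine; the data are synthetic and the units are simplified (D in 10^-9 m^2/s, with λ absorbing the remaining dimensions), so this only illustrates the functional form:

```python
import numpy as np
from scipy.optimize import curve_fit

def d_model(t, d0, dinf, lam):
    """D(t) = (D0 - Dinf) exp(-D0 t / lambda) + Dinf (toy units)."""
    return (d0 - dinf) * np.exp(-d0 * t / lam) + dinf

# Toy diffusion-time series with 3% multiplicative noise.
rng = np.random.default_rng(14)
t = np.linspace(0.001, 0.5, 25)   # s
obs = d_model(t, 2.3, 0.6, 1.5) * (1 + 0.03 * rng.normal(size=t.size))

popt, _ = curve_fit(d_model, t, obs, p0=(2.0, 1.0, 1.0))
print("D0, Dinf, lambda:", np.round(popt, 2))
```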
Extended Mixed-Effects Item Response Models with the MH-RM Algorithm
ERIC Educational Resources Information Center
Chalmers, R. Philip
2015-01-01
A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…
A new approach for beam hardening correction based on the local spectrum distributions
NASA Astrophysics Data System (ADS)
Rasoulpour, Naser; Kamali-Asl, Alireza; Hemmati, Hamidreza
2015-09-01
The energy dependence of material absorption and the polychromatic nature of x-ray beams in Computed Tomography (CT) cause a phenomenon called "beam hardening". The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction. This approach is based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths of a phantom. The proposed method includes two steps. Firstly, the hardened spectra at various depths of the phantom (the LSDs) are estimated based on the Expectation Maximization (EM) algorithm for an arbitrary thickness interval of known materials in the phantom. The performance of the LSD estimation technique is evaluated by applying random Gaussian noise to the transmission data. Then, the linear attenuation coefficients with regard to the mean energy of the LSDs are obtained. Secondly, a correction function based on the calculated attenuation coefficients is derived in order to correct the polychromatic raw data. Since a correction function is used to convert the polychromatic data to monochromatic data, the effect of BH in the proposed reconstruction is reduced in comparison with the polychromatic reconstruction. The proposed approach has been assessed in phantoms that involve no more than two materials, and the correction function has been extended for use in phantoms constructed with more than two materials. The relative mean energy difference in the LSD estimations based on the noise-free transmission data was less than 1.5%. It also shows an acceptable value when random Gaussian noise is applied to the transmission data. The cupping artifact is effectively reduced by the proposed reconstruction method, and the proposed reconstruction profile is more uniform than the polychromatic reconstruction profile.
Application of random effects to the study of resource selection by animals.
Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L
2006-07-01
1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.
A new neural network model for solving random interval linear programming problems.
Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza
2017-05-01
This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gottfredson, Nisha C; Sterba, Sonya K; Jackson, Kristina M
2017-01-01
Random coefficient-dependent (RCD) missingness is a non-ignorable mechanism through which missing data can arise in longitudinal designs. RCD, which cannot be tested for, is a problematic form of missingness that occurs if subject-specific random effects correlate with the propensity for missingness or dropout. Particularly when covariate missingness is a problem, investigators typically handle missing longitudinal data by using single-level multiple imputation procedures implemented either with long-format data, which ignores within-person dependency entirely, or with wide-format (i.e., multivariate) data, which ignores some aspects of within-person dependency. When either of these standard approaches to handling missing longitudinal data is used, RCD missingness leads to parameter bias and incorrect inference. We explain why multilevel multiple imputation (MMI) should alleviate bias induced by an RCD missing data mechanism under conditions that contribute to stronger determinacy of random coefficients. We evaluate our hypothesis with a simulation study. Three design factors are considered: intraclass correlation (ICC; ranging from .25 to .75), number of waves (ranging from 4 to 8), and percent of missing data (ranging from 20 to 50%). We find that MMI greatly outperforms the single-level wide-format (multivariate) method for imputation under an RCD mechanism. For the MMI analyses, bias was most alleviated when the ICC was high, when there were more waves of data, and when there was less missing data. Practical recommendations for handling longitudinal missing data are suggested.
Dissecting random and systematic differences between noisy composite data sets.
Diederichs, Kay
2017-04-01
Composite data sets measured on different objects are usually affected by random errors, but may also be influenced by systematic (genuine) differences in the objects themselves, or the experimental conditions. If the individual measurements forming each data set are quantitative and approximately normally distributed, a correlation coefficient is often used to compare data sets. However, the relations between data sets are not obvious from the matrix of pairwise correlations since the numerical value of the correlation coefficient is lowered by both random and systematic differences between the data sets. This work presents a multidimensional scaling analysis of the pairwise correlation coefficients which places data sets into a unit sphere within low-dimensional space, at a position given by their CC* values [as defined by Karplus & Diederichs (2012), Science, 336, 1030-1033] in the radial direction and by their systematic differences in one or more angular directions. This dimensionality reduction can not only be used for classification purposes, but also to derive data-set relations on a continuous scale. Projecting the arrangement of data sets onto the subspace spanned by systematic differences (the surface of a unit sphere) allows, irrespective of the random-error levels, the identification of clusters of closely related data sets. The method gains power with increasing numbers of data sets. It is illustrated with an example from low signal-to-noise ratio image processing, and an application in macromolecular crystallography is shown, but the approach is completely general and thus should be widely applicable.
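A compact sketch of the embedding step: build the pairwise correlation matrix, convert it to distances, and apply classical MDS. The distance definition sqrt(2(1 - CC)) and the toy two-cluster data are assumptions chosen for illustration; the paper's construction additionally places data sets radially by their CC* values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy composite data sets: two clusters of noisy copies of two base signals.
base1, base2 = rng.standard_normal(500), rng.standard_normal(500)
data = [base1 + 0.5 * rng.standard_normal(500) for _ in range(4)] + \
       [base2 + 0.5 * rng.standard_normal(500) for _ in range(4)]
X = np.array(data)

cc = np.corrcoef(X)                      # pairwise correlation matrix
d2 = 2.0 * (1.0 - cc)                    # squared distances from correlations

# Classical MDS: double-centre the squared distances, take top eigenvectors.
m = d2.shape[0]
J = np.eye(m) - np.ones((m, m)) / m
B = -0.5 * J @ d2 @ J
evals, evecs = np.linalg.eigh(B)
order = np.argsort(evals)[::-1][:2]
coords = evecs[:, order] * np.sqrt(np.maximum(evals[order], 0.0))

print(np.round(coords, 2))               # the two clusters separate clearly
```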
Generalized linear mixed models with varying coefficients for longitudinal data.
Zhang, Daowen
2004-03-01
The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B, 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.
Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos
Santonja, F.; Chen-Charpentier, B.
2012-01-01
Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models consider that the parameters are deterministic variables. But in practice, the transmission parameters present large variability and it is not possible to determine them exactly, and it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we will apply the approach to an obesity epidemic model. PMID:22927889
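A minimal non-intrusive version of this idea is easy to set up: sample the random transmission parameter at Gauss-Hermite quadrature nodes, integrate the deterministic model at each node, and project onto Hermite chaos to read off the first two moments. The toy SIR-type model and all parameter values below are assumptions, not the paper's obesity model.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Random transmission rate: beta = mu_b + sig_b * xi, xi ~ N(0, 1).
mu_b, sig_b, gamma = 0.30, 0.05, 0.10
P = 4                                     # polynomial chaos order
nodes, weights = hermegauss(12)           # probabilists' Gauss-Hermite rule
weights = weights / np.sqrt(2 * np.pi)    # normalise to the N(0,1) measure

def infected_at_T(beta, T=50.0, dt=0.1):
    """Euler integration of a toy SIR model; returns I(T)."""
    s, i = 0.99, 0.01
    for _ in range(int(T / dt)):
        s, i = s - dt * beta * s * i, i + dt * (beta * s * i - gamma * i)
    return i

samples = np.array([infected_at_T(mu_b + sig_b * x) for x in nodes])

# Projection onto Hermite polynomials He_k, using E[He_j He_k] = k! delta_jk.
coeffs = []
for k in range(P + 1):
    ck = np.eye(P + 1)[k]                 # selects He_k in hermeval
    coeffs.append((weights * samples * hermeval(nodes, ck)).sum() / factorial(k))

mean = coeffs[0]
var = sum(c**2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(f"I(T): mean = {mean:.4f}, variance = {var:.3e}")
```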
Bayesian estimation of the discrete coefficient of determination.
Chen, Ting; Braga-Neto, Ulisses M
2016-12-01
The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.
Kucza, Witold
2013-07-25
Stochastic and deterministic simulations of dispersion in cylindrical channels in Poiseuille flow have been presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, have been applied to the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
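The stochastic side of such a computation reduces to a random walk superposed on the parabolic velocity profile. The sketch below simulates Taylor dispersion in a tube and compares the recovered axial dispersion coefficient with the Taylor-Aris value; all physical parameters are illustrative assumptions, and agreement is only approximate at finite simulation time.

```python
import numpy as np

rng = np.random.default_rng(3)

R, U, D = 1e-4, 1e-2, 1e-9         # tube radius (m), mean speed (m/s), diffusivity (m^2/s)
dt, steps, N = 1e-2, 10000, 2000   # time step (s), number of steps, walkers

# Release walkers uniformly over the cross-section at the inlet.
theta = 2 * np.pi * rng.random(N)
r0 = R * np.sqrt(rng.random(N))    # uniform in the disc => sqrt sampling
y, z = r0 * np.cos(theta), r0 * np.sin(theta)
x = np.zeros(N)

sigma = np.sqrt(2 * D * dt)
for _ in range(steps):
    r2 = (y**2 + z**2) / R**2
    x += 2 * U * (1 - r2) * dt + sigma * rng.standard_normal(N)  # axial step
    y += sigma * rng.standard_normal(N)
    z += sigma * rng.standard_normal(N)
    r = np.hypot(y, z)                          # radial reflection at the wall
    scale = np.where(r > R, (2 * R - r) / r, 1.0)
    y, z = y * scale, z * scale

T = steps * dt
print(f"Taylor-Aris D_eff: {D + U**2 * R**2 / (48 * D):.3e}")
print(f"random-walk D_eff: {x.var() / (2 * T):.3e}")
```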
Stego on FPGA: An IWT Approach
Ramalingam, Balakrishnan
2014-01-01
A reconfigurable hardware architecture for the implementation of an integer wavelet transform (IWT) based adaptive random image steganography algorithm is proposed. The Haar-IWT was used to separate the subbands, namely LL, LH, HL, and HH, from 8 × 8 pixel blocks, and the encrypted secret data are hidden in the LH, HL, and HH blocks using Moore and Hilbert space filling curve (SFC) scan patterns. Either the Moore or the Hilbert SFC was chosen for hiding the encrypted data in the LH, HL, and HH coefficients, whichever produced the lower mean square error (MSE) and the higher peak signal-to-noise ratio (PSNR). The scan pattern chosen for every block is recorded, and this record constitutes the secret key. Our system took 1.6 µs for embedding the data in coefficient blocks and consumed 34% of the logic elements, 22% of the dedicated logic registers, and 2% of the embedded multipliers on a Cyclone II field programmable gate array (FPGA). PMID:24723794
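The subband split at the heart of the scheme is a one-level integer Haar lifting. The sketch below implements a perfectly invertible version on an 8 × 8 block; the subband naming convention and lifting variant are assumptions, and the embedding, encryption and SFC scanning steps are not shown.

```python
import numpy as np

def haar_iwt_2d(block):
    """One-level integer Haar lifting on an 8x8 block -> LL, LH, HL, HH."""
    b = block.astype(np.int64)

    def split(a, axis):
        even = np.take(a, range(0, a.shape[axis], 2), axis=axis)
        odd = np.take(a, range(1, a.shape[axis], 2), axis=axis)
        d = even - odd               # detail (predict step)
        s = odd + d // 2             # approximation = floor((even + odd) / 2)
        return s, d

    s, d = split(b, axis=1)          # rows -> low (L) and high (H) halves
    LL, LH = split(s, axis=0)        # columns of L
    HL, HH = split(d, axis=0)        # columns of H
    return LL, LH, HL, HH

def haar_iwt_2d_inverse(LL, LH, HL, HH):
    def merge(s, d, axis):
        odd = s - d // 2             # exact inverse of the lifting steps
        even = d + odd
        shape = list(s.shape)
        shape[axis] *= 2
        out = np.empty(shape, dtype=np.int64)
        sl_e = [slice(None)] * 2; sl_e[axis] = slice(0, None, 2)
        sl_o = [slice(None)] * 2; sl_o[axis] = slice(1, None, 2)
        out[tuple(sl_e)], out[tuple(sl_o)] = even, odd
        return out

    s = merge(LL, LH, axis=0)
    d = merge(HL, HH, axis=0)
    return merge(s, d, axis=1)

block = np.random.default_rng(4).integers(0, 256, (8, 8))
assert np.array_equal(block, haar_iwt_2d_inverse(*haar_iwt_2d(block)))
```

Integer lifting guarantees lossless reconstruction, which is why IWT is preferred over floating-point wavelets for steganography.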
Constraining the interior density profile of a Jovian planet from precision gravity field data
NASA Astrophysics Data System (ADS)
Movshovitz, Naor; Fortney, Jonathan J.; Helled, Ravit; Hubbard, William B.; Thorngren, Daniel; Mankovich, Chris; Wahl, Sean; Militzer, Burkhard; Durante, Daniele
2017-10-01
The external gravity field of a planetary body is determined by the distribution of mass in its interior. Therefore, a measurement of the external field, properly interpreted, tells us about the interior density profile, ρ(r), which in turn can be used to constrain the composition in the interior and thereby learn about the formation mechanism of the planet. Planetary gravity fields are usually described by the coefficients in an expansion of the gravitational potential. Recently, high precision measurements of these coefficients for Jupiter and Saturn have been made by the radio science instruments on the Juno and Cassini spacecraft, respectively. The resulting coefficients come with an associated uncertainty. And while the task of matching a given density profile with a given set of gravity coefficients is relatively straightforward, the question of how best to account for the uncertainty is not. In essentially all prior work on matching models to gravity field data, inferences about planetary structure have rested on imperfect knowledge of the H/He equation of state and on the assumption of an adiabatic interior. Here we wish to vastly expand the phase space of such calculations. We present a framework for describing all the possible interior density structures of a Jovian planet, constrained only by a given set of gravity coefficients and their associated uncertainties. Our approach is statistical. We produce a random sample of ρ(a) curves drawn from the underlying (and unknown) probability distribution of all curves, where ρ is the density on an interior level surface with equatorial radius a. Since the resulting set of density curves is a random sample, that is, curves appear with frequency proportional to the likelihood of their being consistent with the measured gravity, we can compute probability distributions for any quantity that is a function of ρ, such as central pressure, oblateness, core mass and radius, etc. Our approach is also Bayesian, in that it can utilize any prior assumptions about the planet's interior, as necessary, without being overly constrained by them. We demonstrate this approach with a sample of Jupiter interior models based on recent Juno data and discuss prospects for Saturn.
NASA Astrophysics Data System (ADS)
Park, Jungmin; Choi, Yong-Sang
2018-04-01
Observationally constrained values of the global radiative response coefficient are pivotal to assess the reliability of modeled climate feedbacks. A widely used approach is to measure transient global radiative imbalance related to surface temperature changes. However, in this approach, a potential error in the estimate of radiative response coefficients may arise from surface inhomogeneity in the climate system. We examined this issue theoretically using a simple two-zone energy balance model. Here, we dealt with the potential error by subtracting the prescribed radiative response coefficient from those calculated within the two-zone framework. Each zone was characterized by a different magnitude of the radiative response coefficient and the surface heat capacity, and the dynamical heat transport in the atmosphere between the zones was parameterized as a linear function of the temperature difference between the zones. The model system was then forced by randomly generated, monthly varying forcing that mimics observed time-varying forcing. The repeated simulations showed that inhomogeneous surface heat capacity causes considerable miscalculation (down to -1.4 W m-2 K-1, equivalent to 31.3% of the prescribed value) in the global radiative response coefficient. Also, the dynamical heat transport reduced the miscalculation driven by the inhomogeneity of surface heat capacity. Therefore, the estimation of radiative response coefficients using the surface temperature-radiation relation is appropriate for homogeneous surface areas least affected by the exterior.
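The experiment described here is straightforward to reproduce in miniature: integrate a two-zone energy balance model under random monthly forcing, then regress the global-mean radiative imbalance on the global-mean temperature. All parameter values below are illustrative assumptions; the gap between the regression slope and the prescribed area-mean response is exactly the miscalculation at issue.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-zone energy balance model (illustrative parameters):
#   C_i dT_i/dt = F_i - lam_i * T_i + gamma * (T_j - T_i)
lam = np.array([1.5, 0.8])      # zonal radiative response coefficients (W m-2 K-1)
C = np.array([3.0, 30.0])       # zonal heat capacities (strongly inhomogeneous)
gamma = 1.0                     # dynamical heat transport coefficient
area = np.array([0.5, 0.5])     # equal-area zones

dt, months = 1.0 / 12.0, 12 * 200
T = np.zeros(2)
Tg, Ng = [], []                 # global-mean temperature and net radiative flux

for _ in range(months):
    F = 2.0 * rng.standard_normal(2)   # random monthly varying forcing
    N = F - lam * T                    # zonal TOA imbalance
    transport = gamma * (T[::-1] - T)
    T = T + dt * (N + transport) / C
    Tg.append(area @ T)
    Ng.append(area @ N)

# Apparent global radiative response coefficient from the regression slope.
lam_est = -np.polyfit(Tg, Ng, 1)[0]
print(f"prescribed: {area @ lam:.2f}, estimated: {lam_est:.2f} W m-2 K-1")
```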
Random-growth urban model with geographical fitness
NASA Astrophysics Data System (ADS)
Kii, Masanobu; Akimoto, Keigo; Doi, Kenji
2012-12-01
This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
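A minimal growth loop captures the positive-fitness case: each new migrant either founds a city or joins an existing one with probability proportional to size times fitness, so randomness enters only at city creation. The founding probability, fitness distribution and tail-fit details are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

p_new = 0.05                      # probability a migrant founds a new city
sizes, fitness = [1.0], [rng.random() + 0.5]

for _ in range(30000):
    if rng.random() < p_new:
        sizes.append(1.0)                     # randomness only at creation
        fitness.append(rng.random() + 0.5)    # positive geographical fitness
    else:
        w = np.array(sizes) * np.array(fitness)
        i = rng.choice(len(sizes), p=w / w.sum())
        sizes[i] += 1.0           # fitness-weighted preferential growth

s = np.sort(sizes)[::-1]
# Rank-size check: log rank vs log size should be roughly linear for a
# Pareto-like tail (Zipf exponent ~ 1 is the classical benchmark).
slope = np.polyfit(np.log(s[:100]), np.log(np.arange(1, 101)), 1)[0]
print(f"estimated Pareto coefficient: {-slope:.2f}")
```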
Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable
ERIC Educational Resources Information Center
du Toit, Stephen H. C.; Cudeck, Robert
2009-01-01
A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…
Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks
2016-01-01
Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial level variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. Values of coefficients a and b, along with their standard errors from each trial are the input for the method. This marginal likelihood based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples and make recommendations for the use of the method in different settings. PMID:28239330
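The sketch below combines per-trial a and b coefficients with DerSimonian-Laird random-effects pooling and forms a Monte Carlo percentile interval for the mediated effect a*b. DerSimonian-Laird is used here as a simple stand-in for the paper's bivariate marginal-likelihood approach, and all input numbers are illustrative.

```python
import numpy as np

def dl_random_effects(est, se):
    """DerSimonian-Laird random-effects pooled estimate and its SE."""
    w = 1.0 / se**2
    fixed = (w * est).sum() / w.sum()
    Q = (w * (est - fixed) ** 2).sum()
    tau2 = max(0.0, (Q - (len(est) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
    w_re = 1.0 / (se**2 + tau2)
    return (w_re * est).sum() / w_re.sum(), np.sqrt(1.0 / w_re.sum())

# Per-trial path coefficients and standard errors (illustrative numbers).
a, se_a = np.array([0.40, 0.55, 0.30]), np.array([0.10, 0.12, 0.09])
b, se_b = np.array([0.25, 0.20, 0.35]), np.array([0.08, 0.07, 0.10])

a_hat, a_se = dl_random_effects(a, se_a)
b_hat, b_se = dl_random_effects(b, se_b)

# Monte Carlo confidence interval for the combined mediated effect a*b.
rng = np.random.default_rng(7)
ab = rng.normal(a_hat, a_se, 100000) * rng.normal(b_hat, b_se, 100000)
lo, hi = np.percentile(ab, [2.5, 97.5])
print(f"mediated effect: {a_hat * b_hat:.3f}, 95% MC CI: ({lo:.3f}, {hi:.3f})")
```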
Adaptive threshold shearlet transform for surface microseismic data denoising
NASA Astrophysics Data System (ADS)
Tang, Na; Zhao, Xian; Li, Yue; Zhu, Dan
2018-06-01
Random noise suppression plays an important role in microseismic data processing. Microseismic data are often corrupted by strong random noise, which directly influences the identification and location of microseismic events. The shearlet transform is a new multiscale transform which can effectively process low-magnitude microseismic data. In the shearlet domain, valid signals and random noise have different coefficient distributions, so shearlet coefficients can be shrunk by thresholding; the choice of threshold is therefore vital in suppressing random noise. Conventional threshold denoising algorithms usually apply the same threshold to all coefficients, which causes inefficient noise suppression or loss of valid signals. In order to solve these problems, we propose the adaptive threshold shearlet transform (ATST) for surface microseismic data denoising. In the new algorithm, we first calculate a fundamental threshold for each direction subband. In each direction subband, an adjustment factor is obtained from each subband coefficient and its neighboring coefficients, in order to adaptively regulate the fundamental threshold for different shearlet coefficients. Finally, we apply the adaptive threshold to the different shearlet coefficients. The experimental denoising results on synthetic records and field data illustrate that the proposed method exhibits better performance in suppressing random noise and preserving valid signals than the conventional shearlet denoising method.
Miyamoto, Shuichi; Atsuyama, Kenji; Ekino, Keisuke; Shin, Takashi
2018-01-01
The isolation of useful microbes is one of the traditional approaches to lead generation in drug discovery. As an effective technique for microbe isolation, we recently developed a multidimensional diffusion-based gradient culture system of microbes. To enhance the utility of the system, it is helpful to know the diffusion coefficients of nutrients such as sugars in the culture medium beforehand. We have, therefore, built a simple and convenient experimental system that uses agar gel to observe diffusion. Next, we performed computer simulations, based on random-walk concepts, of the experimental diffusion system and derived correlation formulas that relate observable diffusion data to diffusion coefficients. Finally, we applied these correlation formulas to our experimentally determined diffusion data to estimate the diffusion coefficients of sugars. Our values for these coefficients agree reasonably well with published values. The effectiveness of our simple technique, which has elucidated the diffusion coefficients of some molecules that are rarely reported (e.g., galactose, trehalose, and glycerol), is demonstrated by the strong correspondence between the literature values and those obtained in our experiments.
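The random-walk idea behind such estimates is compact: simulate Brownian walkers, compute the mean squared displacement, and recover D from MSD = 2 * dim * D * t. The parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

D_true, dt, steps, N, dim = 6.0e-10, 1e-3, 1000, 1000, 3  # sugar-like D, m^2/s

# Brownian walkers: each step is Gaussian with variance 2*D*dt per axis.
increments = np.sqrt(2 * D_true * dt) * rng.standard_normal((steps, N, dim))
pos = increments.cumsum(axis=0)

t = dt * np.arange(1, steps + 1)
msd = (pos**2).sum(axis=2).mean(axis=1)        # mean squared displacement

# In free diffusion MSD = 2 * dim * D * t; recover D by least squares.
D_est = np.linalg.lstsq(t[:, None] * 2 * dim, msd, rcond=None)[0][0]
print(f"true D = {D_true:.2e}, estimated D = {D_est:.2e} m^2/s")
```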
NASA Astrophysics Data System (ADS)
Guerrout, EL-Hachemi; Ait-Aoudia, Samy; Michelucci, Dominique; Mahiou, Ramdane
2018-05-01
Many routine medical examinations produce images of patients suffering from various pathologies. With the huge number of medical images, manual analysis and interpretation has become a tedious task. Thus, automatic image segmentation has become essential for diagnosis assistance. Segmentation consists in dividing the image into homogeneous and significant regions. We focus on hidden Markov random fields (HMRF) to model the segmentation problem. This modelling leads to a classical function minimisation problem. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is one of the most powerful methods for solving unconstrained optimisation problems. In this paper, we investigate the combination of HMRF and the BFGS algorithm to perform the segmentation operation. The proposed method shows very good segmentation results compared with well-known approaches. The tests are conducted on brain magnetic resonance image databases (BrainWeb and IBSR) that are widely used for objective comparison of results. The well-known Dice coefficient (DC) was used as the similarity metric. The experimental results show that, in many cases, our proposed method approaches perfect segmentation with a Dice coefficient above 0.9. Moreover, it generally outperforms other methods in the tests conducted.
On Learning Cluster Coefficient of Private Networks
Wang, Yue; Wu, Xintao; Zhu, Jun; Xiang, Yang
2013-01-01
Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as clustering coefficient or modularity often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we treat a graph statistic as a function f and develop a divide and conquer approach to enforce differential privacy. The basic procedure of this approach is to first decompose the target computation f into several less complex unit computations f1, …, fm connected by basic mathematical operations (e.g., addition, subtraction, multiplication, division), then perturb the output of each fi with Laplace noise derived from its own sensitivity value and the distributed privacy threshold εi, and finally combine those perturbed fi as the perturbed output of computation f. We examine how various operations affect the accuracy of complex computations. When unit computations have large global sensitivity values, we enforce the differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We illustrate our approach by using the clustering coefficient, which is a popular statistic used in social network analysis. Empirical evaluations on five real social networks and various synthetic graphs generated from three random graph models show that the developed divide and conquer approach outperforms the direct approach. PMID:24429843
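A toy rendering of the divide-and-conquer recipe for the global clustering coefficient: split it into a triangle count and a wedge count, perturb each with Laplace noise under half of the privacy budget, and recombine. The sensitivity constants below are placeholders only; deriving tight (smooth) sensitivities is precisely the paper's contribution.

```python
import numpy as np

rng = np.random.default_rng(9)

# Random undirected graph (Erdos-Renyi) as an adjacency matrix.
n, p = 100, 0.1
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T

triangles = np.trace(A @ A @ A) / 6.0
deg = A.sum(axis=1)
wedges = (deg * (deg - 1) / 2.0).sum()
cc = 3.0 * triangles / wedges          # global clustering coefficient

# Divide and conquer: perturb numerator and denominator separately, each
# with half the privacy budget. The sensitivities are placeholders; in
# practice they must be derived (e.g., via smooth sensitivity).
eps, sens_tri, sens_wedge = 1.0, 3.0, 10.0
tri_noisy = triangles + rng.laplace(0.0, sens_tri / (eps / 2))
wed_noisy = wedges + rng.laplace(0.0, sens_wedge / (eps / 2))
cc_noisy = 3.0 * tri_noisy / wed_noisy

print(f"exact: {cc:.4f}, private: {cc_noisy:.4f}")
```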
A weighted ℓ1-minimization approach for sparse polynomial chaos expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Ji; Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2014-06-15
This work proposes a method for sparse polynomial chaos (PC) approximation of high-dimensional stochastic functions based on non-adapted random sampling. We modify the standard ℓ1-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refer to the resulting algorithm as weighted ℓ1-minimization. We provide conditions under which we may guarantee recovery using this weighted scheme. Numerical tests are used to compare the weighted and non-weighted methods for the recovery of solutions to two differential equations with high-dimensional random inputs: a boundary value problem with a random elliptic operator and a 2-D thermally driven cavity flow with random boundary condition.
ERIC Educational Resources Information Center
Kong, Nan
2007-01-01
In multivariate statistics, the linear relationship among random variables has been fully explored in the past. This paper looks into the dependence of one group of random variables on another group of random variables using (conditional) entropy. A new measure, called the K-dependence coefficient or dependence coefficient, is defined using…
Efficient sampling of complex network with modified random walk strategies
NASA Astrophysics Data System (ADS)
Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei
2018-02-01
We present two novel random walk strategies, choosing seed node (CSN) random walk and no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influences of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distribution, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with all three random walk strategies. Firstly, networks with small scale and simple structure are conducive to sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. Thirdly, all the degree distributions of the subnets are slightly biased to the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, obvious characteristics such as the larger clustering coefficient and the fluctuation of the degree distribution are reproduced well by these random walk strategies.
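The no-retracing strategy needs only one modification to the classical walk: exclude the node visited on the previous step. A minimal networkx sketch, with the graph model, walk length and clustering comparison chosen for illustration:

```python
import random
import networkx as nx

G = nx.barabasi_albert_graph(2000, 4, seed=10)

def no_retracing_walk(G, start, length):
    """Random walk that never immediately retraces the last edge."""
    random.seed(10)
    nodes, prev, cur = {start}, None, start
    for _ in range(length):
        nbrs = [u for u in G.neighbors(cur) if u != prev]
        if not nbrs:                       # dead end: allow retracing
            nbrs = list(G.neighbors(cur))
        prev, cur = cur, random.choice(nbrs)
        nodes.add(cur)
    return nodes

sampled = no_retracing_walk(G, start=0, length=500)
sub = G.subgraph(sampled)
print(f"original average clustering: {nx.average_clustering(G):.4f}")
print(f"subnet   average clustering: {nx.average_clustering(sub):.4f}")
```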
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
In this paper, we present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with a high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Lastly, our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
Strategic Use of Random Subsample Replication and a Coefficient of Factor Replicability
ERIC Educational Resources Information Center
Katzenmeyer, William G.; Stenner, A. Jackson
1975-01-01
The problem of demonstrating replicability of factor structure across random variables is addressed. Procedures are outlined which combine the use of random subsample replication strategies with the correlations between factor score estimates across replicate pairs to generate a coefficient of replicability and confidence intervals associated with…
Note on coefficient matrices from stochastic Galerkin methods for random diffusion equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Tao, E-mail: tzhou@lsec.cc.ac.c; Tang Tao, E-mail: ttang@hkbu.edu.h
2010-11-01
In a recent work by Xiu and Shen [D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, J. Comput. Phys. 228 (2009) 266-281], the Galerkin methods are used to solve stochastic diffusion equations in random media, where some properties for the coefficient matrix of the resulting system are provided. They also posed an open question on the properties of the coefficient matrix. In this work, we will provide some results related to the open question.
Offdiagonal complexity: A computationally quick complexity measure for graphs and networks
NASA Astrophysics Data System (ADS)
Claussen, Jens Christian
2007-02-01
A vast variety of biological, social, and economical networks shows topologies drastically differing from random graphs; yet the quantitative characterization remains unsatisfactory from a conceptual point of view. Motivated from the discussion of small scale-free networks, a biased link distribution entropy is defined, which takes an extremum for a power-law distribution. This approach is extended to the node-node link cross-distribution, whose nondiagonal elements characterize the graph structure beyond link distribution, cluster coefficient and average path length. From here a simple (and computationally cheap) complexity measure can be defined. This offdiagonal complexity (OdC) is proposed as a novel measure to characterize the complexity of an undirected graph, or network. While both for regular lattices and fully connected networks OdC is zero, it takes a moderately low value for a random graph and shows high values for apparently complex structures as scale-free networks and hierarchical trees. The OdC approach is applied to the Helicobacter pylori protein interaction network and randomly rewired surrogates.
Wave-induced fluid flow in random porous media: Attenuation and dispersion of elastic waves
NASA Astrophysics Data System (ADS)
Müller, Tobias M.; Gurevich, Boris
2005-05-01
A detailed analysis of the relationship between elastic waves in inhomogeneous, porous media and the effect of wave-induced fluid flow is presented. Based on the results of the poroelastic first-order statistical smoothing approximation applied to Biot's equations of poroelasticity, a model for elastic wave attenuation and dispersion due to wave-induced fluid flow in 3-D randomly inhomogeneous poroelastic media is developed. Attenuation and dispersion depend on linear combinations of the spatial correlations of the fluctuating poroelastic parameters. The observed frequency dependence is typical for a relaxation phenomenon. Further, the analytic properties of attenuation and dispersion are analyzed. It is shown that the low-frequency asymptote of the attenuation coefficient of a plane compressional wave is proportional to the square of frequency. At high frequencies the attenuation coefficient becomes proportional to the square root of frequency. A comparison with the 1-D theory shows that attenuation is of the same order but slightly larger in 3-D random media. Several modeling choices of the approach including the effect of cross correlations between fluid and solid phase properties are demonstrated. The potential application of the results to real porous materials is discussed.
Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S
2016-06-01
We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data of multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates the nested structure in time series data. Explicitly taking the nested structure into account, the proposed method allows investigating subject-wise variability of the loadings and path coefficients by looking at the variance estimates of the corresponding random effects, as well as fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to the multi-subject functional neuroimaging data for brain connectivity analysis, where time series data-level measurements are nested within subjects.
NASA Astrophysics Data System (ADS)
Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi
2018-04-01
The attenuation of random noise is important for improving the signal to noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making the conventional methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. Firstly, the denoising method is performed for every time slice extracted from the 3D noisy data along the source and receiver directions, then the 2D non-equispaced fast Fourier transform (NFFT) is introduced in the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated by using the inversion algorithm of the spectral projected-gradient for ℓ1-norm problems. Then local threshold factors are chosen for the uniform curvelet coefficients for each decomposition scale, and effective curvelet coefficients are obtained respectively for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. The examples for synthetic data and real data reveal the effectiveness of the proposed approach in applications to noise attenuation for non-uniformly sampled data compared with the conventional FDCT method and wavelet transformation.
Jaffa, Miran A; Gebregziabher, Mulugeta; Jaffa, Ayad A
2015-06-14
Renal transplant patients are mandated to have continuous assessment of their kidney function over time to monitor disease progression determined by changes in blood urea nitrogen (BUN), serum creatinine (Cr), and estimated glomerular filtration rate (eGFR). Multivariate analysis of these outcomes that aims at identifying the differential factors that affect disease progression is of great clinical significance. Thus our study aims at demonstrating the application of different joint modeling approaches with random coefficients on a cohort of renal transplant patients and presenting a comparison of their performance through a pseudo-simulation study. The objective of this comparison is to identify the model with the best performance and to determine whether accuracy compensates for complexity in the different multivariate joint models. We propose a novel application of multivariate Generalized Linear Mixed Models (mGLMM) to analyze multiple longitudinal kidney function outcomes collected over 3 years on a cohort of 110 renal transplantation patients. The correlated outcomes BUN, Cr, and eGFR and the effects of various covariates such as patient's gender, age and race on these markers were determined holistically using different mGLMMs. The performance of the various mGLMMs that encompass shared random intercept (SHRI), shared random intercept and slope (SHRIS), separate random intercept (SPRI) and separate random intercept and slope (SPRIS) models was assessed to identify the one that has the best fit and most accurate estimates. A bootstrap pseudo-simulation study was conducted to gauge the tradeoff between the complexity and accuracy of the models. Accuracy was determined using two measures: the mean of the differences between the estimates from the bootstrapped datasets and the true beta obtained from the application of each model on the renal dataset, and the mean of the square of these differences. The results showed that SPRI provided the most accurate estimates and did not exhibit any computational or convergence problem. Higher accuracy was demonstrated when the level of complexity increased from the shared random coefficient models to the separate random coefficient alternatives, with SPRI shown to have the best fit and most accurate estimates.
Neither fixed nor random: weighted least squares meta-analysis.
Stanley, T D; Doucouliagos, Hristos
2015-06-15
This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects. Copyright © 2015 John Wiley & Sons, Ltd.
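The estimator itself is a thin wrapper around the fixed-effect weighted average: the point estimate is unchanged, and its variance is scaled by a multiplicative dispersion term. The sketch below follows that description; the exact small-sample conventions of the published estimator may differ, and the simulated inputs are illustrative.

```python
import numpy as np

def wls_meta(est, se):
    """Unrestricted WLS weighted average: fixed-effect point estimate with
    its variance scaled by the multiplicative dispersion term phi."""
    w = 1.0 / se**2
    beta = (w * est).sum() / w.sum()
    k = len(est)
    phi = (w * (est - beta) ** 2).sum() / (k - 1)  # mean squared WLS error
    # Note: phi < 1 would give an SE below the fixed-effect SE; whether to
    # allow that is a modelling choice left open here.
    return beta, np.sqrt(phi / w.sum())

# Illustrative effect sizes from ten studies with heterogeneity.
rng = np.random.default_rng(11)
se = rng.uniform(0.05, 0.3, 10)
est = 0.2 + rng.normal(0.0, 0.15, 10) + rng.normal(0.0, se)

beta, se_wls = wls_meta(est, se)
print(f"WLS estimate: {beta:.3f} +/- {1.96 * se_wls:.3f}")
```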
NASA Astrophysics Data System (ADS)
Movshovitz, N.; Fortney, J. J.; Helled, R.; Hubbard, W. B.; Mankovich, C.; Thorngren, D.; Wahl, S. M.; Militzer, B.; Durante, D.
2017-12-01
The external gravity field of a planetary body is determined by the distribution of mass in its interior. Therefore, a measurement of the external field, properly interpreted, tells us about the interior density profile, ρ(r), which in turn can be used to constrain the composition in the interior and thereby learn about the formation mechanism of the planet. Recently, very high precision measurements of the gravity coefficients for Saturn have been made by the radio science instrument on the Cassini spacecraft during its Grand Finale orbits. The resulting coefficients come with an associated uncertainty. The task of matching a given density profile to a given set of gravity coefficients is relatively straightforward, but the question of how to best account for the uncertainty is not. In essentially all prior work on matching models to gravity field data, inferences about planetary structure have rested on assumptions regarding the imperfectly known H/He equation of state and the assumption of an adiabatic interior. Here we wish to vastly expand the phase space of such calculations. We present a framework for describing all the possible interior density structures of a Jovian planet constrained by a given set of gravity coefficients and their associated uncertainties. Our approach is statistical. We produce a random sample of ρ(a) curves drawn from the underlying (and unknown) probability distribution of all curves, where ρ is the density on an interior level surface with equatorial radius a. Since the resulting set of density curves is a random sample, that is, curves appear with frequency proportional to the likelihood of their being consistent with the measured gravity, we can compute probability distributions for any quantity that is a function of ρ, such as central pressure, oblateness, core mass and radius, etc. Our approach is also Bayesian, in that it can utilize any prior assumptions about the planet's interior, as necessary, without being overly constrained by them. We apply this approach to produce a sample of Saturn interior models based on gravity data from the Grand Finale orbits and discuss their implications.
Vehicular traffic noise prediction using soft computing approach.
Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek
2016-12-01
A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely, Generalized Linear Model, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in the Patiala city in India. The input variables include the traffic volume per hour, percentage of heavy vehicles and average speed of vehicles. The performance of the four models is compared on the basis of performance criteria of coefficient of determination, mean square error and accuracy. 10-fold cross validation is done to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model with the field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
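A minimal sklearn sketch of the best-performing model class, with synthetic records standing in for the Patiala measurements; the noise law used to generate the data is an assumption, chosen only to make the example self-contained.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(12)
n = 500

# Synthetic stand-ins for the paper's inputs: traffic volume per hour,
# percentage of heavy vehicles, and average vehicle speed.
volume = rng.uniform(200, 3000, n)
heavy = rng.uniform(0, 40, n)
speed = rng.uniform(20, 80, n)

# Toy noise law loosely shaped like traffic-noise relations (assumption).
leq = 35.0 + 10 * np.log10(volume) + 0.15 * heavy + 0.05 * speed \
      + rng.normal(0, 1.0, n)

X = np.column_stack([volume, heavy, speed])
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, leq, cv=10, scoring="r2")
print(f"10-fold CV R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```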
Variational Solutions and Random Dynamical Systems to SPDEs Perturbed by Fractional Gaussian Noise
Zeng, Caibin; Yang, Qigui; Cao, Junfei
2014-01-01
This paper deals with the following type of stochastic partial differential equations (SPDEs) perturbed by an infinite dimensional fractional Brownian motion with a suitable volatility coefficient Φ: dX(t) = A(X(t))dt + Φ(t)dB^H(t), where A is a nonlinear operator satisfying some monotonicity conditions. Using the variational approach, we prove the existence and uniqueness of variational solutions to such systems. Moreover, we prove that this variational solution generates a random dynamical system. The main results are applied to a general type of nonlinear SPDEs and the stochastic generalized p-Laplacian equation. PMID:24574903
NASA Technical Reports Server (NTRS)
Frehlich, Rod
1993-01-01
Calculations of the exact Cramer-Rao Bound (CRB) for unbiased estimates of the mean frequency, signal power, and spectral width of Doppler radar/lidar signals (a Gaussian random process) are presented. Approximate CRB's are derived using the Discrete Fourier Transform (DFT). These approximate results are equal to the exact CRB when the DFT coefficients are mutually uncorrelated. Previous high SNR limits for CRB's are shown to be inaccurate because the discrete summations cannot be approximated with integration. The performance of an approximate maximum likelihood estimator for mean frequency approaches the exact CRB for moderate signal to noise ratio and moderate spectral width.
Yamazaki, Takeshi; Takeda, Hisato; Hagiya, Koichi; Yamaguchi, Satoshi; Sasaki, Osamu
2018-03-13
Because lactation periods in dairy cows lengthen with increasing total milk production, it is important to predict individual productivities after 305 days in milk (DIM) to determine the optimal lactation period. We therefore examined whether the random regression (RR) coefficients for 306 to 450 DIM (M2) can be predicted from those for the first 305 DIM (M1) using a random regression model. We analyzed test-day milk records from 85690 Holstein cows in their first lactations and 131727 cows in their later (second to fifth) lactations. Data in M1 and M2 were analyzed separately by using different single-trait RR animal models. We then performed a multiple regression analysis of the RR coefficients of M2 on those of M1 during the first and later lactations. The first-order Legendre polynomials were practical covariates of random regression for the milk yields of M2. All RR coefficients for the additive genetic (AG) effect and the intercept for the permanent environmental (PE) effect of M2 had moderate to strong correlations with the intercept for the AG effect of M1. The coefficients of determination for multiple regression of the combined intercepts for the AG and PE effects of M2 on the coefficients for the AG effect of M1 were moderate to high. The daily milk yields of M2 predicted by using the RR coefficients for the AG effect of M1 were highly correlated with those obtained by using the coefficients of M2. Milk production after 305 DIM can therefore be predicted from the RR coefficient estimates of the AG effect during the first 305 DIM.
Seroussi, Inbar; Grebenkov, Denis S.; Pasternak, Ofer; Sochen, Nir
2017-01-01
In order to bridge microscopic molecular motion with macroscopic diffusion MR signal in complex structures, we propose a general stochastic model for molecular motion in a magnetic field. The Fokker-Planck equation of this model governs the probability density function describing the diffusion-magnetization propagator. From the propagator we derive a generalized version of the Bloch-Torrey equation and the relation to the random phase approach. This derivation does not require assumptions such as a spatially constant diffusion coefficient, or ad-hoc selection of a propagator. In particular, the boundary conditions that implicitly incorporate the microstructure into the diffusion MR signal can now be included explicitly through a spatially varying diffusion coefficient. While our generalization is reduced to the conventional Bloch-Torrey equation for piecewise constant diffusion coefficients, it also predicts scenarios in which an additional term to the equation is required to fully describe the MR signal. PMID:28242566
NASA Astrophysics Data System (ADS)
Crevillén-García, D.; Power, H.
2017-08-01
In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
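A reduced version of the comparison, plain Monte Carlo versus scrambled Sobol quasi-Monte Carlo for a travel time through independent log-Gaussian cells, is sketched below. The cell-wise conductivity model is a crude stand-in for a Karhunen-Loève field, and all parameters are assumptions.

```python
import numpy as np
from scipy.stats import norm, qmc

# Travel time through m cells with independent log-Gaussian conductivities.
m, J, phi, dx = 20, 0.01, 0.3, 1.0    # cells, head gradient, porosity, cell size

def travel_time(z):                    # z: standard-normal inputs, shape (n, m)
    K = np.exp(0.5 * z)                # log-Gaussian hydraulic conductivity
    return (phi * dx / (K * J)).sum(axis=1)

n = 4096                               # power of 2, as Sobol sampling prefers
mc = travel_time(np.random.default_rng(15).standard_normal((n, m)))
sobol = qmc.Sobol(d=m, scramble=True, seed=15).random(n)
sobol = np.clip(sobol, 1e-12, 1 - 1e-12)   # guard the inverse-CDF mapping
qm = travel_time(norm.ppf(sobol))

exact = m * phi * dx / J * np.exp(0.125)   # E[1/K] = exp(sigma^2/2), sigma = 0.5
print(f"exact {exact:.2f}  MC err {abs(mc.mean() - exact):.3f}  "
      f"QMC err {abs(qm.mean() - exact):.3f}")
```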
Is the Non-Dipole Magnetic Field Random?
NASA Technical Reports Server (NTRS)
Walker, Andrew D.; Backus, George E.
1996-01-01
Statistical modelling of the Earth's magnetic field B has a long history. In particular, the spherical harmonic coefficients of scalar fields derived from B can be treated as Gaussian random variables. In this paper, we give examples of highly organized fields whose spherical harmonic coefficients pass tests for independent Gaussian random variables. The fact that coefficients at some depth may be usefully summarized as independent samples from a normal distribution need not imply that there really is some physical, random process at that depth. In fact, the field can be extremely structured and still be regarded for some purposes as random. In this paper, we examined the radial magnetic field B(sub r) produced by the core, but the results apply to any scalar field on the core-mantle boundary (CMB) which determines B outside the CMB.
Random field assessment of nanoscopic inhomogeneity of bone
Dong, X. Neil; Luo, Qing; Sparkman, Daniel M.; Millwater, Harry R.; Wang, Xiaodu
2010-01-01
Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity consist of descriptive statistics such as mean, standard deviation and coefficient of variation. However, these parameters do not describe the spatial variations of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at ultrastructural levels. To do so, a random field defined by an exponential covariance function was used to present the spatial uncertainty of elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated using simulation and compared with those obtained experimentally by a combination of atomic force microscopy and nanoindentation techniques. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, thus validating the random field approach in defining the inhomogeneity of elastic modulus in lamellae of bone. Indeed, generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic details. PMID:20817128
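The building block of such an analysis, sampling a Gaussian field with exponential covariance and reading the correlation length back from the empirical autocovariance, can be sketched directly; the grid, correlation length and fitting range below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(13)

n, dx, ell, sigma = 400, 1.0, 15.0, 1.0        # grid, spacing, corr. length, std
x = dx * np.arange(n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# Sample the field: multiply white noise by the Cholesky factor of C.
Lc = np.linalg.cholesky(C + 1e-10 * np.eye(n))
fields = Lc @ rng.standard_normal((n, 200))    # 200 realisations

# Empirical autocovariance, averaged over realisations; the correlation
# length is recovered from the slope of log-covariance against lag.
lags = np.arange(1, 30)
ac = np.array([np.mean(fields[:-k] * fields[k:]) for k in lags])
ell_hat = -1.0 / np.polyfit(lags * dx, np.log(ac), 1)[0]
print(f"true correlation length: {ell}, estimated: {ell_hat:.1f}")
```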
Perturbed effects at radiation physics
NASA Astrophysics Data System (ADS)
Külahcı, Fatih; Şen, Zekâi
2013-09-01
Perturbation methodology is applied in order to assess the behavior of the linear attenuation coefficient, mass attenuation coefficient and cross-section with random components in the basic variables, such as the radiation amounts frequently used in radiation physics and chemistry. Additionally, the layer attenuation coefficient (LAC) and perturbed LAC (PLAC) are proposed for different contact materials. Perturbation methodology provides the opportunity to obtain results with random deviations from the average behavior of each variable that enters the whole mathematical expression. The basic photon intensity variation expression, the inverse exponential power law (Beer-Lambert's law), is adopted for the exposition of the perturbation method. Perturbed results are presented not only in terms of the mean but also the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in the basic variables.
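A Monte Carlo rendering of the same idea for the Beer-Lambert expression: perturb the attenuation coefficient, propagate it through the exponential law, and report the mean, standard deviation and input-output correlation. All numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(14)

mu0, cv, x = 0.2, 0.05, 5.0        # mean attenuation (1/cm), 5% randomness, depth (cm)
mu = mu0 * (1 + cv * rng.standard_normal(100000))

I0 = 1.0
I = I0 * np.exp(-mu * x)           # Beer-Lambert law with a perturbed coefficient

print(f"deterministic I/I0: {np.exp(-mu0 * x):.4f}")
print(f"perturbed mean:     {I.mean():.4f} +/- {I.std():.4f}")
print(f"corr(mu, I):        {np.corrcoef(mu, I)[0, 1]:.3f}")
```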
NASA Astrophysics Data System (ADS)
Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.
2018-07-01
The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, which can assume positive or negative values. The thermodynamic properties, the three different phase diagrams and the system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low-temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
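Drawing the two coupled random variables from a joint Gaussian with correlation ρ, as the abstract describes, is a one-liner with a 2x2 covariance matrix (the means, variances, and the value of ρ below are placeholders):

```python
# Sample correlated pairs (J, h) from a bivariate Gaussian with correlation rho.
import numpy as np

rng = np.random.default_rng(3)
rho = -0.5                                   # may be positive or negative
cov = [[1.0, rho], [rho, 1.0]]
J, h = rng.multivariate_normal([0.0, 0.0], cov, size=10_000).T
print(f"sample correlation: {np.corrcoef(J, h)[0, 1]:+.3f}")
```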
NASA Astrophysics Data System (ADS)
Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua
2017-12-01
In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and the correlation coefficients between the SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is global random sampling and the other six are stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. When each sampling strategy was applied, sample sizes were gradually reduced and each sampling size contained 3000 replicates. Under each sampling size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and correlation coefficients between the SWC and soil/terrain properties were calculated to quantify the accuracy and uncertainty. The results showed that the uncertainty of the estimations decreased as the sampling size increased. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure estimated correlation coefficients with REs and CVs ≤10%. Among all sampling strategies, reducing sampling sites on the middle slope had the least influence on the estimation of the hillslope mean SWC and correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure estimated correlation coefficients with REs and CVs ≤10%. This suggests that when designing SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. Findings of this study will be useful for optimal SWC sampling design.
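The resampling experiment is straightforward to mock up. A sketch under invented data (synthetic SWC values; the paper's exact RE/CV definitions may differ in detail):

```python
# For each sample size, draw 3000 random subsets and summarize the error of
# the estimated hillslope mean SWC.
import numpy as np

rng = np.random.default_rng(4)
swc = rng.normal(0.30, 0.05, 77)        # hypothetical site-level SWC values
true_mean = swc.mean()

for n in (6, 12, 24, 48):
    means = np.array([rng.choice(swc, n, replace=False).mean()
                      for _ in range(3000)])
    re = np.mean(np.abs(means - true_mean)) / true_mean * 100   # one plausible RE
    cv = means.std() / means.mean() * 100
    print(f"n={n:2d}: RE={re:5.2f}%  CV={cv:5.2f}%")
```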
Improved estimates of partial volume coefficients from noisy brain MRI using spatial context.
Manjón, José V; Tohka, Jussi; Robles, Montserrat
2010-11-01
This paper addresses the problem of accurate voxel-level estimation of tissue proportions in the human brain magnetic resonance imaging (MRI). Due to the finite resolution of acquisition systems, MRI voxels can contain contributions from more than a single tissue type. The voxel-level estimation of this fractional content is known as partial volume coefficient estimation. In the present work, two new methods to calculate the partial volume coefficients under noisy conditions are introduced and compared with current similar methods. Concretely, a novel Markov Random Field model allowing sharp transitions between partial volume coefficients of neighbouring voxels and an advanced non-local means filtering technique are proposed to reduce the errors due to random noise in the partial volume coefficient estimation. In addition, a comparison was made to find out how the different methodologies affect the measurement of the brain tissue type volumes. Based on the obtained results, the main conclusions are that (1) both Markov Random Field modelling and non-local means filtering improved the partial volume coefficient estimation results, and (2) non-local means filtering was the better of the two strategies for partial volume coefficient estimation. Copyright 2010 Elsevier Inc. All rights reserved.
Speech Enhancement Using Gaussian Scale Mixture Models
Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.
2011-01-01
This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided higher signal-to-noise ratio (SNR) and those reconstructed from the estimated log-spectra produced lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139
Probability and Cumulative Density Function Methods for the Stochastic Advection-Reaction Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barajas-Solano, David A.; Tartakovsky, Alexandre M.
We present a cumulative density function (CDF) method for the probabilistic analysis of d-dimensional advection-dominated reactive transport in heterogeneous media. We employ a probabilistic approach in which epistemic uncertainty on the spatial heterogeneity of Darcy-scale transport coefficients is modeled in terms of random fields with given correlation structures. Our proposed CDF method employs a modified Large-Eddy-Diffusivity (LED) approach to close and localize the nonlocal equations governing the one-point PDF and CDF of the concentration field, resulting in a (d + 1)-dimensional PDE. Compared to the classical LED localization, the proposed modified LED localization explicitly accounts for the mean-field advective dynamics over the phase space of the PDF and CDF. To illustrate the accuracy of the proposed closure, we apply our CDF method to one-dimensional single-species reactive transport with uncertain, heterogeneous advection velocities and reaction rates modeled as random fields.
Influence of Dissipative Particle Dynamics parameters and wall models on planar micro-channel flows
NASA Astrophysics Data System (ADS)
Wang, Yuyi; She, Jiangwei; Zhou, Zhe-Wei; microflow Group Team
2017-11-01
Dissipative Particle Dynamics (DPD) is a very effective approach to simulating mesoscale hydrodynamics. The influence of solid boundaries and DPD parameters is typically very strong in DPD simulations. The present work studies a micro-channel Poiseuille flow. Taking neutron scattering experiments and molecular dynamics simulation results as benchmarks, the DPD results for the density distribution and velocity profile are systematically studied. The influence of different levels of coarse-graining, the number densities of wall and fluid, conservative force coefficients, random and dissipative force coefficients, different wall models and reflective boundary conditions is discussed. Some mechanisms behind such influences are discussed and the artifacts in the simulation are identified against the benchmark. Chinese natural science foundation (A020405).
NASA Astrophysics Data System (ADS)
Javadi, M.; Abdi, Y.
2015-08-01
Monte Carlo continuous time random walk simulation is used to study the effects of confinement on electron transport, in porous TiO2. In this work, we have introduced a columnar structure instead of the thick layer of porous TiO2 used as anode in conventional dye solar cells. Our simulation results show that electron diffusion coefficient in the proposed columnar structure is significantly higher than the diffusion coefficient in the conventional structure. It is shown that electron diffusion in the columnar structure depends both on the cross section area of the columns and the porosity of the structure. Also, we demonstrate that such enhanced electron diffusion can be realized in the columnar photo-electrodes with a cross sectional area of ˜1 μm2 and porosity of 55%, by a simple and low cost fabrication process. Our results open up a promising approach to achieve solar cells with higher efficiencies by engineering the photo-electrode structure.
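How a diffusion coefficient falls out of a random-walk simulation can be shown with a toy version (the study above uses a continuous-time walk on a porous TiO2 geometry; this plain 2-D lattice walk is only an illustration):

```python
# Estimate D from the mean-square displacement of simple 2-D lattice walkers.
import numpy as np

rng = np.random.default_rng(5)
walkers, steps, a = 2000, 1000, 1.0          # walkers, steps, lattice constant
moves = rng.integers(0, 4, (walkers, steps))
dx = np.where(moves == 0, a, np.where(moves == 1, -a, 0.0)).cumsum(axis=1)
dy = np.where(moves == 2, a, np.where(moves == 3, -a, 0.0)).cumsum(axis=1)
msd = (dx**2 + dy**2).mean(axis=0)
D = msd[-1] / (4 * steps)                    # MSD = 4*D*t in two dimensions
print(f"estimated D = {D:.3f} a^2 per step") # ~0.25 for an unbiased walk
```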
NASA Astrophysics Data System (ADS)
Tasolamprou, A. C.; Mitov, M.; Zografopoulos, D. C.; Kriezis, E. E.
2009-03-01
Single-layer cholesteric liquid crystals exhibit a reflection coefficient which is at most 50% for unpolarized incident light. We give theoretical and experimental evidence of single-layer polymer-stabilized cholesteric liquid-crystalline structures that demonstrate hyper-reflective properties. Such original features are derived by the concurrent and randomly interlaced presence of both helicities. The fundamental properties of such structures are revealed by detailed numerical simulations based on a stochastic approach.
Aircraft Airframe Cost Estimation Using a Random Coefficients Model
1979-12-01
approach will also be used here. 2 Model Formulation. Several different types of equations could be used for the basic form of the CER, such as linear ... 5) Marcotte developed several CERs for fighter aircraft airframes using the log-linear model. A plot of the residuals from the CER for recurring ... of the natural logarithm. Ordinary Least Squares: The ordinary least squares procedure starts with the equation for the general linear model.
Random diffusion and leverage effect in financial markets.
Perelló, Josep; Masoliver, Jaume
2003-03-01
We prove that Brownian market models with random diffusion coefficients provide an exact measure of the leverage effect [J-P. Bouchaud et al., Phys. Rev. Lett. 87, 228701 (2001)]. This empirical fact asserts that past returns are anticorrelated with the future diffusion coefficient. Several models with random diffusion have been suggested, but without a quantitative study of the leverage effect. Our analysis lets us fully estimate all parameters involved and allows a deeper study of correlated random diffusion models that may have practical implications for many aspects of financial markets.
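The quantity at stake, the leverage correlation between a past return and future squared volatility, can be estimated in a few lines (synthetic returns here; one common normalization, not necessarily the paper's):

```python
# Empirical leverage function L(tau) = <r(t) * r(t+tau)^2> / <r^2>^2.
import numpy as np

rng = np.random.default_rng(6)
r = 0.01 * rng.standard_normal(20_000)   # placeholder daily returns

def leverage(r, tau):
    r2 = r**2
    return np.mean(r[:-tau] * r2[tau:]) / np.mean(r2)**2

for tau in (1, 5, 20):
    print(f"L({tau:2d}) = {leverage(r, tau):+.3f}")   # near zero for i.i.d. returns
```

On real equity data L(tau) is negative at small tau, which is the anticorrelation the abstract refers to.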
Fried, Itzhak; Koch, Christof
2014-01-01
Peristimulus time histograms are a widespread form of visualizing neuronal responses. Kernel convolution methods transform these histograms into a smooth, continuous probability density function. This provides an improved estimate of a neuron's actual response envelope. Here we develop a classifier, called the h-coefficient, to determine whether time-locked fluctuations in the firing rate of a neuron should be classified as a response or as random noise. Unlike previous approaches, the h-coefficient takes advantage of the more precise response envelope estimation provided by the kernel convolution method. The h-coefficient quantizes the smoothed response envelope and calculates the probability that a response of a given shape occurs by chance. We tested the efficacy of the h-coefficient in a large data set of Monte Carlo simulated smoothed peristimulus time histograms with varying response amplitudes, response durations, trial numbers, and baseline firing rates. Across all these conditions, the h-coefficient significantly outperformed more classical classifiers, with a mean false alarm rate of 0.004 and a mean hit rate of 0.494. We also tested the h-coefficient's performance on a set of neuronal responses recorded in humans. The algorithm behind the h-coefficient provides various opportunities for further adaptation and the flexibility to target specific parameters in a given data set. Our findings confirm that the h-coefficient can provide a conservative and powerful tool for the analysis of peristimulus time histograms with great potential for future development. PMID:25475352
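The kernel-convolution step the h-coefficient builds on can be sketched as follows (spike times, rates, and the 10 ms kernel width are assumed values):

```python
# Smooth a peristimulus time histogram into a continuous response envelope.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(7)
edges = np.arange(0.0, 1.001, 0.001)                   # 1 ms bins over 1 s
spikes = rng.uniform(0.0, 1.0, 300)                    # baseline firing
spikes = np.concatenate([spikes, rng.normal(0.5, 0.02, 80)])   # a time-locked response
psth, _ = np.histogram(spikes, bins=edges)
envelope = gaussian_filter1d(psth.astype(float), sigma=10.0)   # 10 ms Gaussian kernel
print(envelope.argmax())   # peaks near bin 500, i.e. the response latency
```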
Resonance energy transfer process in nanogap-based dual-color random lasing
NASA Astrophysics Data System (ADS)
Shi, Xiaoyu; Tong, Junhua; Liu, Dahe; Wang, Zhaona
2017-04-01
The resonance energy transfer (RET) process between Rhodamine 6G and oxazine in nanogap-based random systems is systematically studied by revealing the variations and fluctuations of the RET coefficients with pump power density. Three working regions (stable fluorescence, dynamic lasing, and stable lasing) are thus demonstrated in the dual-color random systems. The stable RET coefficients in the fluorescence and lasing regions are generally different and depend greatly on the donor concentration and the donor-acceptor ratio. These results may provide a way to reveal the regularities of the energy distribution in the random system and to design tunable multi-color coherent random lasers for colorful imaging.
NASA Astrophysics Data System (ADS)
Veselovskii, I.; Dubovik, O.; Kolgotin, A.; Lapyonok, T.; di Girolamo, P.; Summa, D.; Whiteman, D. N.; Mishchenko, M.; Tanré, D.
2010-11-01
Multiwavelength (MW) Raman lidars have demonstrated their potential to profile particle parameters; however, until now, the physical models used in retrieval algorithms for processing MW lidar data have been predominantly based on the Mie theory. This approach is applicable to the modeling of light scattering by spherically symmetric particles only and does not adequately reproduce the scattering by generally nonspherical desert dust particles. Here we present an algorithm based on a model of randomly oriented spheroids for the inversion of multiwavelength lidar data. The aerosols are modeled as a mixture of two aerosol components: one composed only of spherical and the second composed of nonspherical particles. The nonspherical component is an ensemble of randomly oriented spheroids with size-independent shape distribution. This approach has been integrated into an algorithm retrieving aerosol properties from the observations with a Raman lidar based on a tripled Nd:YAG laser. Such a lidar provides three backscattering coefficients, two extinction coefficients, and the particle depolarization ratio at a single or multiple wavelengths. Simulations were performed for a bimodal particle size distribution typical of desert dust particles. The uncertainty of the retrieved particle surface, volume concentration, and effective radius for 10% measurement errors is estimated to be below 30%. We show that if the effect of particle nonsphericity is not accounted for, the errors in the retrieved aerosol parameters increase notably. The algorithm was tested with experimental data from a Saharan dust outbreak episode, measured with the BASIL multiwavelength Raman lidar in August 2007. The vertical profiles of particle parameters as well as the particle size distributions at different heights were retrieved. The algorithm developed provided reasonable results, consistent with the available independent information about the observed aerosol event.
Statistical Analysis for Multisite Trials Using Instrumental Variables with Random Coefficients
ERIC Educational Resources Information Center
Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako
2012-01-01
Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…
Sample size calculations for stepped wedge and cluster randomised trials: a unified approach
Hemming, Karla; Taljaard, Monica
2016-01-01
Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
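For the parallel CRT piece of this comparison, the familiar design effect 1 + (m - 1)ICC already conveys the trade-off; a sketch for backing out the required cluster size given the number of clusters (the SW-CRT formulas in the paper are more involved and are not reproduced here; the function name is ours):

```python
# Smallest cluster size m such that k clusters of size m match the power of
# n_individual individually randomized participants under a given ICC.
def crt_cluster_size(n_individual: int, k_clusters: int, icc: float) -> int:
    for m in range(1, 100_000):
        if k_clusters * m >= n_individual * (1 + (m - 1) * icc):
            return m
    raise ValueError("infeasible: too few clusters for this ICC")

print(crt_cluster_size(n_individual=300, k_clusters=20, icc=0.05))   # -> 57
```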
NASA Astrophysics Data System (ADS)
Randrianalisoa, Jaona; Haussener, Sophia; Baillis, Dominique; Lipiński, Wojciech
2017-11-01
Radiative heat transfer is analyzed in participating media consisting of long cylindrical fibers with a diameter in the limit of geometrical optics. The absorption and scattering coefficients and the scattering phase function of the medium are determined based on the discrete-level medium geometry and the optical properties of individual fibers. The fibers are assumed to be randomly oriented and positioned inside the medium. Two approaches are employed: a volume-averaged two-intensity approach referred to as the multi-RTE approach, and a homogenized single-intensity approach referred to as the single-RTE approach. Both approaches require effective properties, determined using direct Monte Carlo ray tracing techniques. The macroscopic radiative transfer equations (for a single intensity or two volume-averaged intensities) with the corresponding effective properties are solved using Monte Carlo techniques and allow for the determination of the radiative flux distribution as well as the overall transmittance and reflectance of the medium. The results are compared against predictions by direct Monte Carlo simulation on the exact morphology. The effects of fiber volume fraction and optical properties on the effective radiative properties and the overall slab radiative characteristics are investigated. The single-RTE approach gives accurate predictions for high-porosity fibrous media (porosity of about 95%). The multi-RTE approach is recommended for isotropic fibrous media with porosity in the range of 79-95%.
Liu, Gaisheng; Lu, Zhiming; Zhang, Dongxiao
2007-01-01
A new approach has been developed for solving solute transport problems in randomly heterogeneous media using the Karhunen‐Loève‐based moment equation (KLME) technique proposed by Zhang and Lu (2004). The KLME approach combines the Karhunen‐Loève decomposition of the underlying random conductivity field and the perturbative and polynomial expansions of dependent variables including the hydraulic head, flow velocity, dispersion coefficient, and solute concentration. The equations obtained in this approach are sequential, and their structure is formulated in the same form as the original governing equations such that any existing simulator, such as Modular Three‐Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems (MT3DMS), can be directly applied as the solver. Through a series of two‐dimensional examples, the validity of the KLME approach is evaluated against the classical Monte Carlo simulations. Results indicate that under the flow and transport conditions examined in this work, the KLME approach provides an accurate representation of the mean concentration. For the concentration variance, the accuracy of the KLME approach is good when the conductivity variance is 0.5. As the conductivity variance increases up to 1.0, the mismatch on the concentration variance becomes large, although the mean concentration can still be accurately reproduced by the KLME approach. Our results also indicate that when the conductivity variance is relatively large, neglecting the effects of the cross terms between velocity fluctuations and local dispersivities, as done in some previous studies, can produce noticeable errors, and a rigorous treatment of the dispersion terms becomes more appropriate.
Unsupervised Metric Fusion Over Multiview Data by Graph Random Walk-Based Cross-View Diffusion.
Wang, Yang; Zhang, Wenjie; Wu, Lin; Lin, Xuemin; Zhao, Xiang
2017-01-01
Learning an ideal metric is crucial to many tasks in computer vision. Diverse feature representations may combat this problem from different aspects, as visual data objects described by multiple features can be decomposed into multiple views, which often provide complementary information. In this paper, we propose a cross-view fusion algorithm that leads to a similarity metric for multiview data by systematically fusing multiple similarity measures. Unlike existing paradigms, we focus on learning a distance measure by exploiting a graph structure of data samples, where an input similarity matrix can be improved through a propagation of graph random walk. In particular, we construct multiple graphs with each one corresponding to an individual view, and a cross-view fusion approach based on graph random walk is presented to derive an optimal distance measure by fusing multiple metrics. Our method is scalable to a large amount of data by enforcing sparsity through an anchor graph representation. To adaptively control the effects of different views, we dynamically learn view-specific coefficients, which are leveraged into the graph random walk to balance the multiple views. However, such a strategy may lead to an over-smooth similarity metric where affinities between dissimilar samples may be enlarged by excessively conducting cross-view fusion. Thus, we devise a heuristic approach to control the number of iterations in the fusion process in order to avoid over-smoothness. Extensive experiments conducted on real-world data sets validate the effectiveness and efficiency of our approach.
Scattering from randomly oriented circular discs with application to vegetation
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.
1984-01-01
A vegetation layer is modeled by a collection of randomly oriented circular discs over a half space. The backscattering coefficient from such a half space is computed using the radiative transfer theory. It is shown that significantly different results are obtained from this theory as compared with some earlier investigations using the same modeling approach but with restricted disc orientations. In particular, the backscattered cross polarized returns cannot have a fast increasing angular trend which is inconsistent with measurements. By setting the appropriate angle of orientation to zero the theory reduces to previously published results. Comparisons are shown with measurements taken from milo, corn and wheat and good agreements are obtained for both polarized and cross polarized returns.
Systematic bias of correlation coefficient may explain negative accuracy of genomic prediction.
Zhou, Yao; Vales, M Isabel; Wang, Aoxue; Zhang, Zhiwu
2017-09-01
Accuracy of genomic prediction is commonly calculated as the Pearson correlation coefficient between the predicted and observed phenotypes in the inference population by using cross-validation analysis. More frequently than expected, significant negative accuracies of genomic prediction have been reported in genomic selection studies. These negative values are surprising, given that the minimum value for prediction accuracy should hover around zero when randomly permuted data sets are analyzed. We reviewed the two common approaches for calculating the Pearson correlation and hypothesized that these negative accuracy values reflect potential bias owing to artifacts caused by the mathematical formulas used to calculate prediction accuracy. The first approach, Instant accuracy, calculates correlations for each fold and reports prediction accuracy as the mean of correlations across folds. The other approach, Hold accuracy, predicts all phenotypes in all folds and calculates the correlation between the observed and predicted phenotypes at the end of the cross-validation process. Using simulated and real data, we demonstrated that our hypothesis is true. Both approaches are biased downward under certain conditions. The biases become larger when more folds are employed and when the expected accuracy is low. The bias of Instant accuracy can be corrected using a modified formula. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
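The two definitions are easy to compare side by side. A sketch on null data showing how each quantity is computed (it illustrates the definitions only, not the paper's bias analysis, which depends on how predictions are generated within cross-validation):

```python
# "Instant" (mean of per-fold correlations) versus "Hold" (one correlation
# over all held-out predictions).
import numpy as np

rng = np.random.default_rng(8)
y_obs = rng.standard_normal(100)
y_hat = rng.standard_normal(100)          # stand-in cross-validated predictions
folds = np.array_split(rng.permutation(100), 10)

instant = np.mean([np.corrcoef(y_obs[f], y_hat[f])[0, 1] for f in folds])
hold = np.corrcoef(y_obs, y_hat)[0, 1]
print(f"Instant: {instant:+.3f}, Hold: {hold:+.3f}")
```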
NASA Astrophysics Data System (ADS)
Wang, Gang-Jin; Xie, Chi; Chen, Shou; Yang, Jiao-Jiao; Yang, Ming-Yan
2013-09-01
In this study, we first build two empirical cross-correlation matrices in the US stock market by two different methods, namely the Pearson's correlation coefficient and the detrended cross-correlation coefficient (DCCA coefficient). Then, combining the two matrices with the method of random matrix theory (RMT), we mainly investigate the statistical properties of cross-correlations in the US stock market. We choose the daily closing prices of 462 constituent stocks of the S&P 500 index as the research objects and select the sample data from January 3, 2005 to August 31, 2012. In the empirical analysis, we examine the statistical properties of cross-correlation coefficients, the distribution of eigenvalues, the distribution of eigenvector components, and the inverse participation ratio. From the two methods, we find some new results about the cross-correlations in the US stock market, which differ from the conclusions reached by previous studies. The empirical cross-correlation matrices constructed by the DCCA coefficient show several interesting properties at different time scales in the US stock market, which are useful to risk management and optimal portfolio selection, especially to the diversity of the asset portfolio. It will be interesting and meaningful future work to find the theoretical eigenvalue distribution of a completely random matrix R for the DCCA coefficient, because it does not obey the Marčenko-Pastur distribution.
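The RMT benchmark used here is the Marčenko-Pastur law: for T observations of N uncorrelated series, the correlation-matrix eigenvalues fall in [(1 - sqrt(N/T))^2, (1 + sqrt(N/T))^2]. A sketch of the comparison (dimensions chosen to roughly match the study; the data are pure noise):

```python
# Compare eigenvalues of a noise correlation matrix with Marchenko-Pastur bounds.
import numpy as np

rng = np.random.default_rng(9)
N, T = 462, 1900                          # stocks, trading days (approximate)
C = np.corrcoef(rng.standard_normal((T, N)), rowvar=False)
eig = np.linalg.eigvalsh(C)
q = N / T
lo, hi = (1 - q**0.5)**2, (1 + q**0.5)**2
outliers = np.sum((eig < lo) | (eig > hi))
print(f"MP bounds: [{lo:.2f}, {hi:.2f}], eigenvalues outside: {outliers}")
```

Eigenvalues of real return data that escape these bounds carry the market and sector structure; the open question raised above is the analogous null distribution for the DCCA coefficient.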
Guo, L-X; Li, J; Zeng, H
2009-11-01
We present an investigation of the electromagnetic scattering from a three-dimensional (3-D) object above a two-dimensional (2-D) randomly rough surface. A Message Passing Interface-based parallel finite-difference time-domain (FDTD) approach is used, and the uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one rough surface realization and shows that the computation time of our parallel FDTD algorithm is dramatically reduced relative to a single-processor implementation. Finally, the composite scattering coefficients versus the scattering and azimuthal angles are presented and analyzed for different conditions, including the surface roughness, the dielectric constants, the polarization, and the size of the 3-D object.
Analyzing crash frequency in freeway tunnels: A correlated random parameters approach.
Hou, Qinzhong; Tarko, Andrew P; Meng, Xianghai
2018-02-01
The majority of past road safety studies focused on open road segments while only a few focused on tunnels. Moreover, the past tunnel studies produced some inconsistent results about the safety effects of the traffic patterns, the tunnel design, and the pavement conditions. The effects of these conditions therefore remain unknown, especially for freeway tunnels in China. The study presented in this paper investigated the safety effects of these various factors utilizing a four-year period (2009-2012) of data as well as three models: 1) a random effects negative binomial model (RENB), 2) an uncorrelated random parameters negative binomial model (URPNB), and 3) a correlated random parameters negative binomial model (CRPNB). Of these three, the results showed that the CRPNB model provided better goodness-of-fit and offered more insights into the factors that contribute to tunnel safety. The CRPNB model not only allocated part of the otherwise unobserved heterogeneity to individual model parameters but also estimated the cross-correlations between these parameters. Furthermore, the study results showed that traffic volume, tunnel length, proportion of heavy trucks, curvature, and pavement rutting were associated with higher frequencies of traffic crashes, while the distance to the tunnel wall, distance to the adjacent tunnel, distress ratio, International Roughness Index (IRI), and friction coefficient were associated with lower crash frequencies. In addition, the effects of the heterogeneity of the proportion of heavy trucks, the curvature, the rutting depth, and the friction coefficient were identified and their inter-correlations were analyzed. Copyright © 2017 Elsevier Ltd. All rights reserved.
MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.
Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño
2013-01-01
In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method such that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one instead of n coefficients entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. In this regard, peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.
Macroscopic damping model for structural dynamics with random polycrystalline configurations
NASA Astrophysics Data System (ADS)
Yang, Yantao; Cui, Junzhi; Yu, Yifan; Xiang, Meizhen
2018-06-01
In this paper, a macroscopic damping model for the dynamical behavior of structures with random polycrystalline configurations at micro-nano scales is established. First, the global motion equation of a crystal is decomposed into a set of motion equations with independent single degrees of freedom (SDOF) along normal discrete modes, and then damping behavior is introduced into each SDOF motion. Through the interpolation of discrete modes, a continuous representation of damping effects for the crystal is obtained. Second, the expression for the damping coefficient is derived from the energy conservation law, and an approximate formula for the damping coefficient is given. Next, the continuous damping coefficient for a polycrystalline cluster is expressed, the continuous dynamical equation with a damping term is obtained, and the concrete damping coefficients for a polycrystalline Cu sample are shown. Finally, by using a statistical two-scale homogenization method, the macroscopic homogenized dynamical equation containing a damping term for structures with random polycrystalline configurations at micro-nano scales is set up.
Gonzalez-Vazquez, J P; Anta, Juan A; Bisquert, Juan
2009-11-28
The random walk numerical simulation (RWNS) method is used to compute diffusion coefficients for hopping transport in a fully disordered medium at finite carrier concentrations. We use Miller-Abrahams jumping rates and an exponential distribution of energies to compute the hopping times in the random walk simulation. The computed diffusion coefficient shows an exponential dependence on the Fermi level and Arrhenius behavior with respect to temperature. This result indicates that there is a well-defined transport level implicit to the system dynamics. To establish the origin of this transport level we construct histograms to monitor the energies of the most visited sites. In addition, we construct "corrected" histograms where backward moves are removed. Since these moves do not contribute to transport, these histograms provide a better estimation of the effective transport level energy. The analysis of this concept in connection with the Fermi-level dependence of the diffusion coefficient and the regime of interest for the functioning of dye-sensitised solar cells is thoroughly discussed.
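For reference, the Miller-Abrahams rate named above makes uphill hops thermally activated and downhill hops energy-independent. A sketch, omitting the distance (tunneling) factor since the abstract concerns energetics (units and the attempt frequency nu0 are placeholders):

```python
# Miller-Abrahams hopping rate between two site energies.
import numpy as np

def miller_abrahams(e_from: float, e_to: float, kT: float, nu0: float = 1.0) -> float:
    dE = e_to - e_from
    return nu0 * np.exp(-dE / kT) if dE > 0 else nu0

rng = np.random.default_rng(10)
energies = -rng.exponential(0.1, size=5)       # exponential density of states (eV)
print(miller_abrahams(energies[0], energies[1], kT=0.025))
```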
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi
2016-06-01
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA
NASA Astrophysics Data System (ADS)
Chandra, Abhijit; Chattopadhyay, Sudipta
2015-01-01
In this communication, we propose a novel design strategy for a multiplier-less low-pass finite impulse response (FIR) filter with the aid of a recent evolutionary optimisation technique, known as the self-organising random immigrants genetic algorithm. Individual impulse response coefficients of the proposed filter have been encoded as sums of signed powers-of-two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete coefficient FIR filter have been considered. The role of the crossover probability of the optimisation technique has been evaluated on the overall performance of the proposed strategy. For this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples of different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies of multiplier-less FIR filter have also been included in this article for the purpose of comparison. Critical analysis of the results unambiguously establishes the usefulness of our proposed approach for the hardware efficient design of digital filters.
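The coefficient representation being optimized, a sum of signed powers-of-two, removes multipliers because each term becomes a shift-and-add. A greedy toy encoder (the paper searches this space with a GA; this routine is ours, only to show the representation):

```python
# Approximate a real coefficient by up to `terms` signed powers of two.
def spt_encode(value: float, terms: int = 3, max_shift: int = 8):
    out, residual = [], value
    for _ in range(terms):
        if residual == 0.0:
            break
        best = min((s * 2.0**-k for k in range(max_shift + 1) for s in (1, -1)),
                   key=lambda t: abs(residual - t))
        out.append(best)
        residual -= best
    return out, residual

print(spt_encode(0.361))   # e.g. ([0.25, 0.125, -0.015625], ~0.0016)
```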
Scale-free Graphs for General Aviation Flight Schedules
NASA Technical Reports Server (NTRS)
Alexandov, Natalia M. (Technical Monitor); Kincaid, Rex K.
2003-01-01
In the late 1990s a number of researchers noticed that networks in biology, sociology, and telecommunications exhibited similar characteristics unlike standard random networks. In particular, they found that the cumulative degree distributions of these graphs followed a power law rather than a binomial distribution and that their clustering coefficients tended to a nonzero constant as the number of nodes, n, became large, rather than O(1/n). Moreover, these networks shared an important property with traditional random graphs: as n becomes large, the average shortest path length scales with log n. This latter property has been coined the small-world property. When taken together, these three properties (small world, power law, and constant clustering coefficient) describe what are now most commonly referred to as scale-free networks. Since 1997 at least six books and over 400 articles have been written about scale-free networks. In this manuscript an overview of the salient characteristics of scale-free networks is given. Computational experience will be provided for two mechanisms that grow (dynamic) scale-free graphs. Additional computational experience will be given for constructing (static) scale-free graphs via a tabu search optimization approach. Finally, a discussion of potential applications to general aviation networks is given.
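The growth mechanism behind the power-law degree distribution is preferential attachment; a compact sketch of one such generator (parameters are illustrative):

```python
# Barabasi-Albert-style growth: new nodes attach to m existing nodes with
# probability proportional to degree, via a repeated-node list.
import random
from collections import Counter

random.seed(11)
m, n = 2, 5000
targets = [0, 1]                # each node appears once per unit of degree
edges = [(0, 1)]
for new in range(2, n):
    chosen = set()
    while len(chosen) < m:
        chosen.add(random.choice(targets))   # degree-proportional selection
    for t in chosen:
        edges.append((new, t))
        targets += [new, t]

deg = Counter(v for e in edges for v in e)
print(max(deg.values()), min(deg.values()))  # a few large hubs, many leaves
```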
Quantum simulation of an ultrathin body field-effect transistor with channel imperfections
NASA Astrophysics Data System (ADS)
Vyurkov, V.; Semenikhin, I.; Filippov, S.; Orlikovsky, A.
2012-04-01
An efficient program for the all-quantum simulation of nanometer field-effect transistors is elaborated. The model is based on the Landauer-Buttiker approach. Our calculation of transmission coefficients employs a transfer-matrix technique involving arbitrary-precision (multiprecision) arithmetic to cope with evanescent modes. Modified in this way, the transfer-matrix technique turns out to be much faster in practical simulations than the scattering-matrix technique. Results of the simulation demonstrate the impact of realistic channel imperfections (random charged centers and wall roughness) on transistor characteristics. The Landauer-Buttiker approach is extended to incorporate calculation of the noise at an arbitrary temperature. We also validate the ballistic Landauer-Buttiker approach for the usual situation when heavily doped contacts are indispensably included in the simulation region.
The influence of statistical properties of Fourier coefficients on random Gaussian surfaces.
de Castro, C P; Luković, M; Andrade, R F S; Herrmann, H J
2017-05-16
Many examples of natural systems can be described by random Gaussian surfaces. Much can be learned by analyzing the Fourier expansion of the surfaces, from which it is possible to determine the corresponding Hurst exponent and consequently establish the presence of scale invariance. We show that this symmetry is not affected by the distribution of the modulus of the Fourier coefficients. Furthermore, we investigate the role of the Fourier phases of random surfaces. In particular, we show how the surface is affected by a non-uniform distribution of phases.
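The Fourier construction studied here can be sketched for a 1-D profile: impose a power-law spectrum S(k) ~ k^-(2H+1) on the moduli and draw the phases uniformly (the Hurst exponent H below is an assumed value):

```python
# Synthesize a Gaussian random profile with Hurst exponent H via Fourier filtering.
import numpy as np

rng = np.random.default_rng(12)
n, H = 4096, 0.7
k = np.fft.rfftfreq(n)
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-(2 * H + 1) / 2)        # |F(k)| ~ sqrt(S(k))
phases = rng.uniform(0.0, 2 * np.pi, k.size)
surface = np.fft.irfft(amp * np.exp(1j * phases), n)
surface -= surface.mean()
```

Replacing the uniform phases with a non-uniform distribution is exactly the perturbation whose effect the abstract investigates.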
Sample size requirements for the design of reliability studies: precision consideration.
Shieh, Gwowen
2014-09-01
In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies that have not previously been discussed in the literature.
Chapman-Enskog expansion for the Vicsek model of self-propelled particles
NASA Astrophysics Data System (ADS)
Ihle, Thomas
2016-08-01
Using the standard Vicsek model, I show how the macroscopic transport equations can be systematically derived from microscopic collision rules. The approach starts with the exact evolution equation for the N-particle probability distribution and, after making the mean-field assumption of molecular chaos, leads to a multi-particle Enskog-type equation. This equation is treated by a non-standard Chapman-Enskog expansion to extract the macroscopic behavior. The expansion includes terms up to third order in a formal expansion parameter ɛ, and involves a fast time scale. A self-consistent closure of the moment equations is presented that leads to a continuity equation for the particle density and a Navier-Stokes-like equation for the momentum density. Expressions for all transport coefficients in these macroscopic equations are given explicitly in terms of microscopic parameters of the model. The transport coefficients depend on specific angular integrals which are evaluated asymptotically in the limit of infinitely many collision partners, using an analogy to a random walk. The consistency of the Chapman-Enskog approach is checked by an independent calculation of the shear viscosity using a Green-Kubo relation.
Persistent-random-walk approach to anomalous transport of self-propelled particles
NASA Astrophysics Data System (ADS)
Sadjadi, Zeinab; Shaebani, M. Reza; Rieger, Heiko; Santen, Ludger
2015-06-01
The motion of self-propelled particles is modeled as a persistent random walk. An analytical framework is developed that allows the derivation of exact expressions for the time evolution of arbitrary moments of the persistent walk's displacement. It is shown that the interplay of step length and turning angle distributions and self-propulsion produces various signs of anomalous diffusion at short time scales and asymptotically a normal diffusion behavior with a broad range of diffusion coefficients. The crossover from the anomalous short-time behavior to the asymptotic diffusion regime is studied and the parameter dependencies of the crossover time are discussed. Higher moments of the displacement distribution are calculated and analytical expressions for the time evolution of the skewness and the kurtosis of the distribution are presented.
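A minimal 2-D persistent random walk reproduces the crossover described above (speed and turning-angle spread are invented parameters):

```python
# Persistent walk: heading diffuses by normal turning angles; the MSD is
# ballistic (~t^2) before the persistence time and diffusive (~t) after.
import numpy as np

rng = np.random.default_rng(13)
walkers, steps, v, sigma = 1000, 2000, 1.0, 0.3
theta = np.cumsum(rng.normal(0.0, sigma, (walkers, steps)), axis=1)
x = np.cumsum(v * np.cos(theta), axis=1)
y = np.cumsum(v * np.sin(theta), axis=1)
msd = (x**2 + y**2).mean(axis=0)
print(msd[9] / 10**2)           # ~v^2 at short times (ballistic)
print(msd[-1] / (4 * steps))    # effective diffusion coefficient at long times
```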
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Baker, Nathan A.; Li, Xiantao
We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grain variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
Bayesian dynamic modeling of time series of dengue disease case counts.
Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander
2017-07-01
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the models' short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model included first-order random walk time-varying coefficients for the calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Beyond the computational challenges, interpreting the results requires a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage error for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
Diffusion and mobility of atomic particles in a liquid
NASA Astrophysics Data System (ADS)
Smirnov, B. M.; Son, E. E.; Tereshonok, D. V.
2017-11-01
The diffusion coefficient of a test atom or molecule in a liquid is determined for the mechanism where the displacement of the test molecule results from the vibrations and motion of the liquid molecules surrounding it and of the test particle itself. This leads to a random change in the coordinate of the test molecule, which eventually results in diffusive motion of the test particle in space. Two model parameters for the interaction between a particle and the liquid are used to find the activation energy of the diffusion process under consideration: the gas-kinetic cross section for scattering of test molecules in the parent gas and the Wigner-Seitz radius for test molecules. In the context of this approach, we have calculated the diffusion coefficient of atoms and molecules in water, where, based on experimental data, we have constructed the dependence of the activation energy for the diffusion of test molecules in water on the interaction parameter, as well as the temperature dependence of the diffusion coefficient of atoms or molecules in water within the models considered. The statistically averaged difference between the values calculated for different test molecules in water with each of the presented models does not exceed 10% of the diffusion coefficient itself. We have considered the diffusion of clusters in water and present the dependence of the diffusion coefficient on cluster size. The accuracy of the presented formulas for the diffusion coefficient of atomic particles in water is estimated to be 50%.
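The random-displacement picture can be illustrated by recovering a diffusion coefficient from the mean-square displacement via the Einstein relation <r^2> = 6Dt (a toy sketch with assumed step length and time scale, not the authors' model):

```python
# Estimate a diffusion coefficient from the mean-square displacement
# of an unbiased 3-D random walk of fixed step length.
import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_steps = 2000, 1000
step = 1.0e-10   # assumed step length, m (order of a molecular displacement)
dt = 1.0e-12     # assumed time per step, s

# random unit-vector steps of fixed length
v = rng.normal(size=(n_walkers, n_steps, 3))
v *= step / np.linalg.norm(v, axis=2, keepdims=True)
r2 = (v.cumsum(axis=1) ** 2).sum(axis=2).mean(axis=0)   # MSD vs time

t = dt * np.arange(1, n_steps + 1)
D = r2[-1] / (6 * t[-1])
print(f"D ~ {D:.3e} m^2/s")   # expected: step^2/(6*dt) = 1.7e-9 m^2/s
```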
NASA Astrophysics Data System (ADS)
Nguyen Thi, T. B.; Yokoyama, A.; Ota, K.; Kodama, K.; Yamashita, K.; Isogai, Y.; Furuichi, K.; Nonomura, C.
2014-05-01
One of the most important challenges in the injection molding process for short-glass fiber/thermoplastic composite parts is predicting the fiber orientation, since it controls the mechanical and physical properties of the final parts. Folgar and Tucker added to the Jeffery equation a diffusive term, which introduces a phenomenological coefficient for modeling the randomizing effect of mechanical interactions between fibers, to predict fiber orientation in concentrated suspensions. Their experiments indicated that this coefficient depends on the fiber volume fraction and aspect ratio. However, a definition of the fiber interaction coefficient, which is essential for fiber orientation simulations, has still not been established. Consequently, this study proposed a fiber interaction model, developed through a fiber dynamics simulation, in order to obtain a global fiber interaction coefficient. The model assumes that the coefficient is a function of the fiber concentration, aspect ratio, and angular velocity. The proposed model was incorporated into the computer-aided engineering simulation package C-Mold. Short-glass fiber/polyamide-6 composites were produced by injection molding with fiber weight concentrations of 30 wt.%, 50 wt.%, and 70 wt.%. The physical properties of these composites were examined, and their fiber orientation distributions were measured by micro-computed-tomography (μ-CT) equipment. The simulation results showed good agreement with the experimental results.
Clarke, Diana E; Narrow, William E; Regier, Darrel A; Kuramoto, S Janet; Kupfer, David J; Kuhl, Emily A; Greiner, Lisa; Kraemer, Helena C
2013-01-01
This article discusses the design, sampling strategy, implementation, and data analytic processes of the DSM-5 Field Trials. The DSM-5 Field Trials were conducted by using a test-retest reliability design with a stratified sampling approach across six adult and four pediatric sites in the United States and one adult site in Canada. A stratified random sampling approach was used to enhance precision in the estimation of the reliability coefficients. A web-based research electronic data capture system was used for simultaneous data collection from patients and clinicians across sites and for centralized data management. Weighted descriptive analyses, intraclass kappa and intraclass correlation coefficients for stratified samples, and receiver operating curves were computed. The DSM-5 Field Trials capitalized on advances since DSM-III and DSM-IV in statistical measures of reliability (i.e., intraclass kappa for stratified samples) and other recently developed measures to determine confidence intervals around kappa estimates. Diagnostic interviews using DSM-5 criteria were conducted by 279 clinicians of varied disciplines who received training comparable to what would be available to any clinician after publication of DSM-5. Overall, 2,246 patients with various diagnoses and levels of comorbidity were enrolled, of which over 86% were seen for two diagnostic interviews. A range of reliability coefficients were observed for the categorical diagnoses and dimensional measures. Multisite field trials and training comparable to what would be available to any clinician after publication of DSM-5 provided "real-world" testing of DSM-5 proposed diagnoses.
Testing a single regression coefficient in high dimensional linear models
Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling
2017-01-01
In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668
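A rough numpy sketch of the screening-then-OLS idea as we understand it from the abstract (the screening rule, the number of retained controls, and the test details here are assumptions, not the authors' code):

```python
# Correlated Predictors Screening (CPS), sketched: keep the predictors most
# correlated with the target covariate as controls, then run OLS and a
# classical z-test on the target coefficient.
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 120, 500, 10          # n << p; keep k screened controls (assumed)
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[0], beta[1] = 1.0, 0.5
y = X @ beta + rng.normal(size=n)

target = 0
corr = np.abs([np.corrcoef(X[:, target], X[:, j])[0, 1] for j in range(p)])
corr[target] = -np.inf
controls = np.argsort(corr)[-k:]            # most correlated predictors

Z = np.column_stack([X[:, target], X[:, controls], np.ones(n)])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ coef
sigma2 = resid @ resid / (n - Z.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(Z.T @ Z)[0, 0])
print(f"beta_hat = {coef[0]:.3f}, z = {coef[0] / se:.2f}")
```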
Online technique for detecting state of onboard fiber optic gyroscope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miao, Zhiyong; He, Kunpeng, E-mail: pengkhe@126.com; Pang, Shuwan
2015-02-15
Although the angle random walk (ARW) of a fiber optic gyroscope (FOG) has been well modeled and identified before being integrated into the high-accuracy attitude control system of a satellite, aging and unexpected failures can affect the performance of the FOG after launch, resulting in variation of the ARW coefficient. The ARW coefficient can therefore be regarded as an indicator of the "state of health" of the FOG for diagnosis. The Allan variance method can be used to estimate the ARW coefficient of a FOG; however, it requires a large amount of data to be stored, and the procedure of drawing slope lines for estimation is laborious. To overcome these barriers, a weighted state-space model that directly models the ARW was established for the FOG, yielding a nonlinear state-space model. A neural extended-Kalman filter algorithm was then implemented to estimate and track the variation of the ARW in real time. The experimental results show that the proposed approach is valid for detecting the state of the FOG. Moreover, the proposed technique effectively avoids the storage of data.
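A minimal sketch of the Allan-variance route to the ARW coefficient that the paper seeks to avoid (our illustration; the sample rate and noise level are invented). For white rate noise, sigma(tau) = N/sqrt(tau), so N can be read off at tau = 1 s:

```python
# Estimate the angle-random-walk coefficient N from the Allan variance
# of a simulated white-noise rate signal.
import numpy as np

def allan_variance(rate, m):
    """Non-overlapping Allan variance at cluster size m samples."""
    k = len(rate) // m
    means = rate[:k * m].reshape(k, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

fs = 100.0                     # sample rate, Hz (assumed)
N_true = 0.01                  # ARW coefficient, (unit)/sqrt(s) (assumed)
rate = N_true * np.sqrt(fs) * np.random.default_rng(2).normal(size=200_000)

tau = 1.0
avar = allan_variance(rate, int(tau * fs))
print(f"ARW ~ {np.sqrt(avar):.4f}  (true {N_true})")
```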
Estimation of regionalized compositions: A comparison of three methods
Pawlowsky, V.; Olea, R.A.; Davis, J.C.
1995-01-01
A regionalized composition is a random vector function whose components are positive and sum to a constant at every point of the sampling region. Consequently, the components of a regionalized composition are necessarily spatially correlated. This spatial dependence, induced by the constant-sum constraint, is a spurious spatial correlation and may lead to misinterpretations of statistical analyses. Furthermore, the cross-covariance matrices of the regionalized composition are singular, as is the coefficient matrix of the cokriging system of equations. Three methods of performing estimation or prediction of a regionalized composition at unsampled points are discussed: (1) the direct approach of estimating each variable separately; (2) the basis method, which is applicable only when a random function is available that can be regarded as the size of the regionalized composition under study; (3) the logratio approach, using the additive-log-ratio transformation proposed by J. Aitchison, which allows statistical analysis of compositional data. We present a brief theoretical review of these three methods and compare them using compositional data from the Lyons West Oil Field in Kansas (USA). It is shown that, although there are no important numerical differences, the direct approach leads to invalid results, whereas the basis method and the additive-log-ratio approach are comparable. © 1995 International Association for Mathematical Geology.
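A sketch of the additive-log-ratio transformation mentioned in method (3) (standard definition; the composition below is invented): transform, work in unconstrained space, then back-transform.

```python
# Aitchison's additive log-ratio (alr) transform and its inverse.
import numpy as np

def alr(x):
    """Additive log-ratio transform; the last component is the divisor."""
    return np.log(x[..., :-1] / x[..., -1:])

def alr_inv(z):
    """Inverse alr: recover a composition summing to 1."""
    e = np.exp(np.concatenate([z, np.zeros(z.shape[:-1] + (1,))], axis=-1))
    return e / e.sum(axis=-1, keepdims=True)

comp = np.array([0.6, 0.3, 0.1])   # e.g. mineral fractions at one location
z = alr(comp)                       # unconstrained coordinates
print(alr_inv(z))                   # -> [0.6 0.3 0.1]
```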
"L"-Bivariate and "L"-Multivariate Association Coefficients. Research Report. ETS RR-08-40
ERIC Educational Resources Information Center
Kong, Nan; Lewis, Charles
2008-01-01
Given a system of multiple random variables, a new measure called the "L"-multivariate association coefficient is defined using (conditional) entropy. Unlike traditional correlation measures, the L-multivariate association coefficient measures the multiassociations or multirelations among the multiple variables in the given system; that…
NASA Astrophysics Data System (ADS)
Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel
2017-04-01
The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Column Carbon Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different than others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. We compare it to more classical approaches through an Observing System Simulation Experiment (OSSE) on a global scale. By changing the size of the random and systematic errors in the OSSE, we can determine the corresponding spatial and temporal resolutions at which useful flux signals could be detected from the OCO-2 data.
Optimal sample sizes for the design of reliability studies: power consideration.
Shieh, Gwowen
2014-09-01
Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
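Our reading of the exact F-based power computation for the one-way random-effects ICC test (a sketch assuming scipy; Shieh's own programs may differ in detail): reject H0: rho <= rho0 when F = MSB/MSW exceeds lambda(rho0) times the F critical value.

```python
# Exact power for testing the intraclass correlation in the
# one-way random-effects model with k groups of size n.
from scipy.stats import f

def icc_power(rho0, rho1, k, n, alpha=0.05):
    lam = lambda r: 1 + n * r / (1 - r)     # variance-ratio multiplier
    fcrit = f.ppf(1 - alpha, k - 1, k * (n - 1))
    # under the true rho1, F / lam(rho1) follows a central F distribution
    return f.sf(lam(rho0) * fcrit / lam(rho1), k - 1, k * (n - 1))

print(f"power = {icc_power(rho0=0.3, rho1=0.5, k=30, n=4):.3f}")
```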
Lion, Sébastien
2009-09-07
Taking into account the interplay between spatial ecological dynamics and selection is a major challenge in evolutionary ecology. Although inclusive fitness theory has proven to be a very useful tool to unravel the interactions between spatial genetic structuring and selection, applications of the theory usually rely on simplifying demographic assumptions. In this paper, I attempt to bridge the gap between spatial demographic models and kin selection models by providing a method to compute approximations for relatedness coefficients in a spatial model with empty sites. Using spatial moment equations, I provide an approximation of nearest-neighbour relatedness on random regular networks, and show that this approximation performs much better than the ordinary pair approximation. I discuss the connection between the relatedness coefficients I define and those used in population genetics, and sketch some potential extensions of the theory.
Some variance reduction methods for numerical stochastic homogenization
Blanc, X.; Le Bris, C.; Legoll, F.
2016-01-01
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
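One classical variance reduction device of the kind surveyed, antithetic variates, shown on a deliberately simple surrogate problem (our illustration, not the homogenization setting itself):

```python
# Antithetic variates: pair each Gaussian draw with its mirror image.
# For a monotone integrand the paired estimator has lower variance.
import numpy as np

rng = np.random.default_rng(3)
g = lambda z: np.exp(0.5 * z)      # stand-in for an effective coefficient

z = rng.normal(size=100_000)
plain = g(z)
anti = 0.5 * (g(z) + g(-z))        # averages over antithetic pairs

print(f"plain MC:      mean {plain.mean():.4f}, var {plain.var():.4f}")
print(f"antithetic MC: mean {anti.mean():.4f}, var {anti.var():.4f}")
```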
Modeled streamflow metrics on small, ungaged stream reaches in the Upper Colorado River Basin
Reynolds, Lindsay V.; Shafroth, Patrick B.
2016-01-20
Modeling streamflow is an important approach for understanding landscape-scale drivers of flow and estimating flows where there are no streamgage records. In this study conducted by the U.S. Geological Survey in cooperation with Colorado State University, the objectives were to model streamflow metrics on small, ungaged streams in the Upper Colorado River Basin and to identify streams that are potentially threatened with becoming intermittent under drier climate conditions. The Upper Colorado River Basin is a region that is critical for water resources and is also projected to experience large future shifts toward a drier climate. A random forest modeling approach was used to model the relationship between streamflow metrics and environmental variables. Flow metrics were then projected to ungaged reaches in the Upper Colorado River Basin using environmental variables for each stream, represented as raster cells, in the basin. Last, the projected random forest models of minimum flow coefficient of variation and specific mean daily flow were used to highlight streams with a minimum flow coefficient of variation greater than 61.84 percent and a specific mean daily flow less than 0.096; these streams are suggested to be the most likely to shift to intermittent flow regimes under drier climate conditions. Map projection products can help scientists, land managers, and policymakers understand current hydrology in the Upper Colorado River Basin and make informed decisions regarding water resources. With knowledge of which streams are likely to undergo significant drying in the future, managers and scientists can plan for stream-dependent ecosystems and human water users.
Search for Directed Networks by Different Random Walk Strategies
NASA Astrophysics Data System (ADS)
Zhu, Zi-Qi; Jin, Xiao-Ling; Huang, Zhi-Long
2012-03-01
A comparative study is carried out on the efficiency of five different random walk strategies searching on directed networks constructed from several typical complex networks. Because differences in the search efficiency of the strategies are rooted in network clustering, the clustering coefficient seen by a random walker on directed networks is defined and computed to be half that of the corresponding undirected networks. The search processes are performed on directed networks based on the Erdős-Rényi model, the Watts-Strogatz model, the Barabási-Albert model and a clustered scale-free network model. It is found that the self-avoiding random walk strategy is the best search strategy for such directed networks. Compared to the unrestricted random walk strategy, path-iteration-avoiding random walks can also make the search process much more efficient. However, no-triangle-loop and no-quadrangle-loop random walks do not improve the search efficiency as expected, in contrast to their behavior on undirected networks, since the clustering coefficient of directed networks is smaller than that of undirected networks.
A Graph Theory Practice on Transformed Image: A Random Image Steganography
Thanikaiselvan, V.; Arulmozhivarman, P.; Subashanthini, S.; Amirtharajan, Rengarajan
2013-01-01
The modern information age is enriched with advanced network communication expertise but unfortunately at the same time encounters countless security issues when dealing with secret and/or private information. The storage and transmission of secret information have become highly essential and have led to a deluge of research in this field. In this paper, an effort has been made to combine graceful graphs with the integer wavelet transform (IWT) to implement random image steganography for secure communication. The implementation begins with the conversion of the cover image into wavelet coefficients through the IWT, followed by embedding the secret image in randomly selected coefficients through graph theory. Finally, the stego-image is obtained by applying the inverse IWT. This method provides a maximum peak signal-to-noise ratio (PSNR) of 44 dB for 266,646 bits. Thus, the proposed method gives high imperceptibility through a high PSNR value, high embedding capacity in the cover image due to the adaptive embedding scheme, and high robustness against blind attack through the graph-theoretic random selection of coefficients. PMID:24453857
Biases and Standard Errors of Standardized Regression Coefficients
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Chan, Wai
2011-01-01
The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular text books are consistent only when the population value of the regression coefficient is zero. The sample…
ERIC Educational Resources Information Center
Frees, Edward W.; Kim, Jee-Seon
2006-01-01
Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…
Emergence of small-world structure in networks of spiking neurons through STDP plasticity.
Basalyga, Gleb; Gleiser, Pablo M; Wennekers, Thomas
2011-01-01
In this work, we use a complex network approach to investigate how a neural network structure changes under synaptic plasticity. In particular, we consider a network of conductance-based, single-compartment integrate-and-fire excitatory and inhibitory neurons. Initially the neurons are connected randomly with uniformly distributed synaptic weights. The weights of excitatory connections can be strengthened or weakened during spiking activity by the mechanism known as spike-timing-dependent plasticity (STDP). We extract a binary directed connection matrix by thresholding the weights of the excitatory connections at every simulation step and calculate its major topological characteristics such as the network clustering coefficient, characteristic path length and small-world index. We numerically demonstrate that, under certain conditions, a nontrivial small-world structure can emerge from a random initial network subject to STDP learning.
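A sketch of the reported topological measurements on a surrogate thresholded weight matrix (assumes networkx; the sizes, densities, and threshold are invented):

```python
# Clustering coefficient, characteristic path length, and a small-world
# index for a binary directed graph obtained by thresholding weights.
import networkx as nx
import numpy as np

rng = np.random.default_rng(4)
n = 200
W = rng.random((n, n)) * (rng.random((n, n)) < 0.05)   # sparse weights
np.fill_diagonal(W, 0)
G = nx.from_numpy_array((W > 0.5).astype(int), create_using=nx.DiGraph)

C = nx.average_clustering(G)                  # directed clustering
U = G.to_undirected()
giant = U.subgraph(max(nx.connected_components(U), key=len))
L = nx.average_shortest_path_length(giant)    # characteristic path length

# random reference graph with matched size and density
R = nx.gnp_random_graph(n, G.number_of_edges() / (n * (n - 1)),
                        seed=1, directed=True)
Cr = nx.average_clustering(R)
Ru = R.to_undirected()
Lr = nx.average_shortest_path_length(
        Ru.subgraph(max(nx.connected_components(Ru), key=len)))
print(f"small-world index ~ {(C / Cr) / (L / Lr):.2f}")
```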
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
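Not the adaptive AIS algorithm itself, but a fixed importance-sampling sketch of the core idea (numpy/scipy assumed; the limit state is a toy): sample near the failure region and reweight by the density ratio to estimate a small failure probability.

```python
# Importance sampling for a rare failure probability P(X > beta),
# with the proposal density centered on the failure boundary.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
beta = 3.5                                 # failure region x > beta (assumed)

x = rng.normal(loc=beta, size=50_000)      # proposal centered on the boundary
w = norm.pdf(x) / norm.pdf(x, loc=beta)    # likelihood-ratio weights
p_hat = np.mean((x > beta) * w)
print(f"IS estimate {p_hat:.3e} vs exact {norm.sf(beta):.3e}")
```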
Brownian dynamics simulations on a hypersphere in 4-space
NASA Astrophysics Data System (ADS)
Nissfolk, Jarl; Ekholm, Tobias; Elvingson, Christer
2003-10-01
We describe an algorithm for performing Brownian dynamics simulations of particles diffusing on S3, a hypersphere in four dimensions. The system is chosen due to recent interest in doing computer simulations in a closed space where periodic boundary conditions can be avoided. We specifically address the question of how to generate a random walk on the 3-sphere, starting from the solution of the corresponding diffusion equation, and we also discuss an efficient implementation based on controlled approximations. Since S3 is a closed manifold, the average square displacement during a random walk is no longer proportional to the elapsed time, as it is in R3. Instead, its time rate of change decreases continuously and approaches zero as time becomes large. We show, however, that the effective diffusion coefficient can still be obtained from the time dependence of the square displacement.
NASA Astrophysics Data System (ADS)
Fedrigo, Melissa; Newnham, Glenn J.; Coops, Nicholas C.; Culvenor, Darius S.; Bolton, Douglas K.; Nitschke, Craig R.
2018-02-01
Light detection and ranging (lidar) data have been increasingly used for forest classification due to its ability to penetrate the forest canopy and provide detail about the structure of the lower strata. In this study we demonstrate forest classification approaches using airborne lidar data as inputs to random forest and linear unmixing classification algorithms. Our results demonstrated that both random forest and linear unmixing models identified a distribution of rainforest and eucalypt stands that was comparable to existing ecological vegetation class (EVC) maps based primarily on manual interpretation of high resolution aerial imagery. Rainforest stands were also identified in the region that have not previously been identified in the EVC maps. The transition between stand types was better characterised by the random forest modelling approach. In contrast, the linear unmixing model placed greater emphasis on field plots selected as endmembers which may not have captured the variability in stand structure within a single stand type. The random forest model had the highest overall accuracy (84%) and Cohen's kappa coefficient (0.62). However, the classification accuracy was only marginally better than linear unmixing. The random forest model was applied to a region in the Central Highlands of south-eastern Australia to produce maps of stand type probability, including areas of transition (the 'ecotone') between rainforest and eucalypt forest. The resulting map provided a detailed delineation of forest classes, which specifically recognised the coalescing of stand types at the landscape scale. This represents a key step towards mapping the structural and spatial complexity of these ecosystems, which is important for both their management and conservation.
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
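The standard construction behind such generators conditions one variable on the other; a sketch (our reconstruction in Python, not the original FORTRAN routine):

```python
# Generate correlated normal pairs: X from Z1, then Y from the
# conditional distribution via rho*Z1 + sqrt(1 - rho^2)*Z2.
import numpy as np

def bivariate_normal(n, mx, my, sx, sy, rho, rng):
    z1 = rng.normal(size=n)
    z2 = rng.normal(size=n)
    x = mx + sx * z1
    y = my + sy * (rho * z1 + np.sqrt(1 - rho**2) * z2)
    return x, y

x, y = bivariate_normal(100_000, 1.0, -2.0, 2.0, 0.5, rho=0.7,
                        rng=np.random.default_rng(6))
print(f"sample correlation: {np.corrcoef(x, y)[0, 1]:.4f}")   # ~0.7
```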
NASA Astrophysics Data System (ADS)
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different required precisions for fixed-point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images, with false matches rejected using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented in the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized using an application-specific integrated circuit digital design flow in 180-nm CMOS technology, as well as on a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
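A software-level sketch of the RANSAC estimation loop (our illustration; for brevity it fits an affine model with numpy rather than the paper's four-submodel projective decomposition in hardware):

```python
# RANSAC: repeatedly fit a model to minimal random samples of point
# matches and keep the hypothesis with the most inliers.
import numpy as np

def fit_affine(src, dst):
    A = np.hstack([src, np.ones((len(src), 1))])    # [x y 1] -> (x', y')
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M                                        # 3x2 affine matrix

def ransac_affine(src, dst, iters=500, tol=2.0, seed=7):
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    H = np.hstack([src, np.ones((len(src), 1))])
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        inliers = np.sum(np.linalg.norm(H @ M - dst, axis=1) < tol)
        if inliers > best_inliers:
            best, best_inliers = M, inliers
    return best, best_inliers

# synthetic matches: a true affine map + noise + 30% gross outliers
rng = np.random.default_rng(8)
src = rng.uniform(0, 640, size=(200, 2))
M_true = np.array([[1.1, 0.1], [-0.1, 0.9], [12.0, -7.0]])
dst = np.hstack([src, np.ones((200, 1))]) @ M_true + rng.normal(0, 0.5, (200, 2))
dst[:60] += rng.uniform(-80, 80, size=(60, 2))      # false matches
M, k = ransac_affine(src, dst)
print(f"inliers kept: {k}/200")
```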
Pérez-Sánchez, José M.; Rodríguez, Ignacio; Ruiz-Cabello, Jesús
2009-01-01
Abstract Apparent diffusion coefficient (ADC) measurement in the lung using gas magnetic resonance imaging is a promising technique with potential for reflecting changes in lung microstructure. Despite some recent impressive human applications, full interpretation of ADC measures remains an elusive goal, due to a lack of detailed knowledge about the structure dependency of ADC. In an attempt to fill this gap we have performed random walk simulations in a three-dimensional geometrical model of the lung acinus, the distal alveolated sections of the lung tree accounting for ∼90% of the total lung volume. Simulations were carried out adjusting model parameters after published morphological data for the rat peripheral airway system, which predict an ADC behavior as microstructure changes with lung inflation in partial agreement with measured ADCs at different airway pressures. The approach used to relate experimental ADCs to lung microstructural changes does not make any assumption about the cause of the changes, so it could be applied to other scenarios such as chronic obstructive pulmonary disease, lung development, etc. The work presented here predicts numerically for the first time ADC values measured in the lung from independent morphological measures of lung microstructure taken at different inflation stages during the breath cycle. PMID:19619480
Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach
Kudisthalert, Wasu
2018-01-01
Machine learning techniques are becoming popular in virtual screening tasks. One of the powerful machine learning algorithms is the Extreme Learning Machine (ELM), which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single layer feed-forward neural network in conjunction with 16 different similarity coefficients as activation function in the hidden layer. It is known that the performance of conventional ELM is not robust due to random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e. k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation Dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machine, random forest, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint presents the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6. PMID:29652912
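Two of the binary similarity coefficients in question, in their usual fingerprint-literature definitions (a sketch; the bit sets are invented):

```python
# Tanimoto and Sokal/Sneath(1) similarity between binary fingerprints,
# represented here as sets of "on" bit positions.
def tanimoto(a, b):
    return len(a & b) / len(a | b)

def sokal_sneath_1(a, b):
    c = len(a & b)                   # common on-bits
    d = len(a - b) + len(b - a)      # mismatched bits (double-weighted)
    return c / (c + 2 * d)

fp1, fp2 = {1, 4, 7, 9, 12}, {1, 4, 8, 9}
print(tanimoto(fp1, fp2), sokal_sneath_1(fp1, fp2))
```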
NASA Astrophysics Data System (ADS)
Nassirou, Maissarath
Thermal grooving at grain boundaries (GBs) is a capillary-driven evolution of surface topography in the region where the grain boundary emerges at a free surface. The study of these topographic changes can provide insight into surface energetics and, in our particular case, the measurement of surface diffusivity. We have measured the surface diffusion coefficient of 8 mol% Y2O3-ZrO2 by studying the formation of thermal grooves. We studied a total of five bicrystals with well defined orientation relationships: random [110] -60°, random [001] -30°, Sigma13 [001]/{510}, Sigma13 [001]/{320}, and Sigma5 [001]/{210}. Our calculations employed the Herring relation (1951), in which the variation in the chemical potential is related to changes in topography. The samples were annealed at 1300°C and 1400°C for various periods of time. Atomic force microscopy was used to determine the exact geometry of the thermal grooves. A first approach consisted of estimating the diffusion coefficient by using Mullins' equation, y(x=0) = (γ_gb/2γ_s) Γ(5/4) (δ_s D_s γ_s Ω / kT)^(1/4) t^(1/4), where y(x=0) is the groove depth at the GB triple junction, Ω is the atomic volume, γ_s is the surface tension, γ_gb is the grain boundary energy, δ_s is the thickness of the diffusion layer, t is the annealing time, Γ is the gamma function, and D_s is the surface diffusion coefficient. In Mullins' derivation, the atomic structure of the surface was ignored and it was assumed that the surface energy is independent of crystallographic orientation. In the case of zirconia, the surface energy is anisotropic. We describe in this work a new approach to measuring surface diffusivity which accounts for the surface energy anisotropy. The study of these bicrystals emphasizes the effect of grain boundary structure on the surface diffusion coefficient, and it is for that purpose that we selected bicrystals with different tilt axes and angles. The results obtained using the equation set we have developed are compared to those obtained by Mullins, and we show that the anisotropic groove evolution, even when perfectly symmetrical, is much slower than the corresponding isotropic case.
Kaitaniemi, Pekka
2008-04-09
Allometric equations are widely used in many branches of biological science. The potential information content of the normalization constant b in allometric equations of the form Y = bX^a has, however, remained largely neglected. To demonstrate the potential for utilizing this information, I generated a large number of artificial datasets that resembled those frequently encountered in biological studies, i.e., relatively small samples including measurement error or uncontrolled variation. The value of X was allowed to vary randomly within limits describing different data ranges, and a was set to a fixed theoretical value. The constant b was set to a range of values describing the effect of a continuous environmental variable. In addition, normally distributed random error was added to the values of both X and Y. Two different approaches were then used to model the data. The traditional approach estimated both a and b using a regression model, whereas an alternative approach set the exponent a at its theoretical value and estimated only the value of b. Both approaches produced virtually the same model fit, with less than 0.3% difference in the coefficient of determination. Only the alternative approach was able to precisely reproduce the effect of the environmental variable, which was largely lost among noise variation when using the traditional approach. The results show how the value of b can be used as a source of valuable biological information if an appropriate regression model is selected.
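A compact sketch of the two fitting approaches compared (our illustration; the theoretical exponent 0.75 and the noise levels are invented):

```python
# Fit Y = b*X^a two ways: free-exponent log-log regression versus
# fixing a at its theoretical value and estimating only b.
import numpy as np

rng = np.random.default_rng(9)
a_theory = 0.75
X = rng.uniform(1, 10, size=40)                     # small, noisy sample
b_true = 2.0 * np.exp(0.1 * rng.normal(size=40))    # env. effect + noise
Y = b_true * X ** a_theory * np.exp(0.05 * rng.normal(size=40))

# traditional: estimate both a and b from log Y = log b + a log X
a_hat, logb_hat = np.polyfit(np.log(X), np.log(Y), 1)

# alternative: fix a, estimate b only (geometric-mean back-transform)
b_fixed = np.exp(np.mean(np.log(Y) - a_theory * np.log(X)))
print(f"free fit: a={a_hat:.2f}, b={np.exp(logb_hat):.2f}; "
      f"fixed-a fit: b={b_fixed:.2f}")
```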
Fluctuating Navier-Stokes equations for inelastic hard spheres or disks.
Brey, J Javier; Maynar, P; de Soria, M I García
2011-04-01
Starting from the fluctuating Boltzmann equation for smooth inelastic hard spheres or disks, closed equations for the fluctuating hydrodynamic fields to Navier-Stokes order are derived. This requires deriving constitutive relations for both the fluctuating fluxes and the correlations of the random forces. The former are identified as having the same form as the macroscopic average fluxes and involving the same transport coefficients. On the other hand, the random force terms exhibit two peculiarities as compared with their elastic limit for molecular systems. First, they are not white but have some finite relaxation time. Second, their amplitude is not determined by the macroscopic transport coefficients but involves new coefficients. ©2011 American Physical Society
The Analysis of Completely Randomized Factorial Experiments When Observations Are Lost at Random.
ERIC Educational Resources Information Center
Hummel, Thomas J.
An investigation was conducted of the characteristics of two estimation procedures and corresponding test statistics used in the analysis of completely randomized factorial experiments when observations are lost at random. For one estimator, contrast coefficients for cell means did not involve the cell frequencies. For the other, contrast…
Ultrasensitivity and sharp threshold theorems for multisite systems
NASA Astrophysics Data System (ADS)
Dougoud, M.; Mazza, C.; Vinckenbosch, L.
2017-02-01
This work studies the ultrasensitivity of multisite binding processes where ligand molecules can bind to several binding sites. It considers in particular recent models involving complex chemical reactions in allosteric phosphorylation processes and for transcription factors and nucleosomes competing for binding on DNA. New statistics-based formulas for the Hill coefficient and the effective Hill coefficient are provided, and necessary conditions for a system to be ultrasensitive are exhibited. It is first shown that the ultrasensitivity of binding processes can be approached using sharp-threshold theorems which have been developed in applied probability theory and statistical mechanics for studying sharp threshold phenomena in reliability theory, random graph theory and percolation theory. Special classes of binding processes are then introduced and described as density-dependent birth and death processes. New precise large deviation results for the steady state distribution of the process are obtained, which make it possible to show that switch-like ultrasensitive responses are strongly related to the multi-modality of the steady state distribution. Ultrasensitivity occurs if and only if the entropy of the dynamical system has more than one global minimum for some critical ligand concentration. In this case, the Hill coefficient is proportional to the number of binding sites, and the system is highly ultrasensitive. The classical effective Hill coefficient I is extended to a new cooperativity index I_q, for which we recommend the computation of a broad range of values of q instead of just the standard choice I = I_0.9 corresponding to the 10%-90% variation in the dose-response. It is shown that this single choice can sometimes mislead the conclusion by not detecting ultrasensitivity. This new approach allows a better understanding of multisite ultrasensitive systems and provides new tools for the design of such systems.
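A numerical sketch of the classical effective Hill coefficient and its I_q generalization, as we read the definitions from the abstract (the dose-response curve is invented):

```python
# Effective Hill coefficient I_q from the dose range spanning the
# (1-q)..q response fractions; q = 0.9 recovers the classical
# 10%-90% index, log(81)/log(EC90/EC10).
import numpy as np

def effective_hill(dose, resp, q=0.9):
    r = (resp - resp.min()) / (resp.max() - resp.min())
    lo = np.interp(1 - q, r, dose)     # dose at response fraction 1-q
    hi = np.interp(q, r, dose)         # dose at response fraction q
    return np.log((q / (1 - q)) ** 2) / np.log(hi / lo)

x = np.logspace(-2, 2, 1000)
n = 4                                   # true Hill exponent
y = x**n / (1 + x**n)
print(f"I_0.9 ~ {effective_hill(x, y):.2f}  (true n = {n})")
```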
NASA Astrophysics Data System (ADS)
Wilson, Barry T.; Knight, Joseph F.; McRoberts, Ronald E.
2018-03-01
Imagery from the Landsat Program has been used frequently as a source of auxiliary data for modeling land cover, as well as a variety of attributes associated with tree cover. With ready access to all scenes in the archive since 2008 due to the USGS Landsat Data Policy, new approaches to deriving such auxiliary data from dense Landsat time series are required. Several methods have previously been developed for use with finer temporal resolution imagery (e.g. AVHRR and MODIS), including image compositing and harmonic regression using Fourier series. This manuscript presents a study using Minnesota, USA, during the years 2009-2013 as the study area and timeframe. The study examined the relative predictive power of land cover models, in particular those related to tree cover, using predictor variables based solely on composite imagery versus those using estimated harmonic regression coefficients. The study used two common non-parametric modeling approaches (i.e. k-nearest neighbors and random forests) to fit classification and regression models of multiple attributes measured on USFS Forest Inventory and Analysis plots, using all available Landsat imagery for the study area and timeframe. The estimated Fourier coefficients developed by harmonic regression of tasseled cap transformation time series data were shown to be correlated with land cover, including tree cover. Regression models using estimated Fourier coefficients as predictor variables showed a two- to threefold increase in explained variance for a small set of continuous response variables, relative to comparable models using monthly image composites. Similarly, the overall accuracies of classification models using the estimated Fourier coefficients were approximately 10-20 percentage points higher than those of the models using the image composites, with corresponding individual class accuracies between six and 45 percentage points higher.
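A minimal sketch of the harmonic-regression step (our illustration; the period, order, and data are invented): the fitted Fourier coefficients are what would feed the kNN and random forest models.

```python
# Harmonic regression: least-squares fit of sine/cosine terms to an
# irregularly sampled per-pixel time series.
import numpy as np

rng = np.random.default_rng(10)
doy = np.sort(rng.uniform(0, 365, size=60))          # acquisition days
y = (0.3 + 0.2 * np.cos(2 * np.pi * doy / 365 - 1.0)
     + 0.02 * rng.normal(size=60))                    # e.g. a tasseled cap index

def harmonic_design(t, period=365.0, order=2):
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        w = 2 * np.pi * k * t / period
        cols += [np.cos(w), np.sin(w)]
    return np.column_stack(cols)

A = harmonic_design(doy)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)          # Fourier coefficients
print(coef)    # these become predictor variables for kNN / random forests
```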
Thermoelectricity near Anderson localization transitions
NASA Astrophysics Data System (ADS)
Yamamoto, Kaoru; Aharony, Amnon; Entin-Wohlman, Ora; Hatano, Naomichi
2017-10-01
The electronic thermoelectric coefficients are analyzed in the vicinity of one and two Anderson localization thresholds in three dimensions. For a single mobility edge, we correct and extend previous studies and find universal approximants which allow us to deduce the critical exponent for the zero-temperature conductivity from thermoelectric measurements. In particular, we find that at nonzero low temperatures the Seebeck coefficient and the thermoelectric efficiency can be very large on the "insulating" side, for chemical potentials below the (zero-temperature) localization threshold. Corrections to the leading power-law singularity in the zero-temperature conductivity are shown to introduce nonuniversal temperature-dependent corrections to the otherwise universal functions which describe the Seebeck coefficient, the figure of merit, and the Wiedemann-Franz ratio. Next, the thermoelectric coefficients are shown to have interesting dependences on the system size. While the Seebeck coefficient decreases with decreasing size, the figure of merit first decreases but then increases, while the Wiedemann-Franz ratio first increases but then decreases as the size decreases. Small (but finite) samples may thus have larger thermoelectric efficiencies. In the last part we study thermoelectricity in systems with a pair of localization edges, the ubiquitous situation in random systems near the centers of electronic energy bands. As the disorder increases, the two thresholds approach each other, and then the Seebeck coefficient and the figure of merit increase significantly, as expected from the general arguments of Mahan and Sofo [G. D. Mahan and J. O. Sofo, Proc. Natl. Acad. Sci. USA 93, 7436 (1996), 10.1073/pnas.93.15.7436] for a narrow energy range of the zero-temperature metallic behavior.
An infrared-visible image fusion scheme based on NSCT and compressed sensing
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Maldague, Xavier
2015-05-01
Image fusion, currently a research hot spot in the field of infrared computer vision, has been developed using a wide variety of methods. Traditional image fusion algorithms tend to introduce problems such as data storage shortages and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and still reconstructs the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients are obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, an adaptive regional energy weighting rule is utilized, so only the high-frequency coefficients are specially measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block can be fused via the absolute maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Eventually, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results after experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets and extracts more useful information.
Generalized epidemic process on modular networks.
Chung, Kihong; Baek, Yongjoo; Kim, Daniel; Ha, Meesoon; Jeong, Hawoong
2014-05-01
Social reinforcement and modular structure are two salient features observed in the spreading of behavior through social contacts. In order to investigate the interplay between these two features, we study the generalized epidemic process on modular networks with equal-sized finite communities and adjustable modularity. Using the analytical approach originally applied to clique-based random networks, we show that the system exhibits a bond-percolation type continuous phase transition for weak social reinforcement, whereas a discontinuous phase transition occurs for sufficiently strong social reinforcement. Our findings are numerically verified using the finite-size scaling analysis and the crossings of the bimodality coefficient.
Data-driven parameterization of the generalized Langevin equation
Lei, Huan; Baker, Nathan A.; Li, Xiantao
2016-11-29
We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grained variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
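As an illustration of the Markovian-embedding idea, consider the lowest-order rational approximation, a single pole in the Laplace domain, i.e. an exponential kernel K(t) = (g/tau) exp(-t/tau). It can be embedded with one auxiliary Ornstein-Uhlenbeck force whose covariance satisfies the fluctuation-dissipation theorem (a sketch under assumed parameters, not the authors' code):

```python
# Extended (memoryless) system equivalent to a GLE with exponential
# memory: m dv/dt = s;  ds = (-s/tau - (g/tau) v) dt + noise,
# so that <v^2> relaxes to the equipartition value kT/m.
import numpy as np

rng = np.random.default_rng(11)
kT, m, g, tau = 1.0, 1.0, 2.0, 0.5     # all values assumed for the demo
dt, n = 1e-3, 500_000

xi = rng.normal(size=n) * np.sqrt(2 * g * kT / tau**2 * dt)
v, s = 0.0, 0.0
v2_sum, count = 0.0, 0
for i in range(n):
    v += s / m * dt                     # auxiliary force drives the velocity
    s += (-s / tau - g / tau * v) * dt + xi[i]
    if i > n // 2:                      # discard the transient
        v2_sum += v * v
        count += 1
print(f"<v^2> = {v2_sum / count:.3f} (equipartition predicts {kT / m})")
```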
Physics of ultra-high bioproductivity in algal photobioreactors
NASA Astrophysics Data System (ADS)
Greenwald, Efrat; Gordon, Jeffrey M.; Zarmi, Yair
2012-04-01
Cultivating algae at high densities in thin photobioreactors engenders time scales for random cell motion that approach photosynthetic rate-limiting time scales. This synchronization allows bioproductivity above that achieved with conventional strategies. We show that a diffusion model for cell motion (1) accounts for high bioproductivity at irradiance values previously deemed restricted by photoinhibition, (2) predicts the existence of optimal culture densities and their dependence on irradiance, consistent with available data, (3) accounts for the observed degree to which mixing improves bioproductivity, and (4) provides an estimate of effective cell diffusion coefficients, in accord with independent hydrodynamic estimates.
Texturing of sodium bismuth titanate-barium titanate ceramics by templated grain growth
NASA Astrophysics Data System (ADS)
Yilmaz, Huseyin
2002-01-01
Sodium bismuth titanate modified with barium titanate, (Na1/2Bi1/2)TiO3-BaTiO3 (NBT-BT), is a candidate lead-free piezoelectric material which has been shown to have comparatively high piezoelectric response. In this work, textured (Na1/2Bi1/2)TiO3-BaTiO3 (5.5 mol% BaTiO3) ceramics with <100>pc orientation (where pc denotes the pseudocubic perovskite cell) were fabricated by Templated Grain Growth (TGG) or Reactive Templated Grain Growth (RTGG) using anisotropically shaped template particles. In the case of TGG, molten-salt-synthesized SrTiO3 platelets were tape cast with a (Na1/2Bi1/2)TiO3-5.5 mol% BaTiO3 powder and sintered at 1200°C for up to 12 hours. For the RTGG approach, Bi4Ti3O12 (BiT) platelets were tape cast with a Na2CO3, Bi2O3, TiO2, and BaCO3 powder mixture and reactively sintered. The TGG approach using SrTiO3 templates gave stronger texture along [001] than the RTGG approach using BiT templates. The textured ceramics were characterized by X-ray and electron backscatter diffraction for the quality of texture. The texture was quantified by the Lotgering factor, rocking curves, pole figures, inverse pole figures, and orientation imaging microscopy. Electrical and electromechanical characterization of randomly oriented and <001>pc textured (Na1/2Bi1/2)TiO3-5.5 mol% BaTiO3 rhombohedral ceramics showed 0.26% strain at 70 kV/cm; d33 coefficients over 500 pC/N were obtained for highly textured samples (f ~ 90%), while the piezoelectric coefficient measured by the Berlincourt method was d33 ~ 200 pC/N. The materials show considerable hysteresis. The presence of hysteresis in the unipolar strain-electric field curve is probably linked to the ferroelastic phase transition seen in the (Na1/2Bi1/2)TiO3 system on cooling from high temperature at ~520°C. The macroscopic physical properties (remanent polarization, dielectric constant, and piezoelectric coefficient) of random and textured ([001]pc) rhombohedral perovskites were estimated by linear averaging of single crystal data. However, the complete polarization, dielectric, and piezoelectric tensors are not available for NBT-BT single crystals. Therefore, the properties of lead-based (PZT 52/48) rhombohedral ferroelectric single-domain single crystals, whose properties (polarization, dielectric and piezoelectric) were computed using Landau-Ginsburg-Devonshire phenomenological theory (by Haun et al.), were used in the calculations for the random and textured cases. (Abstract shortened by UMI.)
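For reference, the Lotgering factor used above has a standard closed form; a sketch with invented XRD intensities:

```python
# Lotgering factor f = (P - P0) / (1 - P0), where P is the fraction of
# (00l) intensity in the textured sample and P0 the same fraction in a
# randomly oriented reference.
def lotgering(I_00l_textured, I_total_textured, I_00l_random, I_total_random):
    P = I_00l_textured / I_total_textured
    P0 = I_00l_random / I_total_random
    return (P - P0) / (1 - P0)

print(f"f = {lotgering(85.0, 100.0, 12.0, 100.0):.2f}")   # ~0.83
```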
Weibull crack density coefficient for polydimensional stress states
NASA Technical Reports Server (NTRS)
Gross, Bernard; Gyekenyesi, John P.
1989-01-01
A structural ceramic analysis and reliability evaluation code has recently been developed encompassing volume and surface flaw induced fracture, modeled by the two-parameter Weibull probability density function. A segment of the software involves computing the Weibull polydimensional stress state crack density coefficient from uniaxial stress experimental fracture data. The relationship of the polydimensional stress coefficient to the uniaxial stress coefficient is derived for a shear-insensitive material with a random surface flaw population.
Berendsen, Annette J; Groenier, Klaas H; de Jong, G Majella; Meyboom-de Jong, Betty; van der Veen, Willem Jan; Dekker, Janny; de Waal, Margot W M; Schuling, Jan
2009-10-01
Development and validation of a questionnaire that measures patients' experiences of collaboration between general practitioners (GPs) and specialists. A questionnaire was developed using the method of the consumer quality index and validated in a cross-sectional study among a random sample of patients referred to medical specialists in the Netherlands. Validation included factor analysis, assessment of internal consistency, and evaluation of discriminative ability. The response rate was 65% (1404 patients). Exploratory factor analysis indicated that four domains could be distinguished (i.e. GP Approach; GP Referral; Specialist; Collaboration). Cronbach's alpha coefficients ranged from 0.51 to 0.93, indicating sufficient internal consistency to make comparison of groups of respondents possible. The Pearson correlation coefficients between the domains were <0.4, except between the domains GP Approach and GP Referral. All domains clearly produced discriminating scores for groups with different characteristics. The Consumer Quality Index (CQ-index) Continuum of Care can be a useful instrument to assess aspects of the collaboration between GPs and specialists from the patients' perspective. It can be used to give feedback to both medical professionals and policy makers. Such feedback creates an opportunity for implementing specific improvements and evaluating quality improvement projects. © 2009 Elsevier Ireland Ltd.
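The internal-consistency statistic reported above is Cronbach's alpha; a sketch with an invented item-score matrix:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(12)
latent = rng.normal(size=(300, 1))
scores = latent + 0.8 * rng.normal(size=(300, 5))   # 5 correlated items
print(f"alpha = {cronbach_alpha(scores):.2f}")
```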
A Bayesian estimate of the concordance correlation coefficient with skewed data.
Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir
2015-01-01
Concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that data is normally distributed. This assumption, however, does not apply to skewed data sets. While methods for the estimation of the CCC of skewed data sets have been introduced and studied, the Bayesian approach and its comparison with the previous methods has been lacking. In this study, we propose a Bayesian method for the estimation of the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages. It tends to outperform the best method studied before when the variation of the data is mainly from the random subject effect instead of error. Furthermore, it allows for greater flexibility in application by enabling incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of this new approach is demonstrated using simulation as well as real-life biomarker data sets used in an electroencephalography clinical study. The implementation of the Bayesian method is accessible through the Comprehensive R Archive Network. Copyright © 2015 John Wiley & Sons, Ltd.
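For reference, the sample CCC in its usual closed form (a sketch; the paper's Bayesian estimator for skewed data is not implemented here):

```python
# Concordance correlation coefficient:
# CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2).
import numpy as np

def ccc(x, y):
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(13)
truth = rng.normal(size=200)
rater = 0.9 * truth + 0.1 + 0.3 * rng.normal(size=200)
print(f"CCC = {ccc(truth, rater):.3f}")
```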
Bayesian dynamic modeling of time series of dengue disease case counts
López-Quílez, Antonio; Torres-Prieto, Alexander
2017-01-01
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model included first-order random walk time-varying coefficients for both the calendar trend and the meteorological variables. Besides the computational challenges, interpreting the results requires a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health. PMID:28671941
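To make the model structure concrete, the sketch below simulates counts from a dynamic Poisson log-link model in which the coefficient of a single meteorological covariate follows a first-order random walk. The covariate, innovation scale, and parameter values are illustrative assumptions; fitting such a model to real data would still require the MCMC machinery described above:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 120                                                  # weeks
temp = 25 + 3 * np.sin(2 * np.pi * np.arange(T) / 52)   # synthetic temperature

# First-order random-walk time-varying coefficient: beta_t = beta_{t-1} + w_t
beta = 0.05 + np.cumsum(rng.normal(0, 0.01, T))
alpha = 3.0                                              # constant intercept (log scale)

log_mu = alpha + beta * (temp - temp.mean())
cases = rng.poisson(np.exp(log_mu))                      # weekly dengue-like counts
print(cases[:10])
```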
A Comparison of Machine Learning Approaches for Corn Yield Estimation
NASA Astrophysics Data System (ADS)
Kim, N.; Lee, Y. W.
2017-12-01
Machine learning is an efficient empirical method for classification and prediction, and it offers another approach to crop yield estimation. The objective of this study is to estimate corn yield in the Midwestern United States by employing machine learning approaches such as the support vector machine (SVM), random forest (RF), and deep neural networks (DNN), and to perform a comprehensive comparison of their results. We constructed the database using satellite images from MODIS, climate data from the PRISM climate group, and GLDAS soil moisture data. In addition, to examine the seasonal sensitivities of corn yields, two period groups were set up: May to September (MJJAS) and July and August (JA). Overall, the DNN showed the highest accuracy in terms of the correlation coefficient for the two period groups. The differences between our predictions and USDA yield statistics were about 10-11%.
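A minimal sketch of such a three-way comparison with scikit-learn follows; the synthetic features merely stand in for the MODIS, PRISM, and GLDAS predictors, and MLPRegressor stands in for the DNN:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))   # stand-ins for vegetation, climate, soil moisture
y = 3 * X[:, 0] - 2 * X[:, 1] ** 2 + X[:, 2] * X[:, 3] + rng.normal(0, 0.5, 500)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for name, model in [("SVM", SVR()),
                    ("RF", RandomForestRegressor(random_state=0)),
                    ("DNN", MLPRegressor((64, 64), max_iter=2000, random_state=0))]:
    model.fit(Xtr, ytr)
    r = np.corrcoef(yte, model.predict(Xte))[0, 1]   # correlation coefficient
    print(f"{name}: r = {r:.3f}")
```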
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
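As one concrete instance of the techniques alluded to, antithetic variates pair each sample with its mirrored counterpart so that fluctuations partially cancel. The sketch below applies the idea to a toy functional of a uniform random input rather than to an actual corrector problem:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
f = lambda u: np.exp(u)   # toy stand-in for an "effective coefficient" functional

u = rng.random(n)
plain = f(u)                                         # standard Monte Carlo
anti = 0.5 * (f(u[:n // 2]) + f(1 - u[:n // 2]))     # antithetic pairs, same budget

print("variance of plain MC mean:   ", plain.var() / n)
print("variance of antithetic mean: ", anti.var() / (n // 2))
```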
Liu, Xian; Engel, Charles C
2012-12-20
Researchers often encounter longitudinal health data characterized by three or more ordinal or nominal categories. Random-effects multinomial logit models are generally applied to account for the potential lack of independence inherent in such clustered data. When parameter estimates are used to describe longitudinal processes, however, random effects, both between and within individuals, need to be retransformed to correctly predict outcome probabilities. This study attempts to go beyond existing work by developing a retransformation method that derives longitudinal growth trajectories of unbiased health probabilities. We estimated variances of the predicted probabilities by using the delta method. Additionally, we transformed the covariates' regression coefficients on the multinomial logit function, not substantively meaningful, to the conditional effects on the predicted probabilities. The empirical illustration uses the longitudinal data from the Asset and Health Dynamics among the Oldest Old. Our analysis compared three sets of the predicted probabilities of three health states at six time points, obtained from, respectively, the retransformation method, the best linear unbiased prediction, and the fixed-effects approach. The results demonstrate that neglect of retransforming random errors in the random-effects multinomial logit model results in severely biased longitudinal trajectories of health probabilities as well as overestimated effects of covariates on the probabilities. Copyright © 2012 John Wiley & Sons, Ltd.
Min, Yu-Sun; Chang, Yongmin; Park, Jang Woo; Lee, Jong-Min; Cha, Jungho; Yang, Jin-Ju; Kim, Chul-Hyun; Hwang, Jong-Moon; Yoo, Ji-Na; Jung, Tae-Du
2015-06-01
To investigate the global functional reorganization of the brain following spinal cord injury with a graph theory based approach, by creating whole-brain functional connectivity networks from resting-state functional magnetic resonance imaging (rs-fMRI), characterizing the reorganization of these networks using graph theoretical metrics, and comparing these metrics between patients with spinal cord injury (SCI) and age-matched controls. Twenty patients with incomplete cervical SCI (14 males, 6 females; age, 55±14.1 years) and 20 healthy subjects (10 males, 10 females; age, 52.9±13.6 years) participated in this study. To analyze the characteristics of the whole-brain network constructed with functional connectivity using rs-fMRI, graph theoretical measures were calculated, including clustering coefficient, characteristic path length, global efficiency, and small-worldness. Clustering coefficient, global efficiency and small-worldness did not show any difference between controls and SCI patients across all density ranges. The characteristic path length normalized to random networks was higher in SCI patients than in controls and reached statistical significance at 12%-13% density (p<0.05, uncorrected). The graph theoretical approach to brain functional connectivity might be helpful to reveal information processing after SCI. These findings imply that patients with SCI can build on preserved, competent brain control. Further analyses, such as topological rearrangement and hub region identification, will be needed for a better understanding of neuroplasticity in patients with SCI.
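The graph metrics named above are straightforward to reproduce with networkx; in this sketch a small-world graph stands in for a thresholded rs-fMRI connectivity network, and the normalization against a size-matched random graph mirrors the comparison described in the abstract:

```python
import networkx as nx

# Small-world graph standing in for a thresholded functional connectivity network
G = nx.connected_watts_strogatz_graph(90, 6, 0.1, seed=0)

# Size-matched Erdos-Renyi reference; keep the largest component if disconnected
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)
if not nx.is_connected(R):
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()

C, Cr = nx.average_clustering(G), nx.average_clustering(R)
L, Lr = nx.average_shortest_path_length(G), nx.average_shortest_path_length(R)

print("clustering coefficient:", round(C, 3))
print("characteristic path length:", round(L, 3))
print("global efficiency:", round(nx.global_efficiency(G), 3))
print("normalized path length:", round(L / Lr, 3))
print("small-worldness sigma:", round((C / Cr) / (L / Lr), 3))
```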
Willan, Andrew R
2016-07-05
The Pessary for the Prevention of Preterm Birth Study (PS3) is an international, multicenter, randomized clinical trial designed to examine the effectiveness of the Arabin pessary in preventing preterm birth in pregnant women with a short cervix. During the design of the study, two methodological issues regarding power and sample size were raised. Since treatment in the Standard Arm will vary between centers, it is anticipated that so too will the probability of preterm birth in that arm. This will likely result in a treatment by center interaction, and the issue of how this will affect the sample size requirements was raised. The sample size requirements to examine the effect of the pessary on the baby's clinical outcome were prohibitively high, so the second issue is how best to examine the effect on clinical outcome. The approaches taken to address these issues are presented. Simulation and sensitivity analysis were used to address the sample size issue. The probability of preterm birth in the Standard Arm was assumed to vary between centers following a Beta distribution with a mean of 0.3 and a coefficient of variation of 0.3. To address the second issue, a Bayesian decision model is proposed that combines the information regarding the between-treatment difference in the probability of preterm birth from PS3 with the data from the Multiple Courses of Antenatal Corticosteroids for Preterm Birth Study that relate preterm birth and perinatal mortality/morbidity. The approach provides a between-treatment comparison with respect to the probability of a bad clinical outcome. The performance of the approach was assessed using simulation and sensitivity analysis. Accounting for a possible treatment by center interaction increased the sample size from 540 to 700 patients per arm for the base case. The sample size requirements increase with the coefficient of variation and decrease with the number of centers. Under the same assumptions used for determining the sample size requirements, the simulated mean probability that the pessary reduces the risk of perinatal mortality/morbidity is 0.98. The simulated mean decreased with the coefficient of variation and increased with the number of clinical sites. Employing simulation and sensitivity analysis is a useful approach for determining sample size requirements while accounting for the additional uncertainty due to a treatment by center interaction. Using a surrogate outcome in conjunction with a Bayesian decision model is an efficient way to compare important clinical outcomes in a randomized clinical trial in situations where the direct approach requires a prohibitively high sample size.
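A sketch of the simulation logic behind the first issue follows: center-specific control-arm risks are drawn from a Beta distribution with mean 0.3 and coefficient of variation 0.3, as stated above, and power is estimated for an assumed treatment risk ratio. The number of centers, the 0.7 risk ratio, and the simple two-proportion z-test are illustrative assumptions, not the trial's actual analysis plan:

```python
import numpy as np

rng = np.random.default_rng(0)
mean, cv, centers, n_per_arm = 0.3, 0.3, 20, 700   # Beta mean/CV from the abstract
var = (cv * mean) ** 2
kappa = mean * (1 - mean) / var - 1                # method-of-moments Beta(a, b)
a, b = mean * kappa, (1 - mean) * kappa

def one_trial(risk_ratio=0.7):                     # assumed treatment effect
    p_std = rng.beta(a, b, centers)                # center-specific control risks
    p_pes = p_std * risk_ratio                     # pessary-arm risks
    m = n_per_arm // centers                       # patients per center per arm
    x_std = rng.binomial(m, p_std).sum()
    x_pes = rng.binomial(m, p_pes).sum()
    p1, p2 = x_std / n_per_arm, x_pes / n_per_arm  # pooled two-proportion z-test
    p = (x_std + x_pes) / (2 * n_per_arm)
    z = (p1 - p2) / np.sqrt(2 * p * (1 - p) / n_per_arm)
    return abs(z) > 1.96

power = np.mean([one_trial() for _ in range(2000)])
print(f"simulated power ≈ {power:.2f}")
```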
Assessing FAO-56 dual crop coefficients using eddy covariance flux partitioning
USDA-ARS?s Scientific Manuscript database
Current approaches to scheduling crop irrigation using reference evapotranspiration (ET0) recommend using a dual-coefficient approach using basal (Kcb) and soil (Ke) coefficients along with a stress coefficient (Ks) to model crop evapotranspiration (ETc), [e.g. ETc=(Ks*Kcb+Ke)*ET0]. However, determi...
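The bracketed equation translates directly into code; the coefficient values below are illustrative only:

```python
def crop_et(et0, kcb, ke, ks=1.0):
    """FAO-56 dual-coefficient crop evapotranspiration: ETc = (Ks*Kcb + Ke)*ET0."""
    return (ks * kcb + ke) * et0

# Illustrative mid-season values (mm/day for ET0; coefficients dimensionless)
print(crop_et(et0=6.2, kcb=1.10, ke=0.15, ks=0.9))
```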
Binzoni, T; Leung, T S; Rüfenacht, D; Delpy, D T
2006-01-21
Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware.
Improving the prospects of cleavage-based nanopore sequencing engines
NASA Astrophysics Data System (ADS)
Brady, Kyle T.; Reiner, Joseph E.
2015-08-01
Recently proposed methods for DNA sequencing involve the use of cleavage-based enzymes attached to the opening of a nanopore. The idea is that DNA interacting with either an exonuclease or polymerase protein will lead to a small molecule being cleaved near the mouth of the nanopore, and subsequent entry into the pore will yield information about the DNA sequence. The prospects for this approach seem promising, but it has been shown that diffusion related effects impose a limit on the capture probability of molecules by the pore, which limits the efficacy of the technique. Here, we revisit the problem with the goal of optimizing the capture probability via a step decrease in the nucleotide diffusion coefficient between the pore and bulk solutions. It is shown through random walk simulations and a simplified analytical model that decreasing the molecule's diffusion coefficient in the bulk relative to its value in the pore increases the nucleotide capture probability. Specifically, we show that at sufficiently high applied transmembrane potentials (≥100 mV), increasing the potential by a factor f is equivalent to decreasing the diffusion coefficient ratio Dbulk/Dpore by the same factor f. This suggests a promising route toward implementation of cleavage-based sequencing protocols. We also discuss the feasibility of forming a step function in the diffusion coefficient across the pore-bulk interface.
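The qualitative effect can be reproduced with a one-dimensional walker whose diffusion coefficient steps at the pore mouth. Here the drift inside the pore stands in for the electrophoretic force, and the boundary positions, time step, and parameter values are illustrative assumptions rather than the paper's simulation settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def capture_prob(D_ratio, v=2.0, D_pore=1.0, n=2000, dt=1e-3, steps=4000):
    """Fraction of walkers reaching the pore bottom (x = -1) within the run time.

    Walkers start at the pore mouth (x = 0). Inside the pore (x < 0) a drift v
    models the field; the diffusion coefficient steps from D_pore (x < 0) to
    D_bulk = D_ratio * D_pore (x >= 0). Interface corrections are neglected.
    """
    x = np.zeros(n)
    done = np.zeros(n, bool)
    for _ in range(steps):
        D = np.where(x < 0, D_pore, D_ratio * D_pore)
        drift = np.where(x < 0, -v * dt, 0.0)
        step = drift + np.sqrt(2 * D * dt) * rng.normal(size=n)
        x = np.where(done, x, x + step)
        done |= x < -1.0
    return done.mean()

for ratio in (1.0, 0.5, 0.1):
    print(f"D_bulk/D_pore = {ratio}: capture ≈ {capture_prob(ratio):.2f}")
```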
Random matrix approach to the dynamics of stock inventory variations
NASA Astrophysics Data System (ADS)
Zhou, Wei-Xing; Mu, Guo-Hua; Kertész, János
2012-09-01
It is well accepted that investors can be classified into groups owing to distinct trading strategies, which forms the basic assumption of many agent-based models for financial markets when agents are not zero-intelligent. However, empirical tests of these assumptions are still very rare due to the lack of order flow data. Here we adopt the order flow data of Chinese stocks to tackle this problem by investigating the dynamics of inventory variations for individual and institutional investors that contain rich information about the trading behavior of investors and have a crucial influence on price fluctuations. We find that the distributions of the cross-correlation coefficients C_ij have power-law forms in the bulk that are followed by exponential tails, and there are more positive coefficients than negative ones. In addition, it is more likely that two individuals or two institutions have a stronger inventory variation correlation than one individual and one institution. We find that the largest and the second largest eigenvalues (λ1 and λ2) of the correlation matrix cannot be explained by random matrix theory and the projections of investors' inventory variations on the first eigenvector u(λ1) are linearly correlated with stock returns, where individual investors play a dominating role. The investors are classified into three categories based on the cross-correlation coefficients C_VR between inventory variations and stock returns. A strong Granger causality is unveiled from stock returns to inventory variations, which means that a large proportion of individuals hold the reversing trading strategy and a small part of individuals hold the trending strategy. Our empirical findings have scientific significance in the understanding of investors' trading behavior and in the construction of agent-based models for emerging stock markets.
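The random-matrix benchmark used in such analyses is the Marchenko-Pastur upper edge for the eigenvalues of a pure-noise correlation matrix. The sketch below shows a common "market mode" eigenvalue escaping that bound on synthetic inventory-variation data; the factor structure and dimensions are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500                        # investors x trading periods
q = N / T
lam_max_rmt = (1 + np.sqrt(q)) ** 2    # Marchenko-Pastur upper edge

# Inventory variations: shared "market mode" plus idiosyncratic noise
common = rng.normal(size=T)
X = 0.3 * common + rng.normal(size=(N, T))
X = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)
C = X @ X.T / T                        # cross-correlation matrix

eig = np.linalg.eigvalsh(C)            # ascending order
print("two largest eigenvalues:", eig[-2:], "| RMT bound:", round(lam_max_rmt, 3))
```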
NASA Astrophysics Data System (ADS)
Winder, Anthony J.; Siemonsen, Susanne; Flottmann, Fabian; Fiehler, Jens; Forkert, Nils D.
2017-03-01
Voxel-based tissue outcome prediction in acute ischemic stroke patients is highly relevant for both clinical routine and research. Previous research has shown that features extracted from baseline multi-parametric MRI datasets have a high predictive value and can be used for the training of classifiers, which can generate tissue outcome predictions for both intravenous and conservative treatments. However, with the recent advent and popularization of intra-arterial thrombectomy treatment, novel research specifically addressing the utility of predictive classifiers for thrombectomy intervention is necessary for a holistic understanding of current stroke treatment options. The aim of this work was to develop three clinically viable tissue outcome prediction models using approximate nearest-neighbor, generalized linear model, and random decision forest approaches and to evaluate the accuracy of predicting tissue outcome after intra-arterial treatment. Therefore, the three machine learning models were trained, evaluated, and compared using datasets of 42 acute ischemic stroke patients treated with intra-arterial thrombectomy. Classifier training utilized eight voxel-based features extracted from baseline MRI datasets and five global features. Evaluation of classifier-based predictions was performed via comparison to the known tissue outcome, which was determined in follow-up imaging, using the Dice coefficient and leave-one-patient-out cross validation. The random decision forest prediction model led to the best tissue outcome predictions with a mean Dice coefficient of 0.37. The approximate nearest-neighbor and generalized linear model performed equally suboptimally with average Dice coefficients of 0.28 and 0.27 respectively, suggesting that both non-linearity and machine learning are desirable properties of a classifier well-suited to the intra-arterial tissue outcome prediction problem.
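The Dice coefficient used for evaluation is simple to compute from binary masks; a minimal sketch with toy rectangles in place of real lesion segmentations:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

truth = np.zeros((64, 64), int); truth[20:40, 20:40] = 1   # follow-up lesion mask
pred = np.zeros((64, 64), int); pred[24:44, 22:42] = 1     # classifier prediction
print(f"Dice = {dice(pred, truth):.2f}")
```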
Measurement of the absorption coefficient using the sound-intensity technique
NASA Technical Reports Server (NTRS)
Atwal, M.; Bernhard, R.
1984-01-01
The possibility of using the sound intensity technique to measure the absorption coefficient of a material is investigated. This technique determines the absorption coefficient from the intensity incident on the sample and the net intensity reflected by the sample. Results obtained by this technique are compared with the standard techniques of measuring the change in reverberation time and the standing wave ratio in a tube, which yield the random-incidence and normal-incidence absorption coefficients, respectively.
Alexanderian, Alen; Zhu, Liang; Salloum, Maher; Ma, Ronghui; Yu, Meilin
2017-09-01
In this study, statistical models are developed for modeling uncertain heterogeneous permeability and porosity in tumors, and the resulting uncertainties in pressure and velocity fields during an intratumoral injection are quantified using a nonintrusive spectral uncertainty quantification (UQ) method. Specifically, the uncertain permeability is modeled as a log-Gaussian random field, represented using a truncated Karhunen-Loève (KL) expansion, and the uncertain porosity is modeled as a log-normal random variable. The efficacy of the developed statistical models is validated by simulating the concentration fields with permeability and porosity of different uncertainty levels. The irregularity in the concentration field bears reasonable visual agreement with that in MicroCT images from experiments. The pressure and velocity fields are represented using polynomial chaos (PC) expansions to enable efficient computation of their statistical properties. The coefficients in the PC expansion are computed using a nonintrusive spectral projection method with the Smolyak sparse quadrature. The developed UQ approach is then used to quantify the uncertainties in the random pressure and velocity fields. A global sensitivity analysis is also performed to assess the contribution of individual KL modes of the log-permeability field to the total variance of the pressure field. It is demonstrated that the developed UQ approach can effectively quantify the flow uncertainties induced by uncertain material properties of the tumor.
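A one-dimensional sketch of the truncated KL representation of a log-Gaussian permeability field follows. The exponential covariance, correlation length, and baseline permeability are illustrative assumptions (the study itself works on tumor geometries in higher dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)     # 1-D domain for illustration
ell, sigma = 0.1, 0.5          # correlation length, std of the log-permeability
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)  # exponential covariance

w, v = np.linalg.eigh(C)       # KL modes are the covariance eigenpairs
w, v = w[::-1], v[:, ::-1]     # sort descending
M = 20                         # truncation order

xi = rng.normal(size=M)        # standard normal KL coordinates
log_k = np.log(1e-12) + v[:, :M] @ (np.sqrt(np.maximum(w[:M], 0)) * xi)
k = np.exp(log_k)              # one log-Gaussian permeability realization
print("variance fraction captured by", M, "modes:", round(w[:M].sum() / w.sum(), 3))
```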
Confidence bounds for normal and lognormal distribution coefficients of variation
Steve Verrill
2003-01-01
This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
Active microwave remote sensing of an anisotropic random medium layer
NASA Technical Reports Server (NTRS)
Lee, J. K.; Kong, J. A.
1985-01-01
A two-layer anisotropic random medium model has been developed to study the active remote sensing of the earth. The dyadic Green's function for a two-layer anisotropic medium is developed and used in conjunction with the first-order Born approximation to calculate the backscattering coefficients. It is shown that strong cross-polarization occurs in the single scattering process and is indispensable in the interpretation of radar measurements of sea ice at different frequencies, polarizations, and viewing angles. The effects of anisotropy on the angular responses of backscattering coefficients are also illustrated.
Modeling of Thermal Phase Noise in a Solid Core Photonic Crystal Fiber-Optic Gyroscope.
Song, Ningfang; Ma, Kun; Jin, Jing; Teng, Fei; Cai, Wei
2017-10-26
A theoretical model of the thermal phase noise in a square-wave modulated solid core photonic crystal fiber-optic gyroscope has been established, and then verified by measurements. The results demonstrate a good agreement between theory and experiment. The contribution of the thermal phase noise to the random walk coefficient of the gyroscope is derived. A fiber coil with 2.8 km length is used in the experimental solid core photonic crystal fiber-optic gyroscope, showing a random walk coefficient of 9.25 × 10⁻⁵ deg/√h.
USDA-ARS?s Scientific Manuscript database
Current approaches to scheduling crop irrigation using reference evapotranspiration (ET0) recommend using a dual-coefficient approach using basal (Kcb) and soil (Ke) coefficients along with a stress coefficient (Ks) to model crop evapotranspiration (ETc), [e.g. ETc=(Ks*Kcb+Ke)*ET0]. However, indepe...
Jiang, Xunpeng; Yang, Zengling; Han, Lujia
2014-07-01
Contaminated meat and bone meal (MBM) in animal feedstuff has been the source of bovine spongiform encephalopathy (BSE) disease in cattle, leading to a ban in its use, so methods for its detection are essential. In this study, five pure feed and five pure MBM samples were used to prepare two sets of sample arrangements: set A for investigating the discrimination of individual feed/MBM particles and set B for larger numbers of overlapping particles. The two sets were used to test a Markov random field (MRF)-based approach. A Fourier transform infrared (FT-IR) imaging system was used for data acquisition. The spatial resolution of the near-infrared (NIR) spectroscopic image was 25 μm × 25 μm. Each spectrum was the average of 16 scans across the wavenumber range 7000-4000 cm⁻¹, at intervals of 8 cm⁻¹. This study introduces an innovative approach to analyzing NIR spectroscopic images: an MRF-based approach has been developed using the iterated conditional mode (ICM) algorithm, integrating initial labeling-derived results from support vector machine discriminant analysis (SVMDA) and observation data derived from the results of principal component analysis (PCA). The results showed that MBM covered by feed could be successfully recognized with an overall accuracy of 86.59% and a Kappa coefficient of 0.68. Compared with conventional methods, the MRF-based approach is capable of extracting spectral information combined with spatial information from NIR spectroscopic images. This new approach enhances the identification of MBM using NIR spectroscopic imaging.
A scattering model for forested area
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.
1988-01-01
A forested area is modeled as a volume of randomly oriented and distributed disc-shaped, or needle-shaped leaves shading a distribution of branches modeled as randomly oriented finite-length, dielectric cylinders above an irregular soil surface. Since the radii of branches have a wide range of sizes, the model only requires the length of a branch to be large compared with its radius, which may be any size relative to the incident wavelength. In addition, the model also assumes the thickness of a disc-shaped leaf or the radius of a needle-shaped leaf is much smaller than the electromagnetic wavelength. The scattering phase matrices for disc, needle, and cylinder are developed in terms of the scattering amplitudes of the corresponding fields which are computed by the forward scattering theorem. These quantities along with the Kirchhoff scattering model for a randomly rough surface are used in the standard radiative transfer formulation to compute the backscattering coefficient. Numerical illustrations for the backscattering coefficient are given as a function of the shading factor, incidence angle, leaf orientation distribution, branch orientation distribution, and the number density of leaves. Also illustrated are the properties of the extinction coefficient as a function of leaf and branch orientation distributions. Comparisons are made with measured backscattering coefficients from forested areas reported in the literature.
Lesion segmentation from multimodal MRI using random forest following ischemic stroke.
Mitra, Jhimli; Bourgeat, Pierrick; Fripp, Jurgen; Ghose, Soumya; Rose, Stephen; Salvado, Olivier; Connelly, Alan; Campbell, Bruce; Palmer, Susan; Sharma, Gagan; Christensen, Soren; Carey, Leeanne
2014-09-01
Understanding structure-function relationships in the brain after stroke is reliant not only on the accurate anatomical delineation of the focal ischemic lesion, but also on previous infarcts, remote changes and the presence of white matter hyperintensities. The robust definition of primary stroke boundaries and secondary brain lesions will have significant impact on investigation of brain-behavior relationships and lesion volume correlations with clinical measures after stroke. Here we present an automated approach to identify chronic ischemic infarcts in addition to other white matter pathologies, that may be used to aid the development of post-stroke management strategies. Our approach uses Bayesian-Markov Random Field (MRF) classification to segment probable lesion volumes present on fluid attenuated inversion recovery (FLAIR) MRI. Thereafter, a random forest classification of the information from multimodal (T1-weighted, T2-weighted, FLAIR, and apparent diffusion coefficient (ADC)) MRI images and other context-aware features (within the probable lesion areas) was used to extract areas with high likelihood of being classified as lesions. The final segmentation of the lesion was obtained by thresholding the random forest probabilistic maps. The accuracy of the automated lesion delineation method was assessed in a total of 36 patients (24 male, 12 female; mean age: 64.57 ± 14.23 years) at 3 months after stroke onset and compared with manually segmented lesion volumes by an expert. Accuracy assessment of the automated lesion identification method was performed using the commonly used evaluation metrics. The mean sensitivity of segmentation was measured to be 0.53 ± 0.13 with a mean positive predictive value of 0.75 ± 0.18. The mean lesion volume difference was observed to be 32.32% ± 21.643% with a high Pearson's correlation of r = 0.76 (p < 0.0001). The lesion overlap accuracy was measured in terms of Dice similarity coefficient with a mean of 0.60 ± 0.12, while the contour accuracy was observed with a mean surface distance of 3.06 mm ± 3.17 mm. The results signify that our method was successful in identifying most of the lesion areas in FLAIR with a low false positive rate. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krasnobaeva, L. A., E-mail: kla1983@mail.ru; Siberian State Medical University Moscowski Trakt 2, Tomsk, 634050; Shapovalov, A. V.
Within the formalism of the Fokker–Planck equation, the influence of a nonstationary external force, a random force, and dissipation effects on the dynamics of local conformational perturbations (kinks) propagating along the DNA molecule is investigated. Such waves play an important role in the regulation of important biological processes in living systems at the molecular level. As a dynamic model of DNA, a modified sine-Gordon equation was used, simulating the rotational oscillations of bases in one of the DNA chains. The equation of evolution of the kink momentum is obtained in the form of a stochastic differential equation in the Stratonovich sense within the framework of the well-known McLaughlin and Scott energy approach. The corresponding Fokker–Planck equation for the momentum distribution function coincides with the equation describing the Ornstein–Uhlenbeck process with a regular nonstationary external force. The influence of nonlinear stochastic effects on the kink dynamics is considered with the help of the nonlinear Fokker–Planck equation with the shift coefficient dependent on the first moment of the kink momentum distribution function. Expressions are derived for the average value and variance of the momentum. Examples are considered which demonstrate the influence of the external regular and random forces on the evolution of the average value and variance of the kink momentum.
Analysis of the correlation dimension for inertial particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustavsson, Kristian; Department of Physics, Göteborg University, 41296 Gothenburg; Mehlig, Bernhard
2015-07-15
We obtain an implicit equation for the correlation dimension which describes clustering of inertial particles in a complex flow onto a fractal measure. Our general equation involves a propagator of a nonlinear stochastic process in which the velocity gradient of the fluid appears as additive noise. When the long-time limit of the propagator is considered, our equation reduces to an existing large-deviation formalism from which it is difficult to extract concrete results. In the short-time limit, however, our equation reduces to a solvability condition on a partial differential equation. In the case where the inertial particles are much denser than the fluid, we show how this approach leads to a perturbative expansion of the correlation dimension, for which the coefficients can be obtained exactly and in principle to any order. We derive the perturbation series for the correlation dimension of inertial particles suspended in three-dimensional spatially smooth random flows with white-noise time correlations, obtaining the first 33 non-zero coefficients exactly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Qingpeng; Dinan, James; Tirukkovalur, Sravya
2016-01-28
Quantum Monte Carlo (QMC) applications perform simulation with respect to an initial state of the quantum mechanical system, which is often captured by using a cubic B-spline basis. This representation is stored as a read-only table of coefficients and accesses to the table are generated at random as part of the Monte Carlo simulation. Current QMC applications, such as QWalk and QMCPACK, replicate this table at every process or node, which limits scalability because increasing the number of processors does not enable larger systems to be run. We present a partitioned global address space approach to transparently managing this data using Global Arrays in a manner that allows the memory of multiple nodes to be aggregated. We develop an automated data management system that significantly reduces communication overheads, enabling new capabilities for QMC codes. Experimental results with QWalk and QMCPACK demonstrate the effectiveness of the data management system.
Tay, Laura; Lim, Wee Shiong; Chan, Mark; Ali, Noorhazlina; Chong, Mei Sian
2016-01-01
Gait disorders are common in early dementia, with particularly pronounced dual-task deficits, contributing to the increased fall risk and mobility decline associated with cognitive impairment. This study examines the effects of a combined cognitive stimulation and physical exercise programme (MINDVital) on gait performance under single- and dual-task conditions in older adults with mild dementia. Thirty-nine patients with early dementia participated in a multi-disciplinary rehabilitation programme comprising both physical exercise and cognitive stimulation. The programme was conducted in 8-week cycles with participants attending once weekly, and all participants completed 2 successive cycles. Cognitive, functional performance and behavioural symptoms were assessed at baseline and at the end of each 8-week cycle. Gait speed was examined under both single- (Timed Up and Go and 6-metre walk tests) and dual-task (animal category and serial counting) conditions. A random effects model was performed for the independent effect of MINDVital on the primary outcome variable of gait speed under dual-task conditions. The mean age of patients enrolled in the rehabilitation programme was 79 ± 6.2 years; 25 (64.1%) had a diagnosis of Alzheimer's dementia, and 26 (66.7%) were receiving cognitive enhancer therapy. There was a significant improvement in cognitive performance [random effects coefficient (standard error) = 0.90 (0.31), p = 0.003] and gait speed under both dual-task situations [animal category: random effects coefficient = 0.04 (0.02), p = 0.039; serial counting: random effects coefficient = 0.05 (0.02), p = 0.013], with reduced dual-task cost for gait speed [serial counting: random effects coefficient = -4.05 (2.35), p = 0.086] following successive MINDVital cycles. No significant improvement in single-task gait speed was observed. Improved cognitive performance over time was a significant determinant of changes in dual-task gait speed [random effects coefficients = 0.01 (0.005), p = 0.048, and 0.02 (0.005), p = 0.003 for category fluency and counting backwards, respectively]. A combined physical and cognitive rehabilitation programme leads to significant improvements in dual-task walking in early dementia, which may be attributable to improvement in cognitive performance, as single-task gait performance remained stable. © 2016 S. Karger AG, Basel.
Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D
2006-01-01
Background: In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables, and based on its maximum likelihood estimation, we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods: We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results: For the continuous variable, we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of confidence intervals on the within-subject coefficient of variation. The maximum likelihood estimation and sample size estimation based on a pre-specified width of the confidence interval are novel contributions to the literature for the binary variable. Conclusion: Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
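For orientation, the point estimator underlying this index can be computed from a one-way random effects layout (subjects by replicates). The sketch below uses synthetic MRI-like data; the variance-stabilizing transformation and confidence interval derived in the paper are not reproduced here:

```python
import numpy as np

def within_subject_cv(data):
    """WSCV = sqrt(within-subject mean square) / grand mean.

    data: array of shape (subjects, replicates), one-way random effects layout.
    """
    data = np.asarray(data, float)
    n, k = data.shape
    ms_within = ((data - data.mean(1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return np.sqrt(ms_within) / data.mean()

rng = np.random.default_rng(0)
subj = rng.normal(100, 10, size=(30, 1))          # 30 subjects' true values
reps = subj + rng.normal(0, 5, size=(30, 3))      # 3 repeated measurements each
print(f"WSCV = {within_subject_cv(reps):.3f}")    # expect about 5/100 = 0.05
```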
Kumar, Rajesh; Srivastava, Subodh; Srivastava, Rajeev
2017-07-01
For cancer detection from microscopic biopsy images, the image segmentation step, used to segment cells and nuclei, plays an important role. The accuracy of the segmentation approach dominates the final results. Microscopic biopsy images also carry intrinsic Poisson noise, and if it is present in the image the segmentation results may not be accurate. The objective is to propose an efficient fuzzy c-means based segmentation approach which can also handle the noise present in the image during the segmentation process itself, i.e. noise removal and segmentation are combined in one step. To address the above issues, in this paper a fourth order partial differential equation (FPDE) based nonlinear filter adapted to Poisson noise with a fuzzy c-means segmentation method is proposed. This approach is capable of effectively handling the segmentation problem of blocky artifacts while achieving a good tradeoff between Poisson noise removal and edge preservation of the microscopic biopsy images during the segmentation process for cancer detection from cells. The proposed approach is tested on a breast cancer microscopic biopsy data set with region of interest (ROI) segmented ground truth images. The microscopic biopsy data set contains 31 benign and 27 malignant images of size 896 × 768. The region of interest ground truth for all 58 images is also available for this data set. Finally, the result obtained from the proposed approach is compared with the results of popular segmentation algorithms: fuzzy c-means, color k-means, texture based segmentation, and total variation fuzzy c-means approaches. The experimental results show that the proposed approach provides better results in terms of various performance measures such as Jaccard coefficient, Dice index, Tanimoto coefficient, area under curve, accuracy, true positive rate, true negative rate, false positive rate, false negative rate, Rand index, global consistency error, and variation of information, as compared to other segmentation approaches used for cancer detection. Copyright © 2017 Elsevier B.V. All rights reserved.
Marginalized zero-altered models for longitudinal count data.
Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A
2016-10-01
Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.
Multilevel structural equation models for assessing moderation within and across levels of analysis.
Preacher, Kristopher J; Zhang, Zhen; Zyphur, Michael J
2016-06-01
Social scientists are increasingly interested in multilevel hypotheses, data, and statistical models as well as moderation or interactions among predictors. The result is a focus on hypotheses and tests of multilevel moderation within and across levels of analysis. Unfortunately, existing approaches to multilevel moderation have a variety of shortcomings, including conflated effects across levels of analysis and bias due to using observed cluster averages instead of latent variables (i.e., "random intercepts") to represent higher-level constructs. To overcome these problems and elucidate the nature of multilevel moderation effects, we introduce a multilevel structural equation modeling (MSEM) logic that clarifies the nature of the problems with existing practices and remedies them with latent variable interactions. This remedy uses random coefficients and/or latent moderated structural equations (LMS) for unbiased tests of multilevel moderation. We describe our approach and provide an example using the publicly available High School and Beyond data, with Mplus syntax in the Appendix. Our MSEM method eliminates problems of conflated multilevel effects and reduces bias in parameter estimates while offering a coherent framework for conceptualizing and testing multilevel moderation effects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Analysis of longitudinal "time series" data in toxicology.
Cox, C; Cory-Slechta, D A
1987-02-01
Studies focusing on chronic toxicity or on the time course of toxicant effect often involve repeated measurements or longitudinal observations of endpoints of interest. Experimental design considerations frequently necessitate between-group comparisons of the resulting trends. Typically, procedures such as the repeated-measures analysis of variance have been used for statistical analysis, even though the required assumptions may not be satisfied in some circumstances. This paper describes an alternative analytical approach which summarizes curvilinear trends by fitting cubic orthogonal polynomials to individual profiles of effect. The resulting regression coefficients serve as quantitative descriptors which can be subjected to group significance testing. Randomization tests based on medians are proposed to provide a comparison of treatment and control groups. Examples from the behavioral toxicology literature are considered, and the results are compared to more traditional approaches, such as repeated-measures analysis of variance.
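A sketch of that pipeline: fit a cubic orthogonal polynomial (here a Legendre basis on standardized time) to each subject's profile, then compare a chosen coefficient between groups with a randomization test on medians. Group sizes, noise levels, and the choice of the linear-trend coefficient are illustrative:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 10)        # standardized measurement times
control = 5 + 0.5 * t + rng.normal(0, 0.3, (12, 10))
treated = 5 + 2.0 * t - 1.0 * t**2 + rng.normal(0, 0.3, (12, 10))

def cubic_coefs(profiles):
    """Degree-3 Legendre fit per subject; rows are coefficient vectors."""
    return np.array([legendre.legfit(t, y, 3) for y in profiles])

cc, tc = cubic_coefs(control), cubic_coefs(treated)
obs = abs(np.median(tc[:, 1]) - np.median(cc[:, 1]))   # linear-trend contrast

# Randomization test on the between-group difference of medians
pooled = np.concatenate([cc[:, 1], tc[:, 1]])
count = 0
for _ in range(5000):
    rng.shuffle(pooled)
    count += abs(np.median(pooled[:12]) - np.median(pooled[12:])) >= obs
print(f"randomization p ≈ {count / 5000:.4f}")
```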
NASA Astrophysics Data System (ADS)
Wellens, Thomas; Jalabert, Rodolfo A.
2016-10-01
We develop a self-consistent theory describing the spin and spatial electron diffusion in the impurity band of doped semiconductors under the effect of a weak spin-orbit coupling. The resulting low-temperature spin-relaxation time and diffusion coefficient are calculated within different schemes of the self-consistent framework. The simplest of these schemes qualitatively reproduces previous phenomenological developments, while more elaborate calculations provide corrections that approach the values obtained in numerical simulations. The results are universal for zinc-blende semiconductors with electron conductance in the impurity band, and thus they are able to account for the measured spin-relaxation times of materials with very different physical parameters. From a general point of view, our theory opens a new perspective for describing the hopping dynamics in random quantum networks.
Pulsational stabilities of a star in thermal imbalance - Comparison between the methods
NASA Technical Reports Server (NTRS)
Vemury, S. K.
1978-01-01
The stability coefficients for quasi-adiabatic pulsations for a model in thermal imbalance are evaluated using the dynamical energy (DE) approach, the total (kinetic plus potential) energy (TE) approach, and the small amplitude (SA) approaches. From a comparison among the methods, it is found that there can exist two distinct stability coefficients under conditions of thermal imbalance as pointed out by Demaret. It is shown that both the TE approaches lead to one stability coefficient, while both the SA approaches lead to another coefficient. The coefficient obtained through the energy approaches is identified as the one which determines the stability of the velocity amplitudes. For a prenova model with a thin hydrogen-burning shell in thermal imbalance, several radial modes are found to be unstable both for radial displacements and for velocity amplitudes. However, a new kind of pulsational instability also appears, viz., while the radial displacements are unstable, the velocity amplitudes may be stabilized through the thermal imbalance terms.
PET-CT image fusion using random forest and à-trous wavelet transform.
Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo
2018-03-01
New image fusion rules for multimodal medical images are proposed in this work. Image fusion rules are defined by random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method is threefold. First, source images are decomposed into approximation and detail coefficients using AWT. Second, random forest is used to choose pixels from the approximation and detail coefficients for forming the approximation and detail coefficients of the fused image. Lastly, inverse AWT is applied to reconstruct fused image. All experiments have been performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on Mallat wavelet transform has also been implemented on these slices. A new image fusion performance measure along with 4 existing measures has been presented, which helps to compare the performance of 2 pixel level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative qualities and the new measure is meaningful. Copyright © 2017 John Wiley & Sons, Ltd.
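The translation-invariant decomposition is available in PyWavelets as the stationary wavelet transform. The sketch below substitutes a plain average/max-abs fusion rule for the paper's learned random-forest rule, so it illustrates the AWT plumbing rather than the proposed pixel selection; random arrays stand in for registered CT and PET slices:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
ct = rng.random((64, 64))    # stand-in for a registered CT slice
pet = rng.random((64, 64))   # stand-in for the matching PET slice

level = 2
c1 = pywt.swt2(ct, "db2", level)    # undecimated (a-trous-style) transform
c2 = pywt.swt2(pet, "db2", level)

fused = []
for (a1, d1), (a2, d2) in zip(c1, c2):
    a = 0.5 * (a1 + a2)                               # average approximations
    d = tuple(np.where(np.abs(x) >= np.abs(y), x, y)  # max-abs detail selection
              for x, y in zip(d1, d2))
    fused.append((a, d))

img = pywt.iswt2(fused, "db2")      # reconstruct the fused slice
print(img.shape)
```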
A framework to quantify uncertainties of seafloor backscatter from swath mapping echosounders
NASA Astrophysics Data System (ADS)
Malik, Mashkoor; Lurton, Xavier; Mayer, Larry
2018-06-01
Multibeam echosounders (MBES) have become a widely used acoustic remote sensing tool to map and study the seafloor, providing co-located bathymetry and seafloor backscatter. Although the uncertainty associated with MBES-derived bathymetric data has been studied extensively, the question of backscatter uncertainty has been addressed only minimally and hinders the quantitative use of MBES seafloor backscatter. This paper explores approaches to identifying uncertainty sources associated with MBES-derived backscatter measurements. The major sources of uncertainty are catalogued and the magnitudes of their relative contributions to the backscatter uncertainty budget are evaluated. These major uncertainty sources include seafloor insonified area (1-3 dB), absorption coefficient (up to > 6 dB), random fluctuations in echo level (5.5 dB for a Rayleigh distribution), and sonar calibration (device dependent). The magnitudes of these uncertainty sources vary based on how these effects are compensated for during data acquisition and processing. Various cases (no compensation, partial compensation and full compensation) for seafloor insonified area, transmission losses and random fluctuations were modeled to estimate their uncertainties in different scenarios. Uncertainty related to the seafloor insonified area can be reduced significantly by accounting for seafloor slope during backscatter processing while transmission losses can be constrained by collecting full water column absorption coefficient profiles (temperature and salinity profiles). To reduce random fluctuations to below 1 dB, at least 20 samples are recommended to be used while computing mean values. The estimation of uncertainty in backscatter measurements is constrained by the fact that not all instrumental components are characterized and documented sufficiently for commercially available MBES. Further involvement from manufacturers in providing this essential information is critically required.
Temporal variation and scale in movement-based resource selection functions
Hooten, M.B.; Hanks, E.M.; Johnson, D.S.; Alldredge, M.W.
2013-01-01
A common population characteristic of interest in animal ecology studies pertains to the selection of resources. That is, given the resources available to animals, what do they ultimately choose to use? A variety of statistical approaches have been employed to examine this question and each has advantages and disadvantages with respect to the form of available data and the properties of estimators given model assumptions. A wealth of high resolution telemetry data are now being collected to study animal population movement and space use and these data present both challenges and opportunities for statistical inference. We summarize traditional methods for resource selection and then describe several extensions to deal with measurement uncertainty and an explicit movement process that exists in studies involving high-resolution telemetry data. Our approach uses a correlated random walk movement model to obtain temporally varying use and availability distributions that are employed in a weighted distribution context to estimate selection coefficients. The temporally varying coefficients are then weighted by their contribution to selection and combined to provide inference at the population level. The result is an intuitive and accessible statistical procedure that uses readily available software and is computationally feasible for large datasets. These methods are demonstrated using data collected as part of a large-scale mountain lion monitoring study in Colorado, USA.
NASA Astrophysics Data System (ADS)
Laun, Frederik B.; Demberg, Kerstin; Nagel, Armin M.; Uder, Micheal; Kuder, Tristan A.
2017-11-01
Nuclear magnetic resonance (NMR) diffusion measurements can be used to probe porous structures or biological tissues by means of the random motion of water molecules. The short-time expansion of the diffusion coefficient in powers of √t, where t is the diffusion time related to the duration of the diffusion-weighting magnetic field gradient profile, is universally connected to structural parameters of the boundaries restricting the diffusive motion. The √t term is proportional to the surface-to-volume ratio. The t term is related to permeability and curvature. The short-time expansion can be measured with two approaches in NMR-based diffusion experiments: first, by the use of diffusion encodings of short total duration and, second, by application of oscillating gradients of long total duration. For oscillating gradients, the inverse of the oscillation frequency becomes the relevant time scale. The purpose of this manuscript is to show that the oscillating gradient approach is blind to the t term. On the one hand, this prevents fitting of permeability and curvature measures from this term. On the other hand, the t term does not bias the determination of the √t term in experiments.
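For reference, the leading terms of the short-time expansion discussed above read, in the standard (Mitra) form quoted here for orientation rather than taken from the abstract itself, with D_0 the free diffusion coefficient and S/V the surface-to-volume ratio of the restricting boundaries:

$$\frac{D(t)}{D_0} = 1 - \frac{4}{9\sqrt{\pi}}\,\frac{S}{V}\,\sqrt{D_0 t} + O(t),$$

and the O(t) term is precisely the one carrying the permeability and curvature information that, by the argument above, the oscillating-gradient approach cannot access.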
Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion
NASA Astrophysics Data System (ADS)
Li, Z.; Ghaith, M.
2017-12-01
Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
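A compact sketch of a polynomial chaos expansion in probabilists' Hermite polynomials, fit by collocation and least squares, is given below. The one-dimensional toy model, degree, and sample count are illustrative assumptions; the study's coupling of PCE coefficients to meteorological inputs is not reproduced here:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)
model = lambda xi: np.exp(0.3 * xi) + 0.1 * xi**2   # stand-in for a hydrologic model

# Probabilistic collocation: least-squares fit on He_0..He_5
xi = rng.normal(size=200)              # standard normal collocation points
A = hermevander(xi, 5)                 # Hermite basis evaluations
coef, *_ = np.linalg.lstsq(A, model(xi), rcond=None)

# Orthogonality under the standard normal gives E[He_k^2] = k!
norms = np.array([factorial(k) for k in range(6)])
print("PCE mean:    ", round(coef[0], 4))
print("PCE variance:", round(np.sum(coef[1:] ** 2 * norms[1:]), 4))
```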
NASA Astrophysics Data System (ADS)
Valyaev, A. B.; Krivoshlykov, S. G.
1989-06-01
It is shown that the problem of investigating the mode composition of a partly coherent radiation beam in a randomly inhomogeneous medium can be reduced to a study of evolution of the energy of individual modes and of the coefficients of correlations between the modes. General expressions are obtained for the coupling coefficients of modes in a parabolic waveguide with a random microbending of the axis and an analysis is made of their evolution as a function of the excitation conditions. An estimate is obtained of the distance in which a steady-state energy distribution between the modes is established. Explicit expressions are obtained for the correlation function in the case when a waveguide is excited by off-axial Gaussian beams or Gauss-Hermite modes.
Modeling of Thermal Phase Noise in a Solid Core Photonic Crystal Fiber-Optic Gyroscope
Song, Ningfang; Ma, Kun; Jin, Jing; Teng, Fei; Cai, Wei
2017-01-01
A theoretical model of the thermal phase noise in a square-wave modulated solid core photonic crystal fiber-optic gyroscope has been established, and then verified by measurements. The results demonstrate a good agreement between theory and experiment. The contribution of the thermal phase noise to the random walk coefficient of the gyroscope is derived. A fiber coil with 2.8 km length is used in the experimental solid core photonic crystal fiber-optic gyroscope, showing a random walk coefficient of 9.25 × 10−5 deg/√h. PMID:29072605
Comparison of NMR simulations of porous media derived from analytical and voxelized representations.
Jin, Guodong; Torres-Verdín, Carlos; Toumelin, Emmanuel
2009-10-01
We develop and compare two formulations of the random-walk method, grain-based and voxel-based, to simulate the nuclear-magnetic-resonance (NMR) response of fluids contained in various models of porous media. The grain-based approach uses a spherical grain pack as input, where the solid surface is analytically defined without an approximation. In the voxel-based approach, the input is a computer-tomography or computer-generated image of reconstructed porous media. Implementation of the two approaches is largely the same, except for the representation of porous media. For comparison, both approaches are applied to various analytical and digitized models of porous media: isolated spherical pore, simple cubic packing of spheres, and random packings of monodisperse and polydisperse spheres. We find that spin magnetization decays much faster in the digitized models than in their analytical counterparts. The difference in decay rate relates to the overestimation of surface area due to the discretization of the sample; it cannot be eliminated even if the voxel size decreases. However, once considering the effect of surface-area increase in the simulation of surface relaxation, good quantitative agreement is found between the two approaches. Different grain or pore shapes entail different rates of increase of surface area, whereupon we emphasize that the value of the "surface-area-corrected" coefficient may not be universal. Using an example of X-ray-CT image of Fontainebleau rock sample, we show that voxel size has a significant effect on the calculated surface area and, therefore, on the numerically simulated magnetization response.
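The surface-area overestimation can be reproduced in a few lines; the sketch below (with illustrative grid sizes) counts exposed voxel faces of a digitized sphere and compares the total with the analytic value 4πr²:

import numpy as np

def voxel_surface_area(r, n):
    # Digitize a sphere of radius r on an n^3 grid and sum exposed faces.
    h = 2.2 * r / n                          # voxel edge length
    ax = (np.arange(n) - n / 2 + 0.5) * h
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    solid = (X**2 + Y**2 + Z**2 <= r**2).astype(np.int8)
    faces = sum(np.count_nonzero(np.diff(solid, axis=a)) for a in range(3))
    return faces * h**2

r = 1.0
for n in (32, 64, 128):
    ratio = voxel_surface_area(r, n) / (4.0 * np.pi * r**2)
    print(f"n={n}: voxelized / analytic surface area = {ratio:.3f}")

The ratio stays well above 1 as the voxel size shrinks, which is why a surface-area correction is needed before the two approaches agree.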
A proposed method to investigate reliability throughout a questionnaire.
Wentzel-Larsen, Tore; Norekvål, Tone M; Ulvik, Bjørg; Nygård, Ottar; Pripp, Are H
2011-10-05
Questionnaires are used extensively in medical and health care research and depend on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect the reliability of their answers. A method is proposed for "screening" for systematic change in random error, which could assess changed reliability of answers. A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of the intraclass correlation coefficient (ICC). The method was also applied to a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in a questionnaire. This slope was proposed as an awareness measure for assessing whether respondents provide only a random answer or one based on substantial cognitive effort. Scales from different factor structures of the Jalowiec Coping Scale had different effects on this awareness measure. Even though the assumptions in the simulation study might be limited compared with real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from a clinical dataset indicated that the awareness measure differed between scales.
Depression, distress and self-efficacy: The impact on diabetes self-care practices.
Devarajooh, Cassidy; Chinna, Karuthan
2017-01-01
The prevalence of type 2 diabetes is increasing in Malaysia, and people with diabetes have been reported to suffer from depression and diabetes distress which influences their self-efficacy in performing diabetes self-care practices. This interviewer administered, cross sectional study, conducted in the district of Hulu Selangor, Malaysia, involving 371 randomly selected patients with type 2 diabetes, recruited from 6 health clinics, aimed to examine a conceptual model regarding the association between depression, diabetes distress and self-efficacy with diabetes self-care practices using the partial least square approach of structural equation modeling. In this study, diabetes self-care practices were similar regardless of sex, age group, ethnicity, education level, diabetes complications or type of diabetes medication. This study found that self-efficacy had a direct effect on diabetes self-care practice (path coefficient = 0.438, p<0.001). Self-care was not directly affected by depression and diabetes distress, but indirectly by depression (path coefficient = -0.115, p<0.01) and diabetes distress (path coefficient = -0.122, p<0.001) via self-efficacy. In conclusion, to improve self-care practices, effort must be focused on enhancing self-efficacy levels, while not forgetting to deal with depression and diabetes distress, especially among those with poorer levels of self-efficacy.
Deep Learning Role in Early Diagnosis of Prostate Cancer
Reda, Islam; Khalil, Ashraf; Elmogy, Mohammed; Abou El-Fetouh, Ahmed; Shalaby, Ahmed; Abou El-Ghar, Mohamed; Elmaghraby, Adel; Ghazal, Mohammed; El-Baz, Ayman
2018-01-01
The objective of this work is to develop a computer-aided diagnostic system for early diagnosis of prostate cancer. The presented system integrates both clinical biomarkers (prostate-specific antigen) and features extracted from diffusion-weighted magnetic resonance imaging collected at multiple b values. The presented system performs three major processing steps. First, the prostate is delineated using a hybrid approach that combines a level-set model with nonnegative matrix factorization. Second, diffusion parameters are estimated and normalized; these are the apparent diffusion coefficients of the delineated prostate volumes at different b values, which are then refined using a generalized Gaussian Markov random field model. Then, the cumulative distribution functions of the processed apparent diffusion coefficients at multiple b values are constructed. In parallel, a K-nearest neighbor classifier is employed to transform the prostate-specific antigen results into diagnostic probabilities. Finally, those prostate-specific antigen–based probabilities are integrated with the initial diagnostic probabilities obtained using stacked nonnegativity constraint sparse autoencoders that employ apparent diffusion coefficient–cumulative distribution functions for better diagnostic accuracy. Experiments conducted on 18 diffusion-weighted magnetic resonance imaging data sets achieved 94.4% diagnosis accuracy (sensitivity = 88.9% and specificity = 100%), indicating the promise of the presented computer-aided diagnostic system. PMID:29804518
Estimation of real-time runway surface contamination using flight data recorder parameters
NASA Astrophysics Data System (ADS)
Curry, Donovan
Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise the longitudinal, lateral and normal forces due to landing are calculated, along with the individual deceleration components present as an aircraft comes to rest during ground roll. In order to validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high-fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by a flight data recorder (FDR) are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in this research effort. With all needed parameters, a comparison and validation with simulated and estimated data, under different runway conditions, is performed. Finally, this report presents results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses have been performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to give a reasonably accurate estimate when compared to the simulated friction coefficient. This remains true when the FDR and estimated parameters are subjected to white noise and when crosswind is introduced to the simulation. The linear analysis shows that the minimum frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that, with estimated parameters increased and decreased by up to 25% at random, high-priority parameters have to be accurate to within at least +/-5% to produce less than a 1% change in the average coefficient of friction. Non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously by up to +/-27%. At worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.
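A minimal sketch of the underlying force balance, reduced to the longitudinal axis only (all values are hypothetical stand-ins for FDR-recorded or estimated parameters; the full method also uses the lateral and moment equations):

def friction_coefficient(mass, ax, rho, V, S, CD, CL, thrust, g=9.81):
    # From sum(Fx) = 0 in the body frame: m*ax = thrust - drag - F_friction,
    # and the gear carries the weight not supported by lift.
    q = 0.5 * rho * V**2                  # dynamic pressure
    drag = q * S * CD                     # aerodynamic drag
    lift = q * S * CL                     # aerodynamic lift
    normal = mass * g - lift              # load on the landing gear
    f_friction = thrust - drag - mass * ax
    return f_friction / normal

# Deceleration of -2.5 m/s^2 at 60 m/s with idle thrust (illustrative):
print(friction_coefficient(60000.0, -2.5, 1.225, 60.0, 122.0, 0.08, 0.4, 0.0))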
Balasubramaniam, Krishna N; Beisner, Brianne A; Berman, Carol M; De Marco, Arianna; Duboscq, Julie; Koirala, Sabina; Majolo, Bonaventura; MacIntosh, Andrew J; McFarland, Richard; Molesti, Sandra; Ogawa, Hideshi; Petit, Odile; Schino, Gabriele; Sosa, Sebastian; Sueur, Cédric; Thierry, Bernard; de Waal, Frans B M; McCowan, Brenda
2018-01-01
Among nonhuman primates, the evolutionary underpinnings of variation in social structure remain debated, with both ancestral relationships and adaptation to current conditions hypothesized to play determining roles. Here we assess whether interspecific variation in higher-order aspects of female macaque (genus: Macaca) dominance and grooming social structure show phylogenetic signals, that is, greater similarity among more closely-related species. We use a social network approach to describe higher-order characteristics of social structure, based on both direct interactions and secondary pathways that connect group members. We also ask whether network traits covary with each other, with species-typical social style grades, and/or with sociodemographic characteristics, specifically group size, sex-ratio, and current living condition (captive vs. free-living). We assembled 34-38 datasets of female-female dyadic aggression and allogrooming among captive and free-living macaques representing 10 species. We calculated dominance (transitivity, certainty), and grooming (centrality coefficient, Newman's modularity, clustering coefficient) network traits as aspects of social structure. Computations of K statistics and randomization tests on multiple phylogenies revealed moderate-strong phylogenetic signals in dominance traits, but moderate-weak signals in grooming traits. GLMMs showed that grooming traits did not covary with dominance traits and/or social style grade. Rather, modularity and clustering coefficient, but not centrality coefficient, were strongly predicted by group size and current living condition. Specifically, larger groups showed more modular networks with sparsely-connected clusters than smaller groups. Further, this effect was independent of variation in living condition, and/or sampling effort. In summary, our results reveal that female dominance networks were more phylogenetically conserved across macaque species than grooming networks, which were more labile to sociodemographic factors. Such findings narrow down the processes that influence interspecific variation in two core aspects of macaque social structure. Future directions should include using phylogeographic approaches, and addressing challenges in examining the effects of socioecological factors on primate social structure. © 2017 Wiley Periodicals, Inc.
UNSTEADY DISPERSION IN RANDOM INTERMITTENT FLOW
The longitudinal dispersion coefficient of a conservative tracer was calculated from flow tests in a dead-end pipe loop system. Flow conditions for these tests ranged from laminar to transitional flow, and from steady to intermittent and random. Two static mixers linked in series...
Polynomials with Restricted Coefficients and Their Applications
1987-01-01
sums of exponentials of quadratics, he reduced such sums to exponentials of linears (geometric sums!) by simply multiplying by their conjugates... the same algebraic manipulations as before lead to ... with ... = a+(2r+1)t, A = a+(2r+2m+1)t. To estimate the right... coefficients. These random polynomials represent the deviation in frequency response of a linear, equispaced antenna array caused by coefficient
Anta, Juan A; Mora-Seró, Iván; Dittrich, Thomas; Bisquert, Juan
2008-08-14
We make use of the random walk numerical simulation (RWNS) method to compute the "jump" diffusion coefficient of electrons in nanostructured materials via the mean-square displacement. First, a summary of analytical results is given that relates the diffusion coefficient obtained from RWNS to those in the multiple-trapping (MT) and hopping models. Simulations are performed in a three-dimensional lattice of trap sites with energies distributed according to an exponential distribution and with a step-function distribution centered at the Fermi level. It is observed that once the stationary state is reached, the ensemble of particles follows Fermi-Dirac statistics with a well-defined Fermi level. In this stationary situation the diffusion coefficient obeys the theoretical predictions, so that RWNS effectively reproduces the MT model. Mobilities can also be computed when an electrical bias is applied, and they are observed to comply with the Einstein relation when compared with steady-state diffusion coefficients. The evolution of the system towards the stationary situation is also studied. When the diffusion coefficients are monitored over simulation time, a transition from anomalous to trap-limited transport is observed. The nature of this transition is discussed in terms of the evolution of the electron distribution and the Fermi level. All these results will facilitate the use of RW simulation and related methods to interpret steady-state as well as transient experimental techniques.
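A bare-bones sketch of such a trap-limited walk is shown below; it uses an annealed approximation (a fresh trap energy at every visit) rather than the quenched energy landscape of the paper, and illustrative parameters:

import numpy as np

rng = np.random.default_rng(0)
T, T0 = 300.0, 800.0            # temperature and exponential DOS width (K)
n_walkers, n_steps = 500, 2000

# Trap depths drawn anew at each visited site; detrapping follows an
# exponential waiting time with mean exp(E/T).
E = rng.exponential(T0, size=(n_walkers, n_steps))
dwell = rng.exponential(np.exp(E / T))

# Unbiased nearest-neighbor hops on a cubic lattice (6 directions).
moves = rng.integers(0, 6, size=(n_walkers, n_steps))
steps = np.zeros((n_walkers, n_steps, 3))
for d in range(3):
    steps[..., d] = (moves == 2 * d).astype(float) - (moves == 2 * d + 1)

t = np.cumsum(dwell, axis=1).mean(axis=0)
msd = (np.cumsum(steps, axis=1) ** 2).sum(axis=2).mean(axis=0)
D_jump = msd / (6.0 * t)        # decays with time in the anomalous regime
print(D_jump[::400])

With T below the DOS width T0, the printed diffusion coefficients decay along the run, mirroring the anomalous-to-trap-limited transition described above.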
Pancreas and cyst segmentation
NASA Astrophysics Data System (ADS)
Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie
2016-03-01
Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low-contrast boundaries, variability in shape and location, and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation, which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
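The Dice coefficient reported above is the standard overlap score; for completeness, a straightforward implementation on binary masks:

import numpy as np

def dice(seg, truth):
    # 2|A ∩ B| / (|A| + |B|) on boolean masks.
    seg, truth = seg.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(seg, truth).sum() / (seg.sum() + truth.sum())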
Linear Estimation of Particle Bulk Parameters from Multi-Wavelength Lidar Measurements
NASA Technical Reports Server (NTRS)
Veselovskii, Igor; Dubovik, Oleg; Kolgotin, A.; Korenskiy, M.; Whiteman, D. N.; Allakhverdiev, K.; Huseyinoglu, F.
2012-01-01
An algorithm for linear estimation of aerosol bulk properties such as particle volume, effective radius and complex refractive index from multiwavelength lidar measurements is presented. The approach uses the fact that the total aerosol concentration can be well approximated as a linear combination of aerosol characteristics measured by multiwavelength lidar. Therefore, the aerosol concentration can be estimated from lidar measurements without the need to derive the size distribution, which entails more sophisticated procedures. The definition of the coefficients required for the linear estimates is based on an expansion of the particle size distribution in terms of the measurement kernels. Once the coefficients are established, the approach permits fast retrieval of aerosol bulk properties when compared with the full regularization technique. In addition, the straightforward estimation of bulk properties stabilizes the inversion, making it more resistant to noise in the optical data. Numerical tests demonstrate that for data sets containing three aerosol backscattering and two extinction coefficients (the so-called 3β + 2α configuration) the uncertainties in the retrieval of particle volume and surface area are below 45% when input data random uncertainties are below 20%. Moreover, using linear estimates allows reliable retrievals even when the number of input data is reduced. To evaluate the approach, the results obtained using this technique are compared with those based on the previously developed full inversion scheme that relies on the regularization procedure. Both techniques were applied to data measured by the multiwavelength lidar at NASA/GSFC. The results obtained with both methods using the same observations are in good agreement. At the same time, the high speed of the retrieval using linear estimates makes the method preferable for generating aerosol information from extended lidar observations. To demonstrate the efficiency of the method, an extended time series of observations acquired in Turkey in May 2010 was processed using the linear estimates technique, permitting, for what we believe to be the first time, temporal-height distributions of particle parameters.
Backscattering from a randomly rough dielectric surface
NASA Technical Reports Server (NTRS)
Fung, Adrian K.; Li, Zongqian; Chen, K. S.
1992-01-01
A backscattering model for scattering from a randomly rough dielectric surface is developed based on an approximate solution of a pair of integral equations for the tangential surface fields. Both like- and cross-polarized scattering coefficients are obtained. It is found that the like-polarized scattering coefficients contain two types of terms: single scattering terms and multiple scattering terms. The single scattering terms in like-polarized scattering are shown to reduce to the first-order solutions derived from the small perturbation method when the roughness parameters satisfy the slightly rough conditions. When surface roughnesses are large but the surface slope is small, only a single scattering term corresponding to the standard Kirchhoff model is significant. If the surface slope is large, the multiple scattering term will also be significant. The cross-polarized backscattering coefficients satisfy reciprocity and contain only multiple scattering terms. The difference between vertical and horizontal scattering coefficients is found to increase with the dielectric constant and is generally smaller than that predicted by the first-order small perturbation model. Good agreement is obtained between this model and measurements from statistically known surfaces.
Ross, Michelle; Wakefield, Jon
2015-10-01
Two-phase study designs are appealing since they allow for the oversampling of rare sub-populations which improves efficiency. In this paper we describe a Bayesian hierarchical model for the analysis of two-phase data. Such a model is particularly appealing in a spatial setting in which random effects are introduced to model between-area variability. In such a situation, one may be interested in estimating regression coefficients or, in the context of small area estimation, in reconstructing the population totals by strata. The efficiency gains of the two-phase sampling scheme are compared to standard approaches using 2011 birth data from the research triangle area of North Carolina. We show that the proposed method can overcome small sample difficulties and improve on existing techniques. We conclude that the two-phase design is an attractive approach for small area estimation.
Electromagnetic wave extinction within a forested canopy
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.
1989-01-01
A forested canopy is modeled by a collection of randomly oriented finite-length cylinders shaded by randomly oriented and distributed disk- or needle-shaped leaves. For a plane wave exciting the forested canopy, the extinction coefficient is formulated in terms of the extinction cross sections (ECSs) in the local frame of each forest component and the Eulerian angles of orientation (used to describe the orientation of each component). The ECSs in the local frame for the finite-length cylinders used to model the branches are obtained by using the forward-scattering theorem. ECSs in the local frame for the disk- and needle-shaped leaves are obtained by summing the absorption and scattering cross sections. The behavior of the extinction coefficients with the incidence angle is investigated numerically for both deciduous and coniferous forests. The dependence of the extinction coefficients on the orientation of the leaves is illustrated numerically.
Controllability of social networks and the strategic use of random information.
Cremonini, Marco; Casamassima, Francesca
2017-01-01
This work is aimed at studying realistic social control strategies for social networks based on the introduction of random information into the state of selected driver agents. Deliberately exposing selected agents to random information is a technique already tried in recommender systems and search engines, and it represents one of the few options for influencing the behavior of a social context that could be accepted as ethical, could be fully disclosed to members, and does not involve the use of force or deception. Our research is based on a model of knowledge diffusion applied to a time-varying adaptive network and considers two well-known strategies for influencing social contexts: one is the selection of a few influencers whose actions are manipulated in order to drive the whole network to a certain behavior; the other drives the network behavior by acting on the state of a large subset of ordinary, scarcely influential users. The two approaches have been studied in terms of network and diffusion effects. The network effect is analyzed through the changes induced on the network average degree and clustering coefficient, while the diffusion effect is based on two ad hoc metrics defined to measure the degree of knowledge diffusion and skill level, as well as the polarization of agent interests. The results, obtained through simulations on synthetic networks, show rich dynamics and strong effects on the communication structure and on the distribution of knowledge and skills. These findings support our hypothesis that the strategic use of random information could represent a realistic approach to social network controllability, and that with both strategies, in principle, the control effect could be remarkable.
Fault Detection of Aircraft System with Random Forest Algorithm and Similarity Measure
Park, Wookje; Jung, Sikhang
2014-01-01
A fault detection algorithm was developed using a similarity measure and the random forest algorithm. The algorithm was applied to an unmanned aerial vehicle (UAV) that we prepared. The similarity measure was designed with the help of distance information, and its usefulness was verified by proof. The fault decision was carried out by calculating a weighted similarity measure. Twelve available coefficients from the healthy- and faulty-status data groups were used to reach the decision. The similarity measure weights were obtained through the random forest algorithm (RFA), which provides data priority. In order to obtain a fast decision response, a limited number of coefficients was also considered. The relation between detection rate and the amount of feature data was analyzed and illustrated. Through repeated trials of the similarity calculation, the useful amount of data was obtained. PMID:25057508
Controlling Disorder by Electric Field Directed Reconfiguration of Nanowires to Tune Random Lasing.
Donahue, Philip P; Zhang, Chenji; Nye, Nicholas; Miller, Jennifer; Wang, Cheng-Yu; Tang, Rong; Christodoulides, Demetrios; Keating, Christine D; Liu, Zhiwen
2018-06-27
Top-down fabrication is commonly used to provide positioning control of optical structures; yet, it places stringent limitations on component materials and oftentimes, dynamic reconfigurability is challenging to realize. Here we present a reconfigurable nanoparticle platform that can integrate heterogeneous particle assembly of different shapes, sizes, and material compositions. We demonstrate dynamic manipulation of disorder in this platform and use it to controllably enhance or frustrate random laser emission for a suspension of titanium dioxide nanowires in a dye solution. Using an alternating current electric field, we control the nanowire orientation to dynamically control the collective scattering of the sample and thus light confinement. Our theoretical model indicates that an increase of 22% in scattering coefficient can be achieved for the experimentally determined nanowire length distribution upon alignment. As a result, a nearly 20-fold enhancement in lasing intensity was achieved. We illustrate the generality of the approach by demonstrating enhanced lasing for aligned nanowires of other materials including gold, mixed gold/dielectric and vanadium oxide (VxOy).
Ma, Xin; Guo, Jing; Sun, Xiao
2015-01-01
The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features play important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and a 0.737 Matthews correlation coefficient). The high prediction accuracy and successful prediction performance suggest that our method can be a useful approach to identify RNA-binding proteins from sequence information.
Diabetic Erythrocytes Test by Correlation Coefficient
Korol, A.M; Foresto, P; Darrigo, M; Rosso, O.A
2008-01-01
Even when a healthy individual is studied, his/her erythrocytes in capillaries continually change their shape in a synchronized erratic fashion. In this work, the problem of characterizing the cell behavior is studied from the perspective of a bounded correlated random walk, based on the assumption that diffractometric data involve both deterministic and stochastic components. The photometric readings are obtained by ektacytometry over several million shear-elongated cells, using a home-made device called an Erythrodeformeter. We have only a scalar signal and no governing equations; therefore the complete behavior has to be reconstructed in an artificial phase space. To analyze the dynamics we used the time-delay coordinate technique suggested by Takens, the May algorithm, and the Fourier transform. The results suggest that, under the random-walk approach, samples from healthy controls exhibit significant differences from those from diabetic patients, which could allow us to claim that we have linked mathematical nonlinear tools with clinical aspects of the rheological properties of diabetic erythrocytes. PMID:19415139
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
Permeability of model porous medium formed by random discs
NASA Astrophysics Data System (ADS)
Gubaidullin, A. A.; Gubkin, A. S.; Igoshin, D. E.; Ignatev, P. A.
2018-03-01
A two-dimensional model of a porous medium whose skeleton is formed by randomly located overlapping discs is proposed. The geometry and computational grid are built in the open package Salome. The flow of a Newtonian liquid in the longitudinal and transverse directions is calculated and its flow rate is determined. The numerical solution of the Navier-Stokes equations for a given pressure drop at the boundaries of the domain is realized in the open package OpenFOAM. The calculated flow rate is used to determine the permeability coefficient on the basis of Darcy's law, as sketched below. To evaluate the representativeness of the computational domain, the permeability coefficients in the longitudinal and transverse directions are compared.
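The post-processing step is just Darcy's law rearranged for the permeability; a small sketch with hypothetical numbers:

def permeability(Q, mu, L, A, dP):
    # Darcy's law rearranged: k = Q * mu * L / (A * dP); SI units give m^2.
    return Q * mu * L / (A * dP)

# Illustrative values: water (mu = 1e-3 Pa s) through a 1 mm sample.
print(permeability(Q=1e-9, mu=1e-3, L=1e-3, A=1e-6, dP=100.0))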
The Bayesian group lasso for confounded spatial data
Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin E.; Walsh, Daniel P.
2017-01-01
Generalized linear mixed models for spatial processes are widely used in applied statistics. In many applications of the spatial generalized linear mixed model (SGLMM), the goal is to obtain inference about regression coefficients while achieving optimal predictive ability. When implementing the SGLMM, multicollinearity among covariates and the spatial random effects can make computation challenging and influence inference. We present a Bayesian group lasso prior with a single tuning parameter that can be chosen to optimize the predictive ability of the SGLMM and jointly regularize the regression coefficients and spatial random effect. We implement the group lasso SGLMM using efficient Markov chain Monte Carlo (MCMC) algorithms and demonstrate how multicollinearity among covariates and the spatial random effect can be monitored as a derived quantity. To test our method, we compared several parameterizations of the SGLMM using simulated data and two examples from plant ecology and disease ecology. In all examples, problematic levels of multicollinearity occurred and influenced sampling efficiency and inference. We found that the group lasso prior resulted in roughly twice the effective sample size for MCMC samples of regression coefficients and can have higher and less variable predictive accuracy based on out-of-sample data when compared to the standard SGLMM.
NASA Astrophysics Data System (ADS)
Longoria, Raul Gilberto
An experimental apparatus has been developed which can be used to generate a general time-dependent planar flow across a cylinder. A mass of water enclosed with no free surface within a square cross-section tank and two spring pre-loaded pistons is oscillated using a hydraulic actuator. A circular cylinder is suspended horizontally in the tank by two X-Y force transducers used to simultaneously measure the total in-line and transverse forces. Fluid motion is measured using a differential pressure transducer for instantaneous acceleration and an LVDT for displacement. This investigation provides measurements of forces on cylinders subjected to a planar fluid flow velocity with a time (and frequency) dependence which more accurately represents the random conditions encountered in a natural ocean environment. The use of the same apparatus for both sinusoidal and random experiments provides a quantified assessment of the applicability of sinusoidal planar oscillatory flow data in offshore structure design methods. The drag and inertia coefficients for a Morison equation representation of the in-line force are presented for both sinusoidal and random flow. Comparison of the sinusoidal results is favorable with those of previous investigations. The results from the random experiments illustrate the difference in the force mechanism by contrasting the force transfer coefficients for the in-line and transverse forces. It is found that application of sinusoidal results to random hydrodynamic in-line force prediction using the Morison equation incorrectly weights the drag and inertia components, and the transverse force is overpredicted. The use of random planar oscillatory flow in the laboratory, contrasted with sinusoidal planar oscillatory flow, quantifies the accepted belief that the force transfer coefficients from sinusoidal flow experiments are conservative for prediction of forces on cylindrical structures subjected to random sea waves and the ensuing forces. Further analysis of data is conducted in the frequency domain to illustrate models used for predicting the power spectral density of the in-line force, including a nonlinear describing function method. It is postulated that the large-scale vortex activity prominent in sinusoidal oscillatory flow is subdued in random flow conditions.
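For context, the Morison equation referred to throughout decomposes the in-line force per unit cylinder length into drag and inertia parts; the sketch below uses illustrative coefficient values, not those measured in the experiments:

import numpy as np

def morison_force(u, dudt, D, rho=1000.0, Cd=1.2, Cm=2.0):
    # Per-unit-length in-line force: drag on u|u| plus inertia on du/dt.
    drag = 0.5 * rho * Cd * D * u * np.abs(u)
    inertia = rho * Cm * (np.pi * D**2 / 4.0) * dudt
    return drag + inertia

# Sinusoidal planar flow as in the baseline experiments (values illustrative).
t = np.linspace(0.0, 10.0, 1000)
u = 0.5 * np.sin(2 * np.pi * 0.2 * t)
dudt = 0.5 * 2 * np.pi * 0.2 * np.cos(2 * np.pi * 0.2 * t)
F = morison_force(u, dudt, D=0.05)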
NASA Astrophysics Data System (ADS)
Li, Jin Hua; Xu, Hui; Sun, Ting Ting; Pei, Shi Xin; Ren, Hai Dong
2018-05-01
We analyze in detail the effects of the intermode nonlinearity (IEMN) and intramode nonlinearity (IRMN) on modulation instability (MI) in randomly birefringent two-mode optical fibers (RB-TMFs). In the anomalous dispersion regime, the MI gain is enhanced significantly as the IEMN and IRMN coefficients increase. In the normal dispersion regime, MI can be generated without the differential mode group delay (DMGD) effect, as long as the IEMN coefficient between two distinct modes is above a critical value, or the IRMN coefficient inside a mode is below a critical value. This critical IEMN (IRMN) coefficient depends strongly on the given IRMN (IEMN) coefficient and DMGD for a given nonlinear RB-TMF structure, and is independent of the input total power, the power ratio distribution and the group velocity dispersion (GVD) ratio between the two modes. On the other hand, in contrast to the MI band arising from the pure effect of DMGD in the normal dispersion regime, where MI vanishes above a critical total power, the MI band generated under the combined effects of IEMN and IRMN without DMGD exists for any total power and is enhanced with the total power. The MI analysis is verified numerically by launching perturbed continuous waves (CWs) with the wave propagation method.
NASA Astrophysics Data System (ADS)
Kalnin, Juris R.; Berezhkovskii, Alexander M.
2013-11-01
The Lifson-Jackson formula provides the effective free diffusion coefficient for a particle diffusing in an arbitrary one-dimensional periodic potential. Its counterpart, when the underlying dynamics is described in terms of an unbiased nearest-neighbor Markovian random walk on a one-dimensional periodic lattice is given by the formula obtained by Derrida. It is shown that the latter formula can be considered as a discretized version of the Lifson-Jackson formula with correctly chosen position-dependent diffusion coefficient.
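For reference, the Lifson-Jackson result can be stated compactly; the discrete version below is shown only schematically, in the spirit of the abstract's "discretized" reading of Derrida's formula:

\[
D_{\mathrm{eff}} \;=\; \frac{D_0}{\big\langle e^{\beta U(x)} \big\rangle \,\big\langle e^{-\beta U(x)} \big\rangle}, \qquad \langle f \rangle \;=\; \frac{1}{L}\int_0^L f(x)\,dx,
\]

with \beta = 1/k_B T and L the period of the potential. On a lattice of N sites per period with spacing a and attempt time \tau, the analogous expression replaces the two period averages by averages of e^{\pm\beta U_i} over the N site energies, with D_0 replaced by a^2/(2\tau).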
Knox, Stephanie A; Chondros, Patty
2004-01-01
Background: Cluster sample study designs are cost effective; however, cluster samples violate the simple random sample assumption of independence of observations. Failure to account for the intra-cluster correlation of observations when sampling through clusters may lead to an under-powered study. Researchers therefore need estimates of intra-cluster correlation for a range of outcomes to calculate sample size. We report intra-cluster correlation coefficients observed within a large-scale cross-sectional study of general practice in Australia, where the general practitioner (GP) was the primary sampling unit and the patient encounter was the unit of inference. Methods: Each year the Bettering the Evaluation and Care of Health (BEACH) study recruits a random sample of approximately 1,000 GPs across Australia. Each GP completes details of 100 consecutive patient encounters. Intra-cluster correlation coefficients were estimated for patient demographics, morbidity managed and treatments received. Intra-cluster correlation coefficients were estimated for descriptive outcomes and for associations between outcomes and predictors, and were compared across two independent samples of GPs drawn three years apart. Results: Between April 1999 and March 2000, a random sample of 1,047 Australian general practitioners recorded details of 104,700 patient encounters. Intra-cluster correlation coefficients for patient demographics ranged from 0.055 for patient sex to 0.451 for language spoken at home. Intra-cluster correlations for morbidity variables ranged from 0.005 for the management of eye problems to 0.059 for the management of psychological problems. The intra-cluster correlation for the association between two variables was smaller than the descriptive intra-cluster correlation of each variable. When compared with the April 2002 to March 2003 sample (1,008 GPs), the estimated intra-cluster correlation coefficients were found to be consistent across samples. Conclusions: The demonstrated precision and reliability of the estimated intra-cluster correlations indicate that these coefficients will be useful for calculating sample sizes in future general practice surveys that use the GP as the primary sampling unit. PMID:15613248
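To illustrate how such coefficients feed into design calculations, the standard design-effect inflation can be applied directly (a generic formula, not specific to the BEACH study; numbers are illustrative):

def cluster_sample_size(n_srs, m, icc):
    # Inflate the simple-random-sample size by the design effect
    # deff = 1 + (m - 1) * icc, where m is the cluster size.
    deff = 1 + (m - 1) * icc
    return int(round(n_srs * deff))

# e.g. 1,000 encounters needed under SRS, 100 encounters per GP, ICC = 0.055:
print(cluster_sample_size(1000, 100, 0.055))   # -> 6445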
Inherent noise can facilitate coherence in collective swarm motion
Yates, Christian A.; Erban, Radek; Escudero, Carlos; Couzin, Iain D.; Buhl, Jerome; Kevrekidis, Ioannis G.; Maini, Philip K.; Sumpter, David J. T.
2009-01-01
Among the most striking aspects of the movement of many animal groups are their sudden coherent changes in direction. Recent observations of locusts and starlings have shown that this directional switching is an intrinsic property of their motion. Similar direction switches are seen in self-propelled particle and other models of group motion. Comprehending the factors that determine such switches is key to understanding the movement of these groups. Here, we adopt a coarse-grained approach to the study of directional switching in a self-propelled particle model assuming an underlying one-dimensional Fokker–Planck equation for the mean velocity of the particles. We continue with this assumption in analyzing experimental data on locusts and use a similar systematic Fokker–Planck equation coefficient estimation approach to extract the relevant information for the assumed Fokker–Planck equation underlying that experimental data. In the experiment itself the motion of groups of 5 to 100 locust nymphs was investigated in a homogeneous laboratory environment, helping us to establish the intrinsic dynamics of locust marching bands. We determine the mean time between direction switches as a function of group density for the experimental data and the self-propelled particle model. This systematic approach allows us to identify key differences between the experimental data and the model, revealing that individual locusts appear to increase the randomness of their movements in response to a loss of alignment by the group. We give a quantitative description of how locusts use noise to maintain swarm alignment. We discuss further how properties of individual animal behavior, inferred by using the Fokker–Planck equation coefficient estimation approach, can be implemented in the self-propelled particle model to replicate qualitatively the group level dynamics seen in the experimental data. PMID:19336580
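A minimal version of this coefficient-estimation step, assuming a scalar time series z of the group's mean velocity (bin counts and thresholds here are illustrative choices):

import numpy as np

def fp_coefficients(z, dt, n_bins=30):
    # Kramers-Moyal style estimates: bin the state and compute conditional
    # moments of the increments to get drift D1(z) and diffusion D2(z).
    dz = np.diff(z)
    bins = np.linspace(z.min(), z.max(), n_bins + 1)
    idx = np.digitize(z[:-1], bins) - 1
    centers, drift, diff = [], [], []
    for b in range(n_bins):
        sel = idx == b
        if sel.sum() < 10:          # skip sparsely populated bins
            continue
        centers.append(0.5 * (bins[b] + bins[b + 1]))
        drift.append(dz[sel].mean() / dt)
        diff.append((dz[sel] ** 2).mean() / (2.0 * dt))
    return np.array(centers), np.array(drift), np.array(diff)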
Na, X D; Zang, S Y; Wu, C S; Li, W L
2015-11-01
Knowledge of the spatial extent of forested wetlands is essential to many studies, including wetland functioning assessment, greenhouse gas flux estimation, and wildlife suitable habitat identification. For discriminating forested wetlands from their adjacent land cover types, researchers have resorted to image analysis techniques applied to numerous remotely sensed data. While these have had some success, there is still no consensus on the optimal approaches for mapping forested wetlands. To address this problem, we examined two machine learning approaches, random forest (RF) and K-nearest neighbor (KNN) algorithms, and applied these two approaches in the framework of pixel-based and object-based classifications. The RF and KNN algorithms were constructed using predictors derived from Landsat 8 imagery, Radarsat-2 advanced synthetic aperture radar (SAR), and topographical indices. The results show that the object-based classifications performed better than per-pixel classifications using the same algorithm (RF) in terms of overall accuracy, and the difference in their kappa coefficients is statistically significant (p<0.01). There were noticeable omissions for forested and herbaceous wetlands in the per-pixel classifications using the RF algorithm. As for the object-based image analysis, there were also statistically significant differences (p<0.01) in kappa coefficient between results based on the RF and KNN algorithms. The object-based classification using RF provided a more visually adequate distribution of the land cover types of interest, while the object classifications based on the KNN algorithm showed noticeable commissions for forested wetlands and omissions for agricultural land. This research demonstrates that object-based classification with RF using optical, radar, and topographical data improves land cover mapping accuracy and provides a feasible approach to discriminate forested wetlands from the other land cover types in forested areas.
Performance evaluation of an automatic MGRF-based lung segmentation approach
NASA Astrophysics Data System (ADS)
Soliman, Ahmed; Khalifa, Fahmi; Alansary, Amir; Gimel'farb, Georgy; El-Baz, Ayman
2013-10-01
The segmentation of the lung tissues in chest Computed Tomography (CT) images is an important step for developing any Computer-Aided Diagnostic (CAD) system for lung cancer and other pulmonary diseases. In this paper, we introduce a new framework for validating the accuracy of our developed Joint Markov-Gibbs based lung segmentation approach using 3D realistic synthetic phantoms. These phantoms are created using a 3D Generalized Gauss-Markov Random Field (GGMRF) model of voxel intensities with pairwise interaction to model the 3D appearance of the lung tissues. Then, the appearance of the generated 3D phantoms is simulated based on iterative minimization of an energy function that is based on the learned 3D-GGMRF image model. These 3D realistic phantoms can be used to evaluate the performance of any lung segmentation approach. The performance of our segmentation approach is evaluated using three metrics, namely, the Dice Similarity Coefficient (DSC), the modified Hausdorff distance, and the Average Volume Difference (AVD) between our segmentation and the ground truth. Our approach achieves mean values of 0.994±0.003, 8.844±2.495 mm, and 0.784±0.912 mm3, for the DSC, Hausdorff distance, and the AVD, respectively.
Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai
2015-08-10
Based on Legendre polynomial expressions and their properties, this article proposes a new approach to reconstruct the distorted wavefront under test of a laser beam over a square area from the phase difference data obtained by a radial shearing interferometry (RSI) system. Simulation and experimental results verify the reliability of the proposed method. The formula for the error propagation coefficients is deduced for the case when the phase difference data of the overlapping area contain random noise. A matrix T is proposed that can be used to evaluate the impact of high-order Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing; the magnitude of this impact can be estimated by calculating the Frobenius norm of T. In addition, the relationships between shear ratio, sampling points, polynomial terms and noise propagation coefficients, and between shear ratio, sampling points and the norm of the T matrix, are analyzed. These results provide theoretical reference and guidance for the optimal design of radial shearing interferometry systems.
A Solar Radiation Parameterization for Atmospheric Studies. Volume 15
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J. (Editor)
1999-01-01
The solar radiation parameterization (CLIRAD-SW) developed at the Goddard Climate and Radiation Branch for application to atmospheric models is described. It includes absorption by water vapor, O3, O2, CO2, clouds, and aerosols and scattering by clouds, aerosols, and gases. Depending upon the nature of the absorption, different approaches are applied to different absorbers. In the ultraviolet and visible regions, the spectrum is divided into 8 bands, and a single O3 absorption coefficient and Rayleigh scattering coefficient are used for each band. In the infrared, the spectrum is divided into 3 bands, and the k-distribution method is applied for water vapor absorption. The flux reduction due to O2 is derived from a simple function, while the flux reduction due to CO2 is derived from precomputed tables. Cloud single-scattering properties are parameterized, separately for liquid drops and ice, as functions of water amount and effective particle size. A maximum-random approximation is adopted for the overlapping of clouds at different heights. Fluxes are computed using the Delta-Eddington approximation.
Estimation of the Thermal Process in the Honeycomb Panel by a Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gusev, S. A.; Nikolaev, V. N.
2018-01-01
A new Monte Carlo method for estimating the thermal state of heat insulation containing honeycomb panels is proposed in the paper. The heat transfer in the honeycomb panel is described by a boundary value problem for a parabolic equation with a discontinuous diffusion coefficient and boundary conditions of the third kind. To obtain an approximate solution, it is proposed to smooth the diffusion coefficient. The resulting problem is then solved on the basis of a probabilistic representation: the solution is the expectation of a functional of the diffusion process corresponding to the boundary value problem. Solving the problem thus reduces to the numerical statistical modelling of a large number of trajectories of this diffusion process. The Euler method was used earlier for this purpose, but it requires a large computational effort. In this paper the method is modified by combining the Euler method with the random walk on moving spheres method. The new approach allows us to significantly reduce the computation costs.
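As a toy illustration of the probabilistic representation (constant diffusion coefficient, free space, and no third-kind boundary conditions, so much simpler than the honeycomb problem), one can estimate a solution of the heat equation by averaging over Euler-simulated paths:

import numpy as np

rng = np.random.default_rng(2)

def heat_mc(x0, t, D, u0, n_paths=100_000, n_steps=200):
    # Estimate u(x0, t) for u_t = D u_xx as E[u0(X_t)], X an Euler-simulated
    # Brownian motion started at x0 (Feynman-Kac without killing terms).
    dt = t / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_paths)
    return u0(x).mean()

# Check against the exact Gaussian solution for u0(x) = exp(-x^2):
print(heat_mc(0.0, 1.0, 0.1, lambda x: np.exp(-x**2)))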
A Geometrical Framework for Covariance Matrices of Continuous and Categorical Variables
ERIC Educational Resources Information Center
Vernizzi, Graziano; Nakai, Miki
2015-01-01
It is well known that a categorical random variable can be represented geometrically by a simplex. Accordingly, several measures of association between categorical variables have been proposed and discussed in the literature. Moreover, the standard definitions of covariance and correlation coefficient for continuous random variables have been…
Progress in low-resolution ab initio phasing with CrowdPhase
Jorda, Julien; Sawaya, Michael R.; Yeates, Todd O.
2016-03-01
Ab initio phasing by direct computational methods in low-resolution X-ray crystallography is a long-standing challenge. A common approach is to consider it as two subproblems: sampling of phase space and identification of the correct solution. While the former is amenable to a myriad of search algorithms, devising a reliable target function for the latter problem remains an open question. Here, recent developments in CrowdPhase, a collaborative online game powered by a genetic algorithm that evolves an initial population of individuals with random genetic make-up (i.e. random phases), each expressing a phenotype in the form of an electron-density map, are presented. Success relies on the ability of human players to visually evaluate the quality of these maps and, following a Darwinian survival-of-the-fittest concept, direct the search towards optimal solutions. While an initial study demonstrated the feasibility of the approach, some important crystallographic issues were overlooked for the sake of simplicity. To address these, the new CrowdPhase includes consideration of space-group symmetry, a method for handling missing amplitudes, the use of a map correlation coefficient as a quality metric and a solvent-flattening step. Lastly, the performance of this installment is discussed for two low-resolution test cases based on bona fide diffraction data.
Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza
2017-09-27
Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives in each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by treating their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter, in contrast to other computationally demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study, consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator-Activated Receptor Gamma (PPARG) gene associated with diabetes.
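The BLUP-ridge equivalence invoked here is, in its simplest form (ignoring the other fixed and random terms of the full GLMM), the standard one: for SNP coefficients b ~ N(0, \sigma_b^2 I) and residual variance \sigma^2, the BLUP of b solves a penalized least-squares problem,

\[
\hat{b} \;=\; \left( X^{\top} X + \lambda I \right)^{-1} X^{\top} y, \qquad \lambda = \frac{\sigma^2}{\sigma_b^2},
\]

so the ridge penalty is fixed by the two variance components estimated under the null model rather than by cross-validation.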
Welch, Catherine A; Petersen, Irene; Bartlett, Jonathan W; White, Ian R; Marston, Louise; Morris, Richard W; Nazareth, Irwin; Walters, Kate; Carpenter, James
2014-01-01
Most implementations of multiple imputation (MI) of missing data are designed for simple rectangular data structures ignoring temporal ordering of data. Therefore, when applying MI to longitudinal data with intermittent patterns of missing data, some alternative strategies must be considered. One approach is to divide data into time blocks and implement MI independently at each block. An alternative approach is to include all time blocks in the same MI model. With increasing numbers of time blocks, this approach is likely to break down because of co-linearity and over-fitting. The new two-fold fully conditional specification (FCS) MI algorithm addresses these issues, by only conditioning on measurements, which are local in time. We describe and report the results of a novel simulation study to critically evaluate the two-fold FCS algorithm and its suitability for imputation of longitudinal electronic health records. After generating a full data set, approximately 70% of selected continuous and categorical variables were made missing completely at random in each of ten time blocks. Subsequently, we applied a simple time-to-event model. We compared efficiency of estimated coefficients from a complete records analysis, MI of data in the baseline time block and the two-fold FCS algorithm. The results show that the two-fold FCS algorithm maximises the use of data available, with the gain relative to baseline MI depending on the strength of correlations within and between variables. Using this approach also increases plausibility of the missing at random assumption by using repeated measures over time of variables whose baseline values may be missing. PMID:24782349
Ghiglietti, Andrea; Scarale, Maria Giovanna; Miceli, Rosalba; Ieva, Francesca; Mariani, Luigi; Gavazzi, Cecilia; Paganoni, Anna Maria; Edefonti, Valeria
2018-03-22
Recently, response-adaptive designs have been proposed in randomized clinical trials to achieve ethical and/or cost advantages by using sequential accrual information collected during the trial to dynamically update the probabilities of treatment assignments. In this context, urn models, where the probability of assigning patients to treatments is interpreted as the proportion of balls of different colors available in a virtual urn, have been used as response-adaptive randomization rules. We propose the use of Randomly Reinforced Urn (RRU) models in a simulation study based on a published randomized clinical trial on the efficacy of home enteral nutrition in cancer patients after major gastrointestinal surgery. We compare results from the RRU design with those previously published for the non-adaptive approach. We also provide code, written in R, to implement the RRU design in practice. In detail, we simulate 10,000 trials based on the RRU model in three set-ups of different total sample sizes. We report information on the number of patients allocated to the inferior treatment and on the empirical power of the t-test for the treatment coefficient in the ANOVA model. We carry out a sensitivity analysis to assess the effect of different urn compositions. For each sample size, in approximately 75% of the simulation runs, the number of patients allocated to the inferior treatment by the RRU design is lower, as compared to the non-adaptive design. The empirical power of the t-test for the treatment effect is similar in the two designs.
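The paper provides an R implementation; as a language-neutral illustration, the following Python sketch simulates one two-colour RRU trial. The response distributions, urn composition and sample size are hypothetical, chosen only to show the reinforcement mechanism.

```python
# Hedged sketch of a Randomly Reinforced Urn (RRU) trial: draw a ball to
# assign a treatment, observe a non-negative response, and reinforce the
# drawn colour by that response. All parameter values are illustrative.
import numpy as np

def rru_trial(n_patients, mu=(1.0, 1.5), sigma=0.5, init=(1.0, 1.0), seed=0):
    rng = np.random.default_rng(seed)
    urn = np.array(init, dtype=float)            # balls for treatments A and B
    assigned = np.zeros(2, dtype=int)
    for _ in range(n_patients):
        arm = rng.choice(2, p=urn / urn.sum())   # draw a ball
        assigned[arm] += 1
        response = max(rng.normal(mu[arm], sigma), 0.0)  # non-negative reward
        urn[arm] += response                     # reinforce the drawn colour
    return assigned

# Over many runs, allocation drifts toward the superior arm (higher mean response)
runs = np.array([rru_trial(120, seed=s) for s in range(1000)])
print("mean share allocated to the inferior arm:", (runs[:, 0] / 120).mean())
```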
NASA Astrophysics Data System (ADS)
Most, S.; Jia, N.; Bijeljic, B.; Nowak, W.
2016-12-01
Pre-asymptotic characteristics are almost ubiquitous when analyzing solute transport processes in porous media. These pre-asymptotic aspects are caused by spatial coherence in the velocity field and by its heterogeneity. From the Lagrangian perspective of particle displacements, the causes of pre-asymptotic, non-Fickian transport are a skewed velocity distribution, statistical dependence between subsequent increments of particle positions (memory) and dependence between the x-, y- and z-components of particle increments. Valid simulation frameworks should account for these factors. We propose a particle tracking random walk (PTRW) simulation technique that can use empirical pore-space velocity distributions as input, enforces memory between subsequent random walk steps, and considers cross-dependence. Thus, it is able to simulate pre-asymptotic non-Fickian transport phenomena. Our PTRW framework contains an advection/dispersion term plus a diffusion term. The advection/dispersion term produces time series of particle increments from the velocity CDFs. These time series are equipped with memory by enforcing that the CDF values of subsequent velocities change only slightly. The latter is achieved through a random walk on the axis of CDF values between 0 and 1. The virtual diffusion coefficient for that random walk is our only fitting parameter. Cross-dependence can be enforced by constraining the random walk to certain combinations of CDF values between the three velocity components in x, y and z. We show that this modelling framework is capable of simulating non-Fickian transport by comparison with a pore-scale transport simulation, and we analyze the approach to asymptotic behavior.
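The memory mechanism described above can be sketched in a few lines. In the illustration below (not the authors' code), particle speeds are drawn by inverse-transform sampling from an empirical velocity CDF, and memory is imposed by letting the CDF value u perform a small reflected random walk on [0, 1]; D_u plays the role of the single fitting parameter ("virtual diffusion coefficient"). The lognormal velocity sample is an assumption standing in for pore-scale data.

```python
# Hedged 1D sketch of the PTRW memory mechanism via a random walk on CDF values.
import numpy as np

rng = np.random.default_rng(1)
v_sample = np.sort(rng.lognormal(mean=0.0, sigma=1.0, size=10_000))  # stand-in pore velocities

def inv_cdf(u):
    return np.quantile(v_sample, u)        # empirical inverse CDF

n_steps, dt, D_u = 5_000, 1.0, 1e-3        # D_u = virtual diffusion coefficient (fitting parameter)
u = rng.uniform()                          # current CDF value
x = 0.0
for _ in range(n_steps):
    x += inv_cdf(u) * dt                   # advective displacement at the current velocity
    u += np.sqrt(2 * D_u * dt) * rng.normal()
    u = abs(u) if u < 1 else 2 - u         # reflect at 0 and 1
    u = min(max(u, 0.0), 1.0)
print("final particle position:", x)
```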
NASA Astrophysics Data System (ADS)
Camporesi, Roberto
2011-06-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary; we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as the other more advanced approaches: the Laplace transform, linear systems, the general theory of linear equations with variable coefficients and the variation of constants method. The approach presented here can be used in a first course on differential equations for science and engineering majors.
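For concreteness, here is the standard worked instance of the method in the second-order case with distinct roots, consistent with (but not quoted from) the paper:

```latex
% Factorize L = (D - \lambda_1)(D - \lambda_2), \lambda_1 \neq \lambda_2.
% The impulsive response g solves Lg = 0 with g(0) = 0, g'(0) = 1, and gives
% a particular solution of Ly = f with zero initial data by convolution:
g(t) = \frac{e^{\lambda_1 t} - e^{\lambda_2 t}}{\lambda_1 - \lambda_2},
\qquad
y_p(t) = \int_0^t g(t - s)\, f(s)\, \mathrm{d}s .
```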
Reliability and variability of day-to-day vault training measures in artistic gymnastics.
Bradshaw, Elizabeth; Hume, Patria; Calton, Mark; Aisbett, Brad
2010-06-01
Inter-day training reliability and variability in artistic gymnastics vaulting were determined using a customised infra-red timing gate and contact mat timing system. Thirteen Australian high-performance gymnasts (eight males and five females) aged 11-23 years were assessed during two consecutive days of normal training. Each gymnast completed a number of vault repetitions per daily session. Inter-day variability of vault run-up velocities (at -18 to -12 m, -12 to -6 m, -6 to -2 m, and -2 to 0 m from the nearest edge of the beat board), and of board contact, pre-flight and table contact times, was determined using mixed-modelling statistics to account for random effects (within-subject variability) and fixed effects (gender, number of subjects, number of trials). The difference in the mean (Mdiff) and Cohen's effect sizes were calculated for reliability assessment; intra-class correlation coefficients and the coefficient of variation percentage (CV%) were calculated for variability assessment. Approach velocity (-18 to -2 m, CV = 2.4-7.8%) and board contact time (CV = 3.5%) were less variable measures, when accounting for day-to-day performance differences, than pre-flight time (CV = 17.7%) and table contact time (CV = 20.5%). While pre-flight and table contact times are relevant training measures, approach velocity and board contact time are more reliable when quantifying vaulting performance.
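As a small sketch of the headline variability measure, the coefficient of variation percentage is simply CV% = 100 * SD / mean over the repeated trials; the numbers below are made up for illustration and are not the study's data.

```python
# Hedged sketch: CV% for one gymnast's approach velocity across two days.
import numpy as np

day1 = np.array([7.42, 7.51, 7.38, 7.60])   # approach velocity (m/s), day 1 (hypothetical)
day2 = np.array([7.35, 7.58, 7.44, 7.49])   # day 2 (hypothetical)
trials = np.concatenate([day1, day2])
cv_percent = 100 * trials.std(ddof=1) / trials.mean()
print(f"CV% = {cv_percent:.1f}")
```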
NASA Astrophysics Data System (ADS)
Jorda, Helena; Koestel, John; Jarvis, Nicholas
2014-05-01
Knowledge of the near-saturated and saturated hydraulic conductivity of soil is fundamental for understanding important processes like groundwater contamination risks or runoff and soil erosion. Hydraulic conductivities are, however, difficult and time-consuming to determine by direct measurements, especially at the field scale or larger. So far, pedotransfer functions do not offer an especially reliable alternative, since published approaches exhibit poor predictive performance. In our study we aimed to build pedotransfer functions by growing random forests (a statistical learning approach) on 486 datasets from the meta-database on tension-disk infiltrometer measurements collected from peer-reviewed literature and recently presented by Jarvis et al. (2013, Influence of soil, land use and climatic factors on the hydraulic conductivity of soil. Hydrol. Earth Syst. Sci. 17(12), 5185-5195). When some data from a specific source publication were allowed to enter the training set whereas others were used for validation, the results of a 10-fold cross-validation showed reasonable coefficients of determination of 0.53 for hydraulic conductivity at 10 cm tension, K10, and 0.41 for saturated conductivity, Ks. The estimated average annual temperature and precipitation at the site were the most important predictors for K10, while bulk density and estimated average annual temperature were most important for Ks prediction. The soil organic carbon content and the diameter of the disk infiltrometer were also important for the prediction of both K10 and Ks. However, coefficients of determination were around zero when all datasets of a specific source publication were excluded from the training set and used exclusively for validation. This may indicate experimenter bias, or that better predictors have to be found, or that a larger dataset has to be used to infer meaningful pedotransfer functions for saturated and near-saturated hydraulic conductivities. More research is in progress to further elucidate this question.
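The two validation schemes contrasted above (publications mixed across folds versus leave-publication-out) map directly onto scikit-learn's KFold and GroupKFold splitters. The sketch below uses synthetic stand-ins for the meta-database features and targets; only the cross-validation structure reflects the study's design.

```python
# Hedged sketch: 10-fold CV versus leave-publication-out CV for a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score, KFold, GroupKFold

rng = np.random.default_rng(0)
n = 486
X = rng.normal(size=(n, 6))               # stand-ins: temperature, precipitation, bulk density, ...
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n)   # stand-in log K10
groups = rng.integers(0, 30, size=n)      # hypothetical source-publication labels

rf = RandomForestRegressor(n_estimators=300, random_state=0)
r2_mixed = cross_val_score(rf, X, y, cv=KFold(10, shuffle=True, random_state=0), scoring="r2")
r2_grouped = cross_val_score(rf, X, y, cv=GroupKFold(10), groups=groups, scoring="r2")
print("10-fold R2:", r2_mixed.mean(), " leave-publication-out R2:", r2_grouped.mean())
```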
Sharma, Ashok K; Srivastava, Gopal N; Roy, Ankita; Sharma, Vineet K
2017-01-01
The experimental methods for the prediction of molecular toxicity are tedious and time-consuming tasks. Thus, computational approaches could be used to develop alternative methods for toxicity prediction. We have developed a tool for the prediction of molecular toxicity along with the aqueous solubility and permeability of any molecule/metabolite. Using a comprehensive and curated set of toxin molecules as a training set, different chemical and structural features such as descriptors and fingerprints were exploited for feature selection, optimization and development of machine learning based classification and regression models. The compositional differences in the distribution of atoms were apparent between toxins and non-toxins, and hence, the molecular features were used for the classification and regression. On 10-fold cross-validation, the descriptor-based, fingerprint-based and hybrid-based classification models showed similar accuracy (93%) and Matthews correlation coefficient (0.84). The performances of all three models were comparable (Matthews correlation coefficient = 0.84-0.87) on the blind dataset. In addition, the regression-based models using descriptors as input features were also compared and evaluated on the blind dataset. The random forest based regression model for the prediction of solubility performed better (R² = 0.84) than the multi-linear regression (MLR) and partial least squares regression (PLSR) models, whereas the partial least squares based regression model for the prediction of permeability (Caco-2) performed better (R² = 0.68) in comparison to the random forest and MLR based regression models. The performance of the final classification and regression models was evaluated using two validation datasets including known toxins and commonly used constituents of health products, which attests to its accuracy. The ToxiM web server would be a highly useful and reliable tool for the prediction of toxicity, solubility, and permeability of small molecules.
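The headline classification metric above, the Matthews correlation coefficient (MCC), is available directly in scikit-learn; the toy labels below are illustrative, not the ToxiM data.

```python
# Hedged sketch: accuracy versus MCC on made-up binary predictions.
from sklearn.metrics import matthews_corrcoef, accuracy_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0, 1, 1]
print("accuracy:", accuracy_score(y_true, y_pred))
print("MCC:", matthews_corrcoef(y_true, y_pred))
```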
Recursive formulas for the partial fraction expansion of a rational function with multiple poles.
NASA Technical Reports Server (NTRS)
Chang, F.-C.
1973-01-01
The coefficients in the partial fraction expansion considered are given by Heaviside's formula. The evaluation of the coefficients involves differentiating a quotient of two polynomials. A simplified approach for the evaluation of the coefficients is discussed. Leibniz's rule is applied and a recurrence formula is derived. A coefficient can also be determined from a system of simultaneous equations. Practical methods for performing the computational operations involved in both approaches are considered.
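Heaviside's formula for a multiple pole is easy to state and check symbolically: for a pole p of multiplicity m of F(s), the coefficient of 1/(s - p)^(m - k) is (1/k!) d^k/ds^k [(s - p)^m F(s)] evaluated at s = p. The sketch below (an illustrative rational function, not one from the report) verifies this against SymPy's own partial fraction routine.

```python
# Hedged sketch of Heaviside's formula for a multiple pole, via SymPy.
import sympy as sp

s = sp.symbols('s')
F = 1 / ((s - 1)**3 * (s + 2))        # example rational function with a triple pole at s = 1

def heaviside_coeffs(F, pole, m):
    g = sp.simplify((s - pole)**m * F)
    return [sp.diff(g, s, k).subs(s, pole) / sp.factorial(k) for k in range(m)]

# coefficients of 1/(s-1)**3, 1/(s-1)**2, 1/(s-1)
print(heaviside_coeffs(F, 1, 3))      # [1/3, -1/9, 1/27]
print(sp.apart(F))                    # SymPy's own expansion, for comparison
```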
Xu, Jia; Li, Chao; Li, Yiran; Lim, Chee Wah; Zhu, Zhiwen
2018-05-04
In this paper, a nonlinear model of a single-walled carbon nanotube is developed, and the strongly nonlinear dynamic characteristics of such carbon nanotubes subjected to a random magnetic field are studied. The nonlocal effect of the microstructure is considered based on Eringen's differential constitutive model. The natural frequency of the strongly nonlinear dynamic system is obtained by the energy function method, and the drift coefficient and the diffusion coefficient are verified. The stationary probability density function of the system's dynamic response is given and the fractal boundary of the safe basin is provided. Theoretical analysis and numerical simulation show that stochastic resonance occurs when the intensity of the random magnetic field is varied. The boundary of the safe basin has fractal characteristics, and the area of the safe basin decreases when the intensity of the magnetic field permeability increases.
NASA Astrophysics Data System (ADS)
Sun, Y.; Ditmar, P.; Riva, R.
2016-12-01
Time-varying gravity field solutions of the GRACE satellite mission have enabled observation of Earth's mass transport on a monthly basis since 2002. One of the remaining challenges is how to complement these solutions with sufficiently accurate estimates of very low-degree spherical harmonic coefficients, particularly the degree-1 coefficients and C20. An absence or inaccurate estimation of these coefficients may result in strong biases in mass transport estimates. Variations in degree-1 coefficients reflect geocenter motion, and variations in the C20 coefficient describe changes in the Earth's dynamic oblateness (ΔJ2). In this study, we developed a novel methodology to estimate monthly variations in the degree-1 and C20 coefficients by combining GRACE data with oceanic mass anomalies (combination approach). Unlike the method by Swenson et al. (2008), the proposed approach exploits noise covariance information of both input datasets and thus produces stochastically optimal solutions. A numerical simulation study is carried out to verify the correctness and performance of the proposed approach. We demonstrate that solutions obtained with the proposed approach have a significantly higher quality, as compared to the method by Swenson et al. Finally, we apply the proposed approach to real monthly GRACE solutions. To evaluate the obtained results, we calculate mass transport time series over selected regions where minimal mass anomalies are expected. A clear reduction in the RMS of the mass transport time series (more than 50%) is observed there when the degree-1 and C20 coefficients obtained with the proposed approach are used. In particular, the seasonal pattern in the mass transport time series disappears almost entirely. The traditional approach (degree-1 coefficients based on Swenson et al. (2008) and C20 based on SLR data), in contrast, does not reduce that RMS or even makes it larger (e.g., over the Sahara desert). We further show that the degree-1 variations play a major role in the observed improvement. At the same time, the usage of the C20 solutions obtained with the combination approach yields a similar accuracy of mass anomaly estimates, as compared to the results based on SLR analysis. The computed degree-1 and C20 coefficients will be made publicly available.
The recurrence coefficients of semi-classical Laguerre polynomials and the fourth Painlevé equation
NASA Astrophysics Data System (ADS)
Filipuk, Galina; Van Assche, Walter; Zhang, Lun
2012-05-01
We show that the coefficients of the three-term recurrence relation for orthogonal polynomials with respect to a semi-classical extension of the Laguerre weight satisfy the fourth Painlevé equation when viewed as functions of one of the parameters in the weight. We compare different approaches to derive this result, namely, the ladder operators approach, the isomonodromy deformations approach and combining the Toda system for the recurrence coefficients with a discrete equation. We also discuss a relation between the recurrence coefficients for the Freud weight and the semi-classical Laguerre weight and show how it arises from the Bäcklund transformation of the fourth Painlevé equation.
Random matrix approach to cross correlations in financial data
NASA Astrophysics Data System (ADS)
Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene
2002-06-01
We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices
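The abstract is truncated above, but the RMT baseline it refers to is standard: eigenvalues of the correlation matrix of N purely random return series of length T fall, in the large-N limit, within the Marchenko-Pastur bounds λ± = (1 ± sqrt(N/T))², so empirical eigenvalues outside these bounds carry genuine correlations. The sketch below uses synthetic i.i.d. returns with illustrative N and T, not market data.

```python
# Hedged sketch: correlation-matrix eigenvalues versus Marchenko-Pastur bounds.
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500
returns = rng.normal(size=(N, T))          # i.i.d. stand-in for stock returns
C = np.corrcoef(returns)                   # N x N cross-correlation matrix
eig = np.linalg.eigvalsh(C)

q = N / T
lam_minus, lam_plus = (1 - np.sqrt(q))**2, (1 + np.sqrt(q))**2
print(f"MP bounds: [{lam_minus:.3f}, {lam_plus:.3f}]")
print("eigenvalues outside bounds:", np.sum((eig < lam_minus) | (eig > lam_plus)))
```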
A proposed method to investigate reliability throughout a questionnaire
2011-01-01
Background Questionnaires are used extensively in medical and health care research and depend on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect the reliability of their answers. A method is proposed for "screening" for systematic change in random error, which could assess changed reliability of answers. Methods A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of the intraclass correlation coefficient (ICC). The method was also applied to a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. Results The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in a questionnaire. This slope was proposed as an awareness measure, for assessing whether respondents provide only a random answer or one based on substantial cognitive effort. Scales from different factor structures of the Jalowiec Coping Scale had different effects on this awareness measure. Conclusions Even though the assumptions in the simulation study might be limited compared to real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from a clinical dataset indicated that the awareness measure differed between scales. PMID:21974842
Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A
2016-09-06
Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs), which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results.
NASA Astrophysics Data System (ADS)
Tang, Chuanzi; Ren, Hongmei; Bo, Li; Jing, Huang
2017-11-01
In radar target recognition, the micro-motion characteristics of a target are among the features studied by researchers worldwide; the precession cycle, in particular, is one of the important characteristics of target motion. Periodic feature extraction methods have been studied for years, but the complex shape of the target and the stacking of scattering centers lead to random fluctuations of the RCS. These random fluctuations also exhibit a certain periodicity, which has a great influence on target recognition results. To address this problem, this paper proposes a method for extracting micro-motion cycle features based on confidence coefficient evaluation criteria.
Comparing the structure of an emerging market with a mature one under global perturbation
NASA Astrophysics Data System (ADS)
Namaki, A.; Jafari, G. R.; Raei, R.
2011-09-01
In this paper we investigate the Tehran Stock Exchange (TSE) and the Dow Jones Industrial Average (DJIA) in terms of perturbed correlation matrices. There are two methods of perturbing a stock market, namely local and global perturbation. In the local method, we replace a correlation coefficient of the cross-correlation matrix with one calculated from two Gaussian-distributed time series, whereas in the global method, we reconstruct the correlation matrix after replacing the original return series with Gaussian-distributed time series. The local perturbation is just a technical study. We analyze these markets through two statistical approaches, random matrix theory (RMT) and the correlation coefficient distribution. By using RMT, we find that the largest eigenvalue is an influence common to all stocks and that this eigenvalue has a peak during financial shocks. We find there are a few correlated stocks that provide the essential robustness of the stock market, but we see that by replacing these return time series with Gaussian-distributed time series, the mean values of correlation coefficients, the largest eigenvalues of the stock markets and the fraction of eigenvalues that deviate from the RMT prediction fall sharply in both markets. By comparing these two markets, we can see that the DJIA is more sensitive to global perturbations. These findings are crucial for risk management and portfolio selection.
NASA Astrophysics Data System (ADS)
Anderson, William; Meneveau, Charles
2010-05-01
A dynamic subgrid-scale (SGS) parameterization for hydrodynamic surface roughness is developed for large-eddy simulation (LES) of atmospheric boundary layer (ABL) flow over multiscale, fractal-like surfaces. The model consists of two parts. First, a baseline model represents surface roughness at horizontal length-scales that can be resolved in the LES. This model takes the form of a force using a prescribed drag coefficient. This approach is tested in LES of flow over cubes, wavy surfaces, and ellipsoidal roughness elements for which there are detailed experimental data available. Secondly, a dynamic roughness model is built, accounting for SGS surface details of finer resolution than the LES grid width. The SGS boundary condition is based on the logarithmic law of the wall, where the unresolved roughness of the surface is modeled as the product of local root-mean-square (RMS) of the unresolved surface height and an unknown dimensionless model coefficient. This coefficient is evaluated dynamically by comparing the plane-average hydrodynamic drag at two resolutions (grid- and test-filter scale, Germano et al., 1991). The new model is tested on surfaces generated through superposition of random-phase Fourier modes with prescribed, power-law surface-height spectra. The results show that the method yields convergent results and correct trends. Limitations and further challenges are highlighted. Supported by the US National Science Foundation (EAR-0609690).
Effect of Items Direction (Positive or Negative) on the Reliability in Likert Scale. Paper-11
ERIC Educational Resources Information Center
Gul, Showkeen Bilal Ahmad; Qasem, Mamun Ali Naji; Bhat, Mehraj Ahmad
2015-01-01
In this paper an attempt was made to analyze the effect of item direction (positive or negative) on the Cronbach's alpha reliability coefficient and the split-half reliability coefficient in a Likert scale. The descriptive survey research method was used for the study, and a sample of 510 undergraduate students was selected by random sampling…
Ko, Mi-Hwa
2018-01-01
In this paper, we obtain the Hájek-Rényi inequality and, as an application, we study the strong law of large numbers for H-valued m-asymptotically almost negatively associated random vectors with mixing coefficients [Formula: see text] such that [Formula: see text].
A two-level stochastic collocation method for semilinear elliptic equations with random coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Luoping; Zheng, Bin; Lin, Guang
In this work, we propose a novel two-level discretization for solving semilinear elliptic equations with random coefficients. Motivated by the two-grid method for deterministic partial differential equations (PDEs) introduced by Xu, our two-level stochastic collocation method utilizes a two-grid finite element discretization in the physical space and a two-level collocation method in the random domain. In particular, we solve semilinear equations on a coarse mesh $\mathcal{T}_H$ with a low-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_P$) and solve linearized equations on a fine mesh $\mathcal{T}_h$ using high-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_p$). We prove that the approximated solution obtained from this method achieves the same order of accuracy as that from solving the original semilinear problem directly by the stochastic collocation method with $\mathcal{T}_h$ and $\mathcal{P}_p$. The two-level method is computationally more efficient, especially for nonlinear problems with high random dimensions. Numerical experiments are also provided to verify the theoretical results.
NASA Technical Reports Server (NTRS)
Herbst, E.; Leung, C. M.
1986-01-01
In order to incorporate large ion-polar neutral rate coefficients into existing gas phase reaction networks, it is necessary to utilize simplified theoretical treatments because of the significant number of rate coefficients needed. The authors have used two simple theoretical treatments: the locked dipole approach of Moran and Hamill for linear polar neutrals and the trajectory scaling approach of Su and Chesnavich for nonlinear polar neutrals. The former approach is suitable for linear species because in the interstellar medium these are rotationally relaxed to a large extent and the incoming charged reactants can lock their dipoles into the lowest energy configuration. The latter approach is a better approximation for nonlinear neutral species, in which rotational relaxation is normally less severe and the incoming charged reactants are not as effective at locking the dipoles. The treatments are in reasonable agreement with more detailed long range theories and predict an inverse square root dependence on kinetic temperature for the rate coefficient. Compared with the locked dipole method, the trajectory scaling approach results in rate coefficients smaller by a factor of approximately 2.5.
NASA Astrophysics Data System (ADS)
Camporesi, Roberto
2016-01-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order based on the factorization of the differential operator. The approach is elementary; we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as the other more advanced approaches: the Laplace transform, linear systems, the general theory of linear equations with variable coefficients and variation of parameters. The approach presented here can be used in a first course on differential equations for science and engineering majors.
Lu, Tao; Wang, Min; Liu, Guangying; Dong, Guang-Hui; Qian, Feng
2016-01-01
It is well known that there is a strong relationship between HIV viral load and CD4 cell counts in AIDS studies. However, the relationship between them changes during the course of treatment and may vary among individuals. During treatment, some individuals may experience terminal events such as death. Because the terminal event may be related to the individual's viral load measurements, the terminal mechanism is non-ignorable. Furthermore, there exist competing risks from multiple types of events, such as AIDS-related death and other death. Most joint models for the analysis of longitudinal-survival data developed in the literature have focused on constant coefficients and assume a symmetric distribution for the endpoints, which does not meet the needs of investigating the varying relationship between HIV viral load and CD4 cell counts in practice. We develop a mixed-effects varying-coefficient model with a skewed distribution, coupled with a cause-specific varying-coefficient hazard model with random effects, to deal with the varying relationship between the two endpoints for longitudinal competing-risks survival data. A fully Bayesian inference procedure is established to estimate the parameters in the joint model. The proposed method is applied to a multicenter AIDS cohort study. Various scenario-based models that account for partial data features are compared. Some interesting findings are presented.
Failure tolerance of spike phase synchronization in coupled neural networks
NASA Astrophysics Data System (ADS)
Jalili, Mahdi
2011-09-01
Neuronal synchronization plays an important role in the various functionality of nervous system such as binding, cognition, information processing, and computation. In this paper, we investigated how random and intentional failures in the nodes of a network influence its phase synchronization properties. We considered both artificially constructed networks using models such as preferential attachment, Watts-Strogatz, and Erdős-Rényi as well as a number of real neuronal networks. The failure strategy was either random or intentional based on properties of the nodes such as degree, clustering coefficient, betweenness centrality, and vulnerability. Hindmarsh-Rose model was considered as the mathematical model for the individual neurons, and the phase synchronization of the spike trains was monitored as a function of the percentage/number of removed nodes. The numerical simulations were supplemented by considering coupled non-identical Kuramoto oscillators. Failures based on the clustering coefficient, i.e., removing the nodes with high values of the clustering coefficient, had the least effect on the spike synchrony in all of the networks. This was followed by errors where the nodes were removed randomly. However, the behavior of the other three attack strategies was not uniform across the networks, and different strategies were the most influential in different network structure.
Shearlet-based measures of entropy and complexity for two-dimensional patterns
NASA Astrophysics Data System (ADS)
Brazhe, Alexey
2018-06-01
New spatial entropy and complexity measures for two-dimensional patterns are proposed. The approach is based on the notion of disequilibrium and is built on statistics of directional multiscale coefficients of the fast finite shearlet transform. Shannon entropy and Jensen-Shannon divergence measures are employed. Both local and global spatial complexity and entropy estimates can be obtained, thus allowing for spatial mapping of complexity in inhomogeneous patterns. The algorithm is validated in numerical experiments with a gradually decaying periodic pattern and Ising surfaces near critical state. It is concluded that the proposed algorithm can be instrumental in describing a wide range of two-dimensional imaging data, textures, or surfaces, where an understanding of the level of order or randomness is desired.
Sánchez-Hernández, Rosa M; Martínez-Tur, Vicente; González-Morales, M Gloria; Ramos, José; Peiró, José M
2009-08-01
This article examines links between disconfirmation of expectations and functional and relational service quality perceived by employees and customer satisfaction. A total of 156 employees, who were working in 52 work units, participated in the research study. In addition, 517 customers who were assisted by these work units were surveyed. Using a cross-level approach, we used a random coefficient model to test the aforementioned relationships. A strong relationship between disconfirmation of expectations and customer satisfaction was observed. Also, the results confirmed that functional service quality maintains an additional and significant association with customer satisfaction. In contrast, there were no significant relationships between relational service quality and customer satisfaction. The article concludes with a discussion of these results.
Box-Cox Mixed Logit Model for Travel Behavior Analysis
NASA Astrophysics Data System (ADS)
Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.
2010-09-01
To represent the behavior of travelers when they are deciding how to get to their destination, discrete choice models, based on random utility theory, have become one of the most widely used tools. The field in which these models were developed was halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In usual practice, the most frequently employed discrete choice models implement a fixed-coefficient utility function that is linear in the parameters. The principal aim of this paper is to present the viability of specifying utility functions with random coefficients that are nonlinear in the parameters, in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they have seldom been used in practice until recently. The specification of random coefficients, however, began with the probit and the hedonic models in the 1970s and, after a period of apparently little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model, an original contribution of the authors. It includes the estimation of the Box-Cox exponents in addition to the parameters of the random coefficient distribution. The probability of choosing an alternative is an integral that is calculated by simulation. The estimation of the model is carried out by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models that are inconsistent with real behavior have been studied with simulation experiments.
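The two ingredients combined here, a Box-Cox transform of an attribute, x^(λ) = (x^λ - 1)/λ, inside a choice probability averaged over draws of a random coefficient, can be sketched in a few lines. The following is a minimal illustration, not the authors' estimator; the attribute values, coefficient distribution and λ are hypothetical.

```python
# Hedged sketch: Monte Carlo mixed logit probabilities with a Box-Cox attribute.
import numpy as np

rng = np.random.default_rng(0)

def box_cox(x, lam):
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def simulated_probs(cost, lam, beta_mean, beta_sd, n_draws=1000):
    """Choice probabilities over alternatives for one traveller."""
    beta = rng.normal(beta_mean, beta_sd, size=n_draws)        # random coefficient draws
    v = beta[:, None] * box_cox(cost, lam)[None, :]            # utilities per draw
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)       # logit per draw
    return p.mean(axis=0)                                      # average over draws

cost = np.array([2.0, 3.5, 5.0])       # hypothetical travel costs of three modes
print(simulated_probs(cost, lam=0.5, beta_mean=-1.0, beta_sd=0.4))
```

In estimation, these simulated probabilities would be plugged into the log-likelihood and maximized over the mean, spread and λ parameters, which is the simulated maximum likelihood scheme the abstract describes.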
Freak waves in random oceanic sea states.
Onorato, M; Osborne, A R; Serio, M; Bertone, S
2001-06-18
Freak waves are very large, rare events in a random ocean wave train. Here we study their generation in a random sea state characterized by the Joint North Sea Wave Project spectrum. We assume, to cubic order in nonlinearity, that the wave dynamics are governed by the nonlinear Schrödinger (NLS) equation. We show from extensive numerical simulations of the NLS equation how freak waves in a random sea state are more likely to occur for large values of the Phillips parameter alpha and the enhancement coefficient gamma. Comparison with linear simulations is also reported.
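A common way to reproduce this setting numerically is a split-step Fourier integrator for the focusing 1D NLS. The sketch below uses illustrative parameters and a perturbed plane wave rather than the JONSWAP-spectrum initial conditions of the paper; it shows modulational instability amplifying small random-phase perturbations, the mechanism underlying freak-wave statistics.

```python
# Hedged sketch: split-step Fourier integration of i u_t + u_xx/2 + |u|^2 u = 0.
import numpy as np

n, L = 512, 50.0
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
rng = np.random.default_rng(0)
u = 1.0 + 0.01 * np.exp(1j * 2 * np.pi * rng.random(n))   # plane wave + random-phase noise

dt, n_steps = 1e-3, 20_000
for _ in range(n_steps):
    u = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(u))  # linear (dispersive) step
    u = u * np.exp(1j * np.abs(u)**2 * dt)                      # nonlinear step

print("max amplitude / mean amplitude:", np.abs(u).max() / np.abs(u).mean())
```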
Higher-order clustering in networks
NASA Astrophysics Data System (ADS)
Yin, Hao; Benson, Austin R.; Leskovec, Jure
2018-05-01
A fundamental property of complex networks is the tendency for edges to cluster. The extent of the clustering is typically quantified by the clustering coefficient, which is the probability that a length-2 path is closed, i.e., induces a triangle in the network. However, higher-order cliques beyond triangles are crucial to understanding complex networks, and the clustering behavior with respect to such higher-order network structures is not well understood. Here we introduce higher-order clustering coefficients that measure the closure probability of higher-order network cliques and provide a more comprehensive view of how the edges of complex networks cluster. Our higher-order clustering coefficients are a natural generalization of the traditional clustering coefficient. We derive several properties about higher-order clustering coefficients and analyze them under common random graph models. Finally, we use higher-order clustering coefficients to gain new insights into the structure of real-world networks from several domains.
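The raw ingredients behind these coefficients are easy to compute with networkx: the usual second-order coefficient is the closure probability of length-2 paths, and the paper's generalisation is built from closure statistics of larger cliques. The sketch below only counts the relevant quantities on a random graph for illustration; it does not reproduce the paper's exact higher-order normalisation.

```python
# Hedged sketch: standard clustering coefficients and clique counts via networkx.
import networkx as nx
from collections import Counter

G = nx.erdos_renyi_graph(200, 0.05, seed=0)
print("global clustering (transitivity):", nx.transitivity(G))
print("average local clustering:", nx.average_clustering(G))

# Clique counts: the inputs to higher-order closure probabilities.
clique_sizes = Counter(len(c) for c in nx.enumerate_all_cliques(G) if len(c) >= 3)
print("triangles:", clique_sizes.get(3, 0), " 4-cliques:", clique_sizes.get(4, 0))
```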
Capturing the Large Scale Behavior of Many Particle Systems Through Coarse-Graining
NASA Astrophysics Data System (ADS)
Punshon-Smith, Samuel
This dissertation is concerned with two areas of investigation: the first is understanding the mathematical structures behind the emergence of macroscopic laws and the effects of small scales fluctuations, the second involves the rigorous mathematical study of such laws and related questions of well-posedness. To address these areas of investigation the dissertation involves two parts: Part I concerns the theory of coarse-graining of many particle systems. We first investigate the mathematical structure behind the Mori-Zwanzig (projection operator) formalism by introducing two perturbative approaches to coarse-graining of systems that have an explicit scale separation. One concerns systems with little dissipation, while the other concerns systems with strong dissipation. In both settings we obtain an asymptotic series of `corrections' to the limiting description which are small with respect to the scaling parameter, these corrections represent the effects of small scales. We determine that only certain approximations give rise to dissipative effects in the resulting evolution. Next we apply this framework to the problem of coarse-graining the locally conserved quantities of a classical Hamiltonian system. By lumping conserved quantities into a collection of mesoscopic cells, we obtain, through a series of approximations, a stochastic particle system that resembles a discretization of the non-linear equations of fluctuating hydrodynamics. We study this system in the case that the transport coefficients are constant and prove well-posedness of the stochastic dynamics. Part II concerns the mathematical description of models where the underlying characteristics are stochastic. Such equations can model, for instance, the dynamics of a passive scalar in a random (turbulent) velocity field or the statistical behavior of a collection of particles subject to random environmental forces. First, we study general well-posedness properties of stochastic transport equation with rough diffusion coefficients. Our main result is strong existence and uniqueness under certain regularity conditions on the coefficients, and uses the theory of renormalized solutions of transport equations adapted to the stochastic setting. Next, in a work undertaken with collaborator Scott-Smith we study the Boltzmann equation with a stochastic forcing. The noise describing the forcing is white in time and colored in space and describes the effects of random environmental forces on a rarefied gas undergoing instantaneous, binary collisions. Under a cut-off assumption on the collision kernel and a coloring hypothesis for the noise coefficients, we prove the global existence of renormalized (DiPerna/Lions) martingale solutions to the Boltzmann equation for large initial data with finite mass, energy, and entropy. Our analysis includes a detailed study of weak martingale solutions to a class of linear stochastic kinetic equations. Tightness of the appropriate quantities is proved by an extension of the Skorohod theorem to non-metric spaces.
Prediction models for clustered data: comparison of a random intercept and standard regression model
Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne
2013-02-15
When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.
An introduction to multidimensional measurement using Rasch models.
Briggs, Derek C; Wilson, Mark
2003-01-01
The act of constructing a measure requires a number of important assumptions. Principal among these assumptions is that the construct is unidimensional. In practice there are many instances when the assumption of unidimensionality does not hold, and where the application of a multidimensional measurement model is both technically appropriate and substantively advantageous. In this paper we illustrate the usefulness of a multidimensional approach to measurement with the Multidimensional Random Coefficient Multinomial Logit (MRCML) model, an extension of the unidimensional Rasch model. An empirical example is taken from a collection of embedded assessments administered to 541 students enrolled in middle school science classes with a hands-on science curriculum. Student achievement on these assessments is multidimensional in nature, but can also be treated as consecutive unidimensional estimates or, as is most common, as a composite unidimensional estimate. Structural parameters are estimated for each model using ConQuest, and model fit is compared. Student achievement in science is also compared across models. The multidimensional approach has the best fit to the data, and provides more reliable estimates of student achievement than the consecutive unidimensional approach. Finally, at an interpretational level, the multidimensional approach may well provide richer information to the classroom teacher about the nature of student achievement.
Estimating individual benefits of medical or behavioral treatments in severely ill patients.
Diaz, Francisco J
2017-01-01
There is a need for statistical methods appropriate for the analysis of clinical trials from a personalized-medicine viewpoint as opposed to the common statistical practice that simply examines average treatment effects. This article proposes an approach to quantifying, reporting and analyzing individual benefits of medical or behavioral treatments to severely ill patients with chronic conditions, using data from clinical trials. The approach is a new development of a published framework for measuring the severity of a chronic disease and the benefits treatments provide to individuals, which utilizes regression models with random coefficients. Here, a patient is considered to be severely ill if the patient's basal severity is close to one. This allows the derivation of a very flexible family of probability distributions of individual benefits that depend on treatment duration and the covariates included in the regression model. Our approach may enrich the statistical analysis of clinical trials of severely ill patients because it allows investigating the probability distribution of individual benefits in the patient population and the variables that influence it, and we can also measure the benefits achieved in specific patients including new patients. We illustrate our approach using data from a clinical trial of the anti-depressant imipramine.
A new interpretation of the Keller-Segel model based on multiphase modelling.
Byrne, Helen M; Owen, Markus R
2004-12-01
In this paper, an alternative derivation and interpretation of the classical Keller-Segel model of cell migration due to random motion and chemotaxis are presented. A multiphase modelling approach is used to describe how a population of cells moves through a fluid containing a diffusible chemical to which the cells are attracted. The cells and fluid are viewed as distinct components of a two-phase mixture. The principles of mass and momentum balance are applied to each phase, and appropriate constitutive laws are imposed to close the resulting equations. A key assumption here is that the stress in the cell phase is influenced by the concentration of the diffusible chemical. By restricting attention to one-dimensional Cartesian geometry we show how the model reduces to a pair of nonlinear coupled partial differential equations for the cell density and the chemical concentration. These equations may be written in the form of the Patlak-Keller-Segel model, naturally including density-dependent nonlinearities in the cell motility coefficients. There is a direct relationship between the random motility and chemotaxis coefficients, both depending in an inter-related manner on the chemical concentration. We suggest that this may explain why many chemicals appear to stimulate both chemotactic and chemokinetic responses in cell populations. After specialising our model to describe slime mold, we then show how the functional form of the chemical potential that drives cell locomotion influences the ability of the system to generate spatial patterns. The paper concludes with a summary of the key results and a discussion of avenues for future research.
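For orientation, the classical constant-coefficient form of the Patlak-Keller-Segel system that the multiphase reduction generalises is the standard textbook one below; the paper's derived version replaces the constant motility coefficients with density-dependent ones.

```latex
% Classical (minimal) Patlak-Keller-Segel system: n = cell density, c = chemical
% concentration; D_n, chi, D_c, alpha, beta are the random-motility, chemotactic
% sensitivity, chemical diffusion, production and decay coefficients.
\begin{align}
  \frac{\partial n}{\partial t} &= \nabla \cdot \bigl( D_n \nabla n - \chi\, n\, \nabla c \bigr), \\
  \frac{\partial c}{\partial t} &= D_c\, \nabla^2 c + \alpha\, n - \beta\, c .
\end{align}
```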
Automatic Classification of Aerial Imagery for Urban Hydrological Applications
NASA Astrophysics Data System (ADS)
Paul, A.; Yang, C.; Breitkopf, U.; Liu, Y.; Wang, Z.; Rottensteiner, F.; Wallner, M.; Verworn, A.; Heipke, C.
2018-04-01
In this paper we investigate the potential of automatic supervised classification for urban hydrological applications. In particular, we contribute to runoff simulations using hydrodynamic urban drainage models. In order to assess whether the capacity of the sewers is sufficient to avoid surcharge within certain return periods, precipitation is transformed into runoff. The transformation of precipitation into runoff requires knowledge about the proportion of drainage-effective areas and their spatial distribution in the catchment area. Common simulation methods use the coefficient of imperviousness as an important parameter to estimate the overland flow, which subsequently contributes to the pipe flow. The coefficient of imperviousness is the percentage of area covered by impervious surfaces such as roofs or road surfaces. It is still common practice to assign the coefficient of imperviousness for each particular land parcel manually by visual interpretation of aerial images. Based on the classification results of this imagery, we contribute to an objective, automatic determination of the coefficient of imperviousness. In this context we compare two classification techniques: Random Forests (RF) and Conditional Random Fields (CRF). Experiments performed on an urban test area show good results and confirm that the automated derivation of the coefficient of imperviousness, apart from being more objective and, thus, reproducible, delivers more accurate results than the interactive estimation. We achieve an overall accuracy of about 85% for both classifiers. The root mean square error of the differences of the coefficient of imperviousness compared to the reference is 4.4% for the CRF-based classification and 3.8% for the RF-based classification.
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1992-01-01
Research conducted during the period from July 1991 through December 1992 is covered. A method based upon the quasi-analytical approach was developed for computing the aerodynamic sensitivity coefficients of three dimensional wings in transonic and subsonic flow. In addition, the method computes for comparison purposes the aerodynamic sensitivity coefficients using the finite difference approach. The accuracy and validity of the methods are currently under investigation.
Penalized spline estimation for functional coefficient regression models.
Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan
2010-04-01
The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application.
Measuring monotony in two-dimensional samples
NASA Astrophysics Data System (ADS)
Kachapova, Farida; Kachapov, Ilias
2010-04-01
This note introduces a monotony coefficient as a new measure of the monotone dependence in a two-dimensional sample. Some properties of this measure are derived. In particular, it is shown that the absolute value of the monotony coefficient for a two-dimensional sample is between |r| and 1, where r is Pearson's correlation coefficient for the sample; that the monotony coefficient equals 1 for any monotone increasing sample and equals -1 for any monotone decreasing sample. This article contains a few examples demonstrating that the monotony coefficient is a more accurate measure of the degree of monotone dependence for a non-linear relationship than Pearson's, Spearman's and Kendall's correlation coefficients. The monotony coefficient is a tool that can be applied to samples in order to find dependencies between random variables; it is especially useful in finding couples of dependent variables in a big dataset of many variables. Undergraduate students in mathematics and science would benefit from learning and applying this measure of monotone dependence.
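The comparison discussed above is easy to reproduce for the three classical coefficients: for a nonlinear but strictly monotone sample, the rank-based coefficients (Spearman, Kendall) reach 1 while Pearson's r stays below 1, which is the |r| ≤ |monotony coefficient| ≤ 1 regime the note targets. The sketch below uses an illustrative cubic sample; the monotony coefficient itself is not implemented, since its definition is given in the note rather than here.

```python
# Hedged sketch: classical correlation coefficients on a monotone nonlinear sample.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

x = np.linspace(0.1, 3.0, 50)
y = x**3                                    # monotone increasing, nonlinear
print("Pearson :", pearsonr(x, y)[0])       # < 1
print("Spearman:", spearmanr(x, y)[0])      # = 1.0
print("Kendall :", kendalltau(x, y)[0])     # = 1.0
```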
Firm-Related Training Tracks: A Random Effects Ordered Probit Model
ERIC Educational Resources Information Center
Groot, Wim; van den Brink, Henriette Maassen
2003-01-01
A random effects ordered response model of training is estimated to analyze the existence of training tracks and time-varying coefficients in training frequency. Two waves of a Dutch panel survey of workers are used, covering the period 1992-1996. The amount of training received by workers increased during the period 1994-1996 compared to…
Asymptotic Effect of Misspecification in the Random Part of the Multilevel Model
ERIC Educational Resources Information Center
Berkhof, Johannes; Kampen, Jarl Kennard
2004-01-01
The authors examine the asymptotic effect of omitting a random coefficient in the multilevel model and derive expressions for the change in (a) the variance components estimator and (b) the estimated variance of the fixed effects estimator. They apply the method of moments, which yields a closed form expression for the omission effect. In…
D'Agostino, M F; Sanz, J; Martínez-Castro, I; Giuffrè, A M; Sicari, V; Soria, A C
2014-07-01
Statistical analysis has been used for the first time to evaluate the dispersion of quantitative data in the solid-phase microextraction (SPME) followed by gas chromatography-mass spectrometry (GC-MS) analysis of blackberry (Rubus ulmifolius Schott) volatiles, with the aim of improving their precision. Experimental and randomly simulated data were compared using different statistical parameters (correlation coefficients, Principal Component Analysis loadings and eigenvalues). Non-random factors were shown to contribute significantly to total dispersion; groups of volatile compounds could be associated with these factors. A significant improvement in precision was achieved when considering percent concentration ratios, rather than percent values, among those blackberry volatiles with a similar dispersion behavior. As a novelty over previous studies, and to complement this main objective, non-random dispersion trends were also evidenced in data from simple blackberry model systems. Although the influence of the type of matrix on data precision was proved, the model systems did not allow a better understanding of the dispersion patterns in real samples. The approach used here was validated for the first time through the multicomponent characterization of Italian blackberries from different harvest years. Copyright © 2014 Elsevier B.V. All rights reserved.
Microwave scattering and emission from a half-space anisotropic random medium
NASA Astrophysics Data System (ADS)
Mudaliar, Saba; Lee, Jay Kyoon
1990-12-01
This paper is a sequel to an earlier paper (Lee and Mudaliar, 1988) in which the backscattering coefficients of a half-space anisotropic random medium were obtained. Here the bistatic scattering coefficients are calculated by solving the modified radiative transfer equations under a first-order approximation. The effects of multiple scattering on the results are observed. Emissivities are calculated and compared with those obtained using the Born approximation (single scattering). Several interesting properties of the model are highlighted using numerical examples. Finally, as an application, the theory is used to interpret passive remote sensing data of multiyear sea ice in the microwave frequency range. Quite close agreement between theoretical prediction and the measured data is found.
Giant mesoscopic fluctuations of the elastic cotunneling thermopower of a single-electron transistor
NASA Astrophysics Data System (ADS)
Vasenko, A. S.; Basko, D. M.; Hekking, F. W. J.
2015-02-01
We study the thermoelectric transport of a small metallic island weakly coupled to two electrodes by tunnel junctions. In the Coulomb blockade regime, in the case when the ground state of the system corresponds to an even number of electrons on the island, the main mechanism of electron transport at the lowest temperatures is elastic cotunneling. In this regime, the transport coefficients strongly depend on the realization of the random impurity potential or the shape of the island. Using random-matrix theory, we calculate the thermopower and the thermoelectric kinetic coefficient and study the statistics of their mesoscopic fluctuations in the elastic cotunneling regime. The fluctuations of the thermopower turn out to be much larger than the average value.
Time domain simulation of the response of geometrically nonlinear panels subjected to random loading
NASA Technical Reports Server (NTRS)
Moyer, E. Thomas, Jr.
1988-01-01
The response of composite panels subjected to random pressure loads large enough to cause geometrically nonlinear responses is studied. A time domain simulation is employed to solve the equations of motion. An adaptive time stepping algorithm is employed to minimize intermittent transients. A modified algorithm for the prediction of response spectral density is presented which predicts smooth spectral peaks for discrete time histories. Results are presented for a number of input pressure levels and damping coefficients. Response distributions are calculated and compared with the analytical solution of the Fokker-Planck equations. RMS response is reported as a function of input pressure level and damping coefficient. Spectral densities are calculated for a number of examples.
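The paper's modified spectral-density algorithm is not reproduced in the abstract; the sketch below only shows the standard ingredients it builds on, namely estimating a smooth response spectral density from a discrete time history with Welch's method and recovering the RMS response from the PSD (the oscillator, its 120 Hz resonance, and all parameter values are illustrative assumptions).

    import numpy as np
    from scipy import signal

    fs = 2048.0                                       # sampling rate [Hz], assumed
    t = np.arange(0, 30.0, 1.0 / fs)
    rng = np.random.default_rng(1)
    # toy "panel response": a lightly damped resonance driven by white noise
    b, a = signal.iirpeak(w0=120.0, Q=25.0, fs=fs)
    resp = signal.lfilter(b, a, rng.normal(size=t.size))

    f, psd = signal.welch(resp, fs=fs, nperseg=4096)  # smooth spectral peaks
    rms_from_psd = np.sqrt(np.sum(psd) * (f[1] - f[0]))  # Parseval check
    print(f"RMS (time domain)  : {np.std(resp):.4f}")
    print(f"RMS (from the PSD) : {rms_from_psd:.4f}")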
On S.N. Bernstein's derivation of Mendel's Law and 'rediscovery' of the Hardy-Weinberg distribution.
Stark, Alan; Seneta, Eugene
2012-04-01
Around 1923 the soon-to-be famous Soviet mathematician and probabilist Sergei N. Bernstein started to construct an axiomatic foundation of a theory of heredity. He began from the premise of stationarity (constancy of type proportions) from the first generation of offspring. This led him to derive the Mendelian coefficients of heredity. It appears that he had no direct influence on the subsequent development of population genetics. A basic assumption of Bernstein was that parents coupled randomly to produce offspring. This paper shows that a simple model of non-random mating, which nevertheless embodies a feature of the Hardy-Weinberg Law, can produce Mendelian coefficients of heredity while maintaining the population distribution. How W. Johannsen's monograph influenced Bernstein is discussed.
NASA Astrophysics Data System (ADS)
Paço, Teresa A.; Pôças, Isabel; Cunha, Mário; Silvestre, José C.; Santos, Francisco L.; Paredes, Paula; Pereira, Luís S.
2014-11-01
The estimation of crop evapotranspiration (ETc) from the reference evapotranspiration (ETo) and a standard crop coefficient (Kc) in olive orchards requires that the latter be adjusted to planting density and height. The dual Kc approach may be the best solution because the basal crop coefficient Kcb represents plant transpiration while the evaporation coefficient reproduces the soil coverage conditions and the frequency of wettings. To support the related computations for a super-intensive olive orchard, the model SIMDualKc was adopted because it uses the dual Kc approach. Alternatively, to consider the physical characteristics of the vegetation, the satellite-based surface energy balance model METRIC™ - Mapping EvapoTranspiration at high Resolution using Internalized Calibration - was used to estimate ETc and to derive crop coefficients. Both approaches were compared in this study. The SIMDualKc model was calibrated and validated using sap-flow measurements of transpiration for 2011 and 2012; eddy covariance estimates of ETc were also used. METRIC™ was applied to Landsat images from 2011 and 2012, with adaptations required to parameterize it for incomplete-cover woody crops. ETc obtained from the two approaches was similar, and the crop coefficients derived from both models showed similar patterns throughout the year. Although the two models use distinct approaches, their results are comparable and they are complementary in spatial and temporal scales.
Raevsky, O A; Grigor'ev, V J; Raevskaja, O E; Schaper, K-J
2006-06-01
QSPR analyses of a data set containing experimental partition coefficients in the three systems octanol-water, water-gas, and octanol-gas for 98 chemicals have shown that any partition coefficient in the system 'gas phase/octanol/water' can be calculated by three different approaches: (1) from the experimental partition coefficients obtained in the corresponding two other subsystems; however, in many cases these data may not be available. (2) A traditional QSPR analysis based on, e.g., HYBOT descriptors (hydrogen-bond acceptor and donor factors, ΣCa and ΣCd, together with polarisability α, a steric bulk-effect descriptor) supplemented with substructural indicator variables. (3) A very promising approach that combines the similarity concept with QSPR based on HYBOT descriptors. In this approach, the observed partition coefficients of the structurally nearest neighbours of a compound of interest are used, together with contributions arising from the differences in α, ΣCa, and ΣCd values between the compound of interest and its nearest neighbour(s). In this investigation, highly significant relationships were obtained by approaches (1) and (3) for the octanol/gas-phase partition coefficient.
Covariate-free and Covariate-dependent Reliability.
Bentler, Peter M
2016-12-01
Classical test theory reliability coefficients are said to be population specific. Reliability generalization, a meta-analysis method, is the main procedure for evaluating the stability of reliability coefficients across populations. A new approach is developed to evaluate the degree of invariance of reliability coefficients to population characteristics. Factor or common variance of a reliability measure is partitioned into parts that are, and are not, influenced by control variables, resulting in a partition of reliability into a covariate-dependent and a covariate-free part. The approach can be implemented in a single sample and can be applied to a variety of reliability coefficients.
Standards for Standardized Logistic Regression Coefficients
ERIC Educational Resources Information Center
Menard, Scott
2011-01-01
Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
End-point detection in potentiometric titration by continuous wavelet transform.
Jakubowska, Małgorzata; Baś, Bogusław; Kubiak, Władysław W
2009-10-15
The aim of this work was the construction of a new wavelet function and verification that a continuous wavelet transform with a specially defined, dedicated mother wavelet is a useful tool for precise detection of the end-point in a potentiometric titration. The proposed algorithm does not require any initial information about the nature or type of the analyte and/or the shape of the titration curve. Signal imperfections, as well as random noise or spikes, have no influence on the operation of the procedure. The new algorithm was optimized using simulated curves, and experimental data were considered next. In the case of well-shaped and noise-free titration data, the proposed method gives the same accuracy and precision as commonly used algorithms; in the case of noisy or badly shaped curves, the presented approach performs well (relative error mainly below 2% and coefficients of variability below 5%) while traditional procedures fail. The proposed algorithm may therefore be useful in the interpretation of experimental data and in the automation of typical titration analyses, especially when random noise interferes with the analytical signal.
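The dedicated mother wavelet itself is not given in the abstract, so the sketch below substitutes a generic first-derivative-of-Gaussian wavelet: convolving a noisy, spike-contaminated S-shaped titration curve with it at a moderate scale suppresses both the noise and the spike, and the maximum absolute coefficient marks the inflection, i.e. the end-point (all curve parameters are assumptions for illustration).

    import numpy as np

    def dog1(width, scale):
        """First derivative of a Gaussian, sampled on a centred grid."""
        x = np.arange(width) - (width - 1) / 2.0
        return -x / scale**2 * np.exp(-x**2 / (2.0 * scale**2))

    # simulated potentiometric titration curve: sigmoid + noise + one spike
    rng = np.random.default_rng(2)
    v = np.linspace(0.0, 20.0, 400)                 # titrant volume [mL]
    emf = 300.0 / (1.0 + np.exp(-4.0 * (v - 12.3))) + rng.normal(0.0, 3.0, v.size)
    emf[150] += 40.0                                # an artefact spike

    scale = 12                                      # in samples; smooths noise and spikes
    pad = 4 * scale                                 # edge padding avoids border artefacts
    padded = np.pad(emf, pad, mode="edge")
    coeffs = np.convolve(padded, dog1(8 * scale, scale), mode="same")[pad:-pad]
    endpoint = v[np.argmax(np.abs(coeffs))]
    print(f"detected end-point: {endpoint:.2f} mL (true value 12.30 mL)")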
Electromagnetic mixing laws: A supersymmetric approach
NASA Astrophysics Data System (ADS)
Niez, J. J.
2010-02-01
In this article we address the old problem of finding the effective dielectric constant of materials described either by a local random dielectric constant or by a set of non-overlapping spherical inclusions randomly dispersed in a host. We use a unified theoretical framework in which all the most important Electromagnetic Mixing Laws (EML) can be recovered as the first iterative step of a family of results, thus opening the way to future improvements through refinements of the approximation schemes. When the material is described by a set of immersed inclusions characterized by their spatial correlation functions, we exhibit an EML which, relying on a minimal approximation scheme, does not come from the multiple scattering paradigm. It consists of the pure Hori-Yonezawa formula corrected by a power series in the inclusion density. The coefficients of the latter, given as sums of standard diagrams, are recast into electromagnetic quantities whose calculation is numerically tractable thanks to codes available on the web. The methods used and developed in this work are generic and can be applied in a large variety of areas ranging from mechanics to thermodynamics.
Chang, Song; Jia, Liangding; Takeuchi, Riki; Cai, Yahua
2014-07-01
[Correction notice: some information about the data used in the article and a citation were not included; the details of the corrections are provided.] This study uses 3-level, 2-wave time-lagged data from a random sample of 55 high-technology firms, 238 teams, and 1,059 individuals in China to investigate a multilevel combinational model of employee creativity. First, we hypothesize that firm (macrolevel) high-commitment work systems are conducive to individual (microlevel) creativity. Furthermore, we hypothesize that this positive cross-level main impact may be combined with middle-level (mesolevel) factors, including team cohesion and team task complexity, such that the positive impact of firm high-commitment work systems on individual creativity is stronger when team cohesion is high and the team task more complex. The findings from random coefficient modeling analyses provide support for our hypotheses. These results offer novel insight into how firms can use macrolevel and mesolevel contextual variables in a systematic manner to promote employee creativity in the workplace, despite its complex nature.
Baird, Mark E
2003-10-01
The size, shape, and absorption coefficient of a microalgal cell determine, to a first-order approximation, the rate at which light is absorbed by the cell. The rate of absorption determines the maximum amount of energy available for photosynthesis, and can be used to calculate the attenuation of light through the water column, including the effect of packaging pigments within discrete particles. In this paper, numerical approximations are made of the mean absorption cross-section of randomly oriented cells, aA. The shapes investigated are spheroids, rectangular prisms with a square base, cylinders, cones, and double cones with aspect ratios of 0.25, 0.5, 1, 2, and 4. The results of the numerical simulations are fitted to a modified sigmoid curve, taking advantage of three analytical solutions. The results are presented in a non-dimensionalised format and are independent of size. A simple approximation using a rectangular hyperbolic curve is also given, and an approach for obtaining the upper and lower bounds of aA for more complex shapes is outlined.
El-Bassel, Nabila; Jemmott, John B; Bellamy, Scarlett L; Pequegnat, Willo; Wingood, Gina M; Wyatt, Gail E; Landis, J Richard; Remien, Robert H
2016-06-01
Targeting couples is a promising behavioral HIV risk-reduction strategy, but the mechanisms underlying the effects of such interventions are unknown. We report secondary analyses testing whether Social-Cognitive-Theory variables mediated the Eban HIV-risk-reduction intervention's effects on condom-use outcomes. In a multisite randomized controlled trial conducted in four US cities, 535 African American HIV-serodiscordant couples were randomized to the Eban HIV risk-reduction intervention or attention-matched control intervention. Outcomes were proportion condom-protected sex, consistent condom use, and frequency of unprotected sex measured pre-, immediately post-, and 6 and 12 months post-intervention. Potential mediators included Social-Cognitive-Theory variables: outcome expectancies and self-efficacy. Mediation analyses using the product-of-coefficients approach in a generalized-estimating-equations framework revealed that condom-use outcome expectancy, partner-reaction outcome expectancy, intention, self-efficacy, and safer-sex communication improved post-intervention and mediated intervention-induced improvements in condom-use outcomes. These findings underscore the importance of targeting outcome expectancies, self-efficacy, and safer-sex communication in couples-level HIV risk-reduction interventions.
Mixed models, linear dependency, and identification in age-period-cohort models.
O'Brien, Robert M
2017-07-20
This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
Random walks on cubic lattices with bond disorder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ernst, M.H.; van Velthoven, P.F.J.
1986-12-01
The authors consider diffusive systems with static disorder, such as Lorentz gases, lattice percolation, ants in a labyrinth, termite problems, random resistor networks, etc. In the case of diluted randomness the authors can apply the methods of kinetic theory to obtain systematic expansions of dc and ac transport properties in powers of the impurity concentration c. The method is applied to a hopping model on a d-dimensional cubic lattice having two types of bonds with conductivity σ and σ₀ = 1, with concentrations c and 1-c, respectively. For the square lattice the authors explicitly calculate the diffusion coefficient D(c,σ) as a function of c, to O(c²) terms included, for different ratios of the bond conductivity σ. The probability of return at long times is given by P₀(t) ≈ (4πD(c,σ)t)^(-d/2), which is determined by the diffusion coefficient of the disordered system.
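The kinetic-theory expansion itself is beyond the abstract; a direct "blind ant" Monte Carlo estimate of D(c, σ) for the two-bond hopping model is sketched below as one plausible reading of the model (walker counts, step counts, and the hop rule are illustrative assumptions: the ant attempts a random neighbour each step and crosses the bond with probability equal to its conductivity, so the pure lattice gives D = 1/4).

    import numpy as np

    def estimate_D(c, sigma, walkers=400, steps=2000, seed=3):
        """Blind-ant random walk on a square lattice with quenched bond disorder."""
        rng = np.random.default_rng(seed)
        moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
        bonds = {}                                  # lazily generated disorder
        r2 = 0.0
        for _ in range(walkers):
            x = y = 0
            for _ in range(steps):
                dx, dy = moves[rng.integers(4)]
                a, b = (x, y), (x + dx, y + dy)
                key = (min(a, b), max(a, b))        # one conductivity per bond
                if key not in bonds:
                    bonds[key] = sigma if rng.random() < c else 1.0
                if rng.random() < bonds[key]:       # blind ant: may stay put
                    x, y = x + dx, y + dy
            r2 += x * x + y * y
        return r2 / walkers / (4.0 * steps)         # <r^2> = 4 D t in 2-D

    print("D(c=0.0, sigma=1.0) ~", estimate_D(0.0, 1.0))   # ~0.25, pure lattice
    print("D(c=0.2, sigma=0.1) ~", estimate_D(0.2, 0.1))   # dilute disorder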
Müller-Unterberg, Maarit; Wallmann, Sandra; Distl, Ottmar
2017-10-18
The Black Forest Draught horse (BFDH) is an endangered German coldblood breed with its origin in the area of the Black Forest in South Germany. In this retrospective study, the influence of the inbreeding coefficient on foaling rates was investigated using records from ten breeding seasons. Due to the small population size of BFDH, the level of inbreeding is increasing and may have an effect on foaling rates. The data of the present study included all coverings reported for 1024 BFDH mares in the years 2001-2009. These mares were covered by 32 BFDH stallions from the State Stud Marbach. Data from 4534 estrus cycles were used to calculate the per-cycle foaling rate (CFR). Pedigree data contained all studbook data back to the foundation of the breed as early as 1836. The level of inbreeding of the mare, stallion and expected foal, along with other systematic effects on CFR, was analysed using a generalized linear mixed model approach. Stallion was employed as a random effect. Systematic fixed effects were month of mating, mating type, age of the mare and stallion, reproductive status of the mare and stallion line of the mare. Inbreeding coefficients of the stallion, mare and expected foal were modelled as linear covariates. The average CFR was 40.9%. The mean inbreeding coefficients of the mares, stallions and expected foals were 7.46, 7.70 and 9.66%, respectively. Mating type, age of the mare, reproductive status of the mare and stallion line of the mare had significant effects. The results showed that mating type, stallion line of the mare, sire, and age and reproductive status of the mare exerted the largest influences on CFR in BFDH. Inbreeding coefficients of the stallion, mare and expected foal were not significantly related to CFR.
Ford, R M; Lauffenburger, D A
1992-05-01
An individual cell-based mathematical model of Rivero et al. provides a framework for determining values of the chemotactic sensitivity coefficient chi 0, an intrinsic cell population parameter that characterizes the chemotactic response of bacterial populations. This coefficient can theoretically relate the swimming behavior of individual cells to the resulting migration of a bacterial population. When this model is applied to the commonly used capillary assay, an approximate solution can be obtained for a particular range of chemotactic strengths yielding a very simple analytical expression for estimating the value of chi 0, [formula: see text] from measurements of cell accumulation in the capillary, N, when attractant uptake is negligible. A0 and A infinity are the dimensionless attractant concentrations initially present at the mouth of the capillary and far into the capillary, respectively, which are scaled by Kd, the effective dissociation constant for receptor-attractant binding. D is the attractant diffusivity, and mu is the cell random motility coefficient. NRM is the cell accumulation in the capillary in the absence of an attractant gradient, from which mu can be determined independently as mu = (pi/4t)(NRM/pi r2bc)2, with r the capillary tube radius and bc the bacterial density initially in the chamber. When attractant uptake is significant, a slightly more involved procedure requiring a simple numerical integration becomes necessary. As an example, we apply this approach to quantitatively characterize, in terms of the chemotactic sensitivity coefficient chi 0, data from Terracciano indicating enhanced chemotactic responses of Escherichia coli to galactose when cultured under growth-limiting galactose levels in a chemostat.
Statistical signatures of a targeted search by bacteria
NASA Astrophysics Data System (ADS)
Jashnsaz, Hossein; Anderson, Gregory G.; Pressé, Steve
2017-12-01
Chemoattractant gradients are rarely well-controlled in nature and recent attention has turned to bacterial chemotaxis toward typical bacterial food sources such as food patches or even bacterial prey. In environments with localized food sources reminiscent of a bacterium's natural habitat, striking phenomena, such as the volcano effect or banding, have been predicted or expected to emerge from chemotactic models. However, in practice, from limited bacterial trajectory data it is difficult to distinguish targeted searches from an untargeted search strategy for food sources. Here we use a theoretical model to identify statistical signatures of a targeted search toward point food sources, such as prey. Our model is constructed on the basis that bacteria use temporal comparisons to bias their random walk, exhibit finite memory and are subject to random (Brownian) motion as well as signaling noise. The advantage of using a stochastic model-based approach is that a stochastic model may be parametrized from individual stochastic bacterial trajectories but may then be used to generate a very large number of simulated trajectories to explore average behaviors obtained from stochastic search strategies. For example, our model predicts that a bacterium's diffusion coefficient increases as it approaches the point source and that, in the presence of multiple sources, bacteria may take substantially longer to locate their first source, giving the impression of an untargeted search strategy.
Using Bioinformatic Approaches to Identify Pathways Targeted by Human Leukemogens
Thomas, Reuben; Phuong, Jimmy; McHale, Cliona M.; Zhang, Luoping
2012-01-01
We have applied bioinformatic approaches to identify pathways common to chemical leukemogens and to determine whether leukemogens could be distinguished from non-leukemogenic carcinogens. From all known and probable carcinogens classified by IARC and NTP, we identified 35 carcinogens that were associated with leukemia risk in human studies and 16 non-leukemogenic carcinogens. Using data on gene/protein targets available in the Comparative Toxicogenomics Database (CTD) for 29 of the leukemogens and 11 of the non-leukemogenic carcinogens, we analyzed for enrichment of all 250 human biochemical pathways in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. The top pathways targeted by the leukemogens included metabolism of xenobiotics by cytochrome P450, glutathione metabolism, neurotrophin signaling pathway, apoptosis, MAPK signaling, Toll-like receptor signaling and various cancer pathways. The 29 leukemogens formed 18 distinct clusters comprising 1 to 3 chemicals that did not correlate with known mechanism of action or with structural similarity as determined by 2D Tanimoto coefficients in the PubChem database. Unsupervised clustering and one-class support vector machines, based on the pathway data, were unable to distinguish the 29 leukemogens from 11 non-leukemogenic known and probable IARC carcinogens. However, using two-class random forests to estimate leukemogen and non-leukemogen patterns, we estimated a 76% chance of distinguishing a random leukemogen/non-leukemogen pair from each other. PMID:22851955
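The CTD/KEGG pathway matrix is not included here, so the sketch below uses synthetic stand-in features only to illustrate the two-class random-forest step and a pairwise leukemogen/non-leukemogen discrimination estimate in the spirit of the 76% figure (the sample sizes 29 and 11 follow the abstract; everything else is assumed).

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(4)
    n_pathways = 250                               # KEGG pathways as features
    X_leuk = rng.random((29, n_pathways)) + 0.15   # 29 leukemogens, shifted class
    X_non = rng.random((11, n_pathways))           # 11 non-leukemogenic carcinogens
    X = np.vstack([X_leuk, X_non])
    y = np.array([1] * 29 + [0] * 11)

    rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    rf.fit(X, y)
    # chance that a random leukemogen outscores a random non-leukemogen (out-of-bag)
    p = rf.oob_decision_function_[:, 1]
    pairs = [(i, j) for i in range(29) for j in range(29, 40)]
    print("OOB pairwise discrimination:",
          np.mean([p[i] > p[j] for i, j in pairs]).round(2))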
Gas Chromatography Data Classification Based on Complex Coefficients of an Autoregressive Model
Zhao, Weixiang; Morgan, Joshua T.; Davis, Cristina E.
2008-01-01
This paper introduces autoregressive (AR) modeling as a novel method to classify outputs from gas chromatography (GC). The inverse Fourier transformation was applied to the original sensor data, and then an AR model was applied to the transformed data to generate AR model complex coefficients. This series of coefficients effectively contains a compressed version of all of the information in the original GC signal output. We applied this method to chromatograms resulting from proliferating bacteria species grown in culture. Three types of neural networks were used to classify the AR coefficients: backward propagating neural network (BPNN), radial basis function-principal component analysis (RBF-PCA) approach, and radial basis function-partial least squares regression (RBF-PLSR) approach. This exploratory study demonstrates the feasibility of using complex root coefficient patterns to distinguish various classes of experimental data, such as those from the different bacteria species. This cognition approach also proved to be robust and potentially useful for freeing us from time alignment of GC signals.
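A minimal sketch of that pipeline under stated assumptions (AR order p = 8 and the synthetic two-peak chromatogram are illustrative; the paper's preprocessing details are not reproduced): inverse-FFT the chromatogram into a complex series, fit an AR model by complex least squares, and use the complex coefficients as a compact signature for a downstream classifier.

    import numpy as np

    def ar_signature(chromatogram, p=8):
        """Complex AR(p) coefficients of the inverse FFT of a GC signal."""
        z = np.fft.ifft(chromatogram)               # complex representation
        # AR(p): z[n] ~ sum_k a[k] z[n-k]; solve by complex least squares
        A = np.array([z[n - p:n][::-1] for n in range(p, z.size)])
        b = z[p:]
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return coeffs                               # p complex features per signal

    rng = np.random.default_rng(5)
    t = np.linspace(0.0, 60.0, 600)
    peaks = sum(h * np.exp(-(t - c) ** 2 / 0.5) for c, h in [(12, 1.0), (30, 0.6)])
    print(np.round(ar_signature(peaks + 0.01 * rng.normal(size=t.size)), 3))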
Forster, Jeri E.; MaWhinney, Samantha; Ball, Erika L.; Fairclough, Diane
2011-01-01
Dropout is common in longitudinal clinical trials and when the probability of dropout depends on unobserved outcomes even after conditioning on available data, it is considered missing not at random and therefore nonignorable. To address this problem, mixture models can be used to account for the relationship between a longitudinal outcome and dropout. We propose a Natural Spline Varying-coefficient mixture model (NSV), which is a straightforward extension of the parametric Conditional Linear Model (CLM). We assume that the outcome follows a varying-coefficient model conditional on a continuous dropout distribution. Natural cubic B-splines are used to allow the regression coefficients to semiparametrically depend on dropout and inference is therefore more robust. Additionally, this method is computationally stable and relatively simple to implement. We conduct simulation studies to evaluate performance and compare methodologies in settings where the longitudinal trajectories are linear and dropout time is observed for all individuals. Performance is assessed under conditions where model assumptions are both met and violated. In addition, we compare the NSV to the CLM and a standard random-effects model using an HIV/AIDS clinical trial with probable nonignorable dropout. The simulation studies suggest that the NSV is an improvement over the CLM when dropout has a nonlinear dependence on the outcome. PMID:22101223
Time-varying SMART design and data analysis methods for evaluating adaptive intervention effects.
Dai, Tianjiao; Shete, Sanjay
2016-08-30
In a standard two-stage SMART design, the intermediate response to the first-stage intervention is measured at a fixed time point for all participants. Subsequently, responders and non-responders are re-randomized and the final outcome of interest is measured at the end of the study. To reduce the side effects and costs associated with first-stage interventions in a SMART design, we proposed a novel time-varying SMART design in which individuals are re-randomized to the second-stage interventions as soon as a pre-fixed intermediate response is observed. With this strategy, the duration of the first-stage intervention will vary. We developed a time-varying mixed effects model and a joint model that allows for modeling the outcomes of interest (intermediate and final) and the random durations of the first-stage interventions simultaneously. The joint model borrows strength from the survival sub-model in which the duration of the first-stage intervention (i.e., time to response to the first-stage intervention) is modeled. We performed a simulation study to evaluate the statistical properties of these models. Our simulation results showed that the two modeling approaches were both able to provide good estimations of the means of the final outcomes of all the embedded interventions in a SMART. However, the joint modeling approach was more accurate for estimating the coefficients of first-stage interventions and time of the intervention. We conclude that the joint modeling approach provides more accurate parameter estimates and a higher estimated coverage probability than the single time-varying mixed effects model, and we recommend the joint model for analyzing data generated from time-varying SMART designs. In addition, we showed that the proposed time-varying SMART design is cost-efficient and equally effective in selecting the optimal embedded adaptive intervention as the standard SMART design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com
Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging because of the complex uncertainty and multiple physical scales in the models. To address this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To treat heterogeneity properties and multiscale features in the models effectively, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computational complexity.
Effective Stochastic Model for Reactive Transport
NASA Astrophysics Data System (ADS)
Tartakovsky, A. M.; Zheng, B.; Barajas-Solano, D. A.
2017-12-01
We propose an effective stochastic advection-diffusion-reaction (SADR) model. Unlike traditional advection-dispersion-reaction models, the SADR model describes mechanical and diffusive mixing as two separate processes. In the SADR model, the mechanical mixing is driven by a random advective velocity with variance given by the coefficient of mechanical dispersion. The diffusive mixing is modeled as Fickian diffusion with an effective diffusion coefficient. Both coefficients are given in terms of the Peclet number (Pe) and the coefficient of molecular diffusion. We use experimental results to demonstrate that, for transport and bimolecular reactions in porous media, the SADR model is significantly more accurate than the traditional dispersion model, which overestimates the mass of the reaction product by as much as 25%.
A Riemann-Hilbert approach to asymptotic questions for orthogonal polynomials
NASA Astrophysics Data System (ADS)
Deift, P.; Kriecherbauer, T.; McLaughlin, K. T.-R.; Venakides, S.; Zhou, X.
2001-08-01
A few years ago the authors introduced a new approach to study asymptotic questions for orthogonal polynomials. In this paper we give an overview of our method and review the results which have been obtained in Deift et al. (Internat. Math. Res. Notices (1997) 759, Comm. Pure Appl. Math. 52 (1999) 1491, 1335), Deift (Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach, Courant Lecture Notes, Vol. 3, New York University, 1999), Kriecherbauer and McLaughlin (Internat. Math. Res. Notices (1999) 299) and Baik et al. (J. Amer. Math. Soc. 12 (1999) 1119). We mainly consider orthogonal polynomials with respect to weights on the real line which are either (1) Freud-type weights dα(x) = e^(−Q(x)) dx (Q a polynomial or Q(x) = |x|^β, β > 0), or (2) varying weights dα_n(x) = e^(−nV(x)) dx (V analytic, lim_(x→∞) V(x)/log x = ∞). We obtain Plancherel-Rotach-type asymptotics in the entire complex plane as well as asymptotic formulae with error estimates for the leading coefficients, for the recurrence coefficients, and for the zeros of the orthogonal polynomials. Our proof starts from an observation of Fokas et al. (Comm. Math. Phys. 142 (1991) 313) that the orthogonal polynomials can be determined as solutions of certain matrix-valued Riemann-Hilbert problems. We analyze the Riemann-Hilbert problems by a steepest-descent-type method introduced by Deift and Zhou (Ann. Math. 137 (1993) 295) and further developed in Deift and Zhou (Comm. Pure Appl. Math. 48 (1995) 277) and Deift et al. (Proc. Nat. Acad. Sci. USA 95 (1998) 450). A crucial step in our analysis is the use of the well-known equilibrium measure which describes the asymptotic distribution of the zeros of the orthogonal polynomials.
Willingness-to-pay for schistosomiasis-related health outcomes in Kenya.
Kirigia, J M; Sambo, L G; Kainyu, L H
2000-01-01
Cost-benefit analysis (CBA) provides a framework for identifying, quantifying, and valuing in monetary terms all the important costs and consequences to society of competing disease interventions. Thus, CBA requires that the impacts of schistosomiasis interventions on beneficiaries' health be valued in monetary terms. Economic theory requires the use of the willingness-to-pay (WTP) approach in the valuation of changes in health as a result of intervention; it is the only approach consistent with the potential Pareto improvement principle, and hence with CBA. The present study developed a health outcome measure and tested its operational feasibility. Contingent valuations for a certain return to normal health from various health states, and for remaining in one's current health state, were elicited through direct interviews of randomly selected rice farmers, teachers, and health personnel in Kenya. The WTP to avoid the risk of advancing to the next more severe state seemed to be higher than the WTP for a return to normal health. Generally, there was a significant difference between the average WTP values of the farmer, teacher and health personnel populations. The gender and occupation variable coefficients were positive and highly significant in all regressions. The coefficients of the other explanatory variables were generally not statistically significant, indicating that the medical expenses, anxiety cost, loss of earnings, and loss of work time implied in the various health state descriptions did not have a significant effect on respondents' expressed WTP values. The latter finding shows that more research is needed to identify the other determinants (besides gender and occupation) of expressed WTP values in Africa. This study has demonstrated that it is possible to elicit coherent WTP values in economically under-developed countries. Further empirical work is clearly needed to address at least the validity and reliability of the contingent valuation approach and its measurements in Africa.
Analysis of a Split-Plot Experimental Design Applied to a Low-Speed Wind Tunnel Investigation
NASA Technical Reports Server (NTRS)
Erickson, Gary E.
2013-01-01
A procedure to analyze a split-plot experimental design featuring two input factors, two levels of randomization, and two error structures in a low-speed wind tunnel investigation of a small-scale model of a fighter airplane configuration is described in this report. Standard commercially-available statistical software was used to analyze the test results obtained in a randomization-restricted environment often encountered in wind tunnel testing. The input factors were differential horizontal stabilizer incidence and the angle of attack. The response variables were the aerodynamic coefficients of lift, drag, and pitching moment. Using split-plot terminology, the whole plot, or difficult-to-change, factor was the differential horizontal stabilizer incidence, and the subplot, or easy-to-change, factor was the angle of attack. The whole plot and subplot factors were both tested at three levels. Degrees of freedom for the whole plot error were provided by replication in the form of three blocks, or replicates, which were intended to simulate three consecutive days of wind tunnel facility operation. The analysis was conducted in three stages, which yielded the estimated mean squares, multiple regression function coefficients, and corresponding tests of significance for all individual terms at the whole plot and subplot levels for the three aerodynamic response variables. The estimated regression functions included main effects and two-factor interaction for the lift coefficient, main effects, two-factor interaction, and quadratic effects for the drag coefficient, and only main effects for the pitching moment coefficient.
Monte Carlo calibration of avalanches described as Coulomb fluid flows.
Ancey, Christophe
2005-07-15
The idea that snow avalanches might behave as granular flows, and thus be described as Coulomb fluid flows, came up very early in the scientific study of avalanches, but it is not until recently that field evidence has been provided that demonstrates the reliability of this idea. This paper aims to specify the bulk frictional behaviour of snow avalanches by seeking a universal friction law. Since the bulk friction coefficient cannot be measured directly in the field, the friction coefficient must be calibrated by adjusting the model outputs to closely match the recorded data. Field data are readily available but are of poor quality and accuracy. We used Bayesian inference techniques to specify the model uncertainty relative to data uncertainty and to robustly and efficiently solve the inverse problem. A sample of 173 events taken from seven paths in the French Alps was used. The first analysis showed that the friction coefficient behaved as a random variable with a smooth and bell-shaped empirical distribution function. Evidence was provided that the friction coefficient varied with the avalanche volume, but any attempt to adjust a one-to-one relationship relating friction to volume produced residual errors that could be as large as three times the maximum uncertainty of field data. A tentative universal friction law is proposed: the friction coefficient is a random variable, the distribution of which can be approximated by a normal distribution with a volume-dependent mean.
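The paper's avalanche-dynamics model and field data are not reproduced here; the sketch below only illustrates the Bayesian calibration idea with a toy forward model for run-out distance and synthetic observations (the linear run-out rule, noise level, and prior are all assumptions), using a basic Metropolis sampler to draw the posterior of the bulk friction coefficient.

    import numpy as np

    rng = np.random.default_rng(6)

    def runout(mu):                    # toy forward model: lower friction, longer run-out
        return 2000.0 * (1.0 - mu)     # run-out distance [m]

    true_mu, sigma_obs = 0.4, 60.0
    data = runout(true_mu) + rng.normal(0.0, sigma_obs, size=15)   # synthetic events

    def log_post(mu):
        if not 0.0 < mu < 1.0:         # flat prior on (0, 1)
            return -np.inf
        return -0.5 * np.sum((data - runout(mu)) ** 2) / sigma_obs**2

    chain, mu = [], 0.5
    for _ in range(20000):             # Metropolis random walk in mu
        prop = mu + rng.normal(0.0, 0.02)
        if np.log(rng.random()) < log_post(prop) - log_post(mu):
            mu = prop
        chain.append(mu)
    post = np.array(chain[5000:])      # discard burn-in
    print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f}")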
Stine, O C; Smith, K D
1990-01-01
The effects of mutation, migration, random drift, and selection on the change in frequency of the alleles associated with Huntington disease, porphyria variegata, and lipoid proteinosis have been assessed in the Afrikaner population of South Africa. Although admixture cannot be completely discounted, it was possible to exclude migration and new mutation as major sources of changes in the frequency of these alleles by limiting analyses to pedigrees descendant from founding families. Calculations which overestimated the possible effect of random drift demonstrated that drift did not account for the observed changes in gene frequencies. Therefore these changes must have been caused by natural selection, and a coefficient of selection was estimated for each trait. For the rare, dominant, deleterious allele associated with Huntington disease, the coefficient of selection was estimated to be .34, indicating that this allele has a selective disadvantage, contrary to some recent studies. For the presumed dominant and probably deleterious allele associated with porphyria variegata, the coefficient of selection lies between .07 and .02. The coefficient of selection for the rare, clinically recessive allele associated with lipoid proteinosis was estimated to be .07. Calculations based on a model system indicate that the observed decrease in allele frequency cannot be explained solely on the basis of selection against the homozygote. Thus, this may be an example of a pleiotropic gene which has a dominant effect in terms of selection even though its known clinical effect is recessive. PMID:2137963
Transport of Charged Particles in Turbulent Magnetic Fields
NASA Astrophysics Data System (ADS)
Parashar, T.; Subedi, P.; Sonsrettee, W.; Blasi, P.; Ruffolo, D. J.; Matthaeus, W. H.; Montgomery, D.; Chuychai, P.; Dmitruk, P.; Wan, M.; Chhiber, R.
2017-12-01
Magnetic fields permeate the Universe. They are found in planets, stars, galaxies, and the intergalactic medium. The magnetic fields found in these astrophysical systems are usually chaotic, disordered, and turbulent. The investigation of the transport of cosmic rays in magnetic turbulence is a subject of considerable interest. One important aspect of cosmic ray transport is to understand their diffusive behavior and to calculate the diffusion coefficient in the presence of these turbulent fields. Research has most frequently concentrated on determining the diffusion coefficient in the presence of a mean magnetic field. Here, we focus on calculating the diffusion coefficients of charged particles and magnetic field lines in a fully three-dimensional isotropic turbulent magnetic field with no mean field, which may be pertinent to many astrophysical situations. For charged particles in isotropic turbulence we identify different ranges of particle energy depending upon the ratio of the Larmor radius of the charged particle to the characteristic outer length scale of the turbulence. Different theoretical models are proposed to calculate the diffusion coefficient, each applicable to a distinct range of particle energies. The theoretical ideas are tested against the results of detailed numerical experiments using Monte Carlo simulations of particle propagation in stochastic magnetic fields. We also discuss two different methods of generating a random magnetic field for studying charged particle propagation by numerical simulation. One method is the usual way of generating random fields with a specified power law in wavenumber space, using Gaussian random variables. Turbulence, however, is non-Gaussian, with variability that comes in bursts called intermittency. We therefore devise a way to generate synthetic intermittent fields which have many properties of realistic turbulence. Possible applications of such synthetically generated intermittent fields are discussed.
ERIC Educational Resources Information Center
Glassman, Jill R.; Potter, Susan C.; Baumler, Elizabeth R.; Coyle, Karin K.
2015-01-01
Introduction: Group-randomized trials (GRTs) are one of the most rigorous methods for evaluating the effectiveness of group-based health risk prevention programs. Efficiently designing GRTs with a sample size that is sufficient for meeting the trial's power and precision goals while not wasting resources exceeding them requires estimates of the…
Segura-Correa, J C; Domínguez-Díaz, D; Avalos-Ramírez, R; Argaez-Sosa, J
2010-09-01
Knowledge of the intraherd correlation coefficient (ICC) and design effect (D) for infectious diseases can be of interest for sample size calculation and for providing the correct standard errors of prevalence estimates in cluster or two-stage sampling surveys. Information on 813 animals from 48 non-vaccinated cow-calf herds from North-eastern Mexico was used. The ICCs for bovine viral diarrhoea (BVD), infectious bovine rhinotracheitis (IBR), leptospirosis and neosporosis were calculated using a Bayesian approach adjusting for the sensitivity and specificity of the diagnostic tests. The ICC and D values for BVD, IBR, leptospirosis and neosporosis were 0.31 and 5.91, 0.18 and 3.88, 0.22 and 4.53, and 0.11 and 2.68, respectively. The ICC values were different from 0 and the D values greater than 1; therefore, large sample sizes are required to obtain the same precision in prevalence estimates as under simple random sampling. Reporting ICC and D values is of great help in planning and designing two-stage sampling studies. 2010 Elsevier B.V. All rights reserved.
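Assuming the standard design-effect formula D = 1 + (m − 1)·ICC with m the average cluster size (here m = 813/48 ≈ 16.9 animals per herd, which reproduces the reported D values to within rounding), the sketch below recomputes D and the effective sample size for each disease.

    iccs = {"BVD": 0.31, "IBR": 0.18, "leptospirosis": 0.22, "neosporosis": 0.11}
    m = 813 / 48                       # average herd (cluster) size
    for disease, icc in iccs.items():
        D = 1.0 + (m - 1.0) * icc      # design effect for two-stage sampling
        # effective sample size of the 813 clustered animals
        print(f"{disease:14s} D = {D:4.2f}   n_eff = {813 / D:5.1f}")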
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
Orourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
Magnetic field line random walk in models and simulations of reduced magnetohydrodynamic turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snodin, A. P.; Ruffolo, D.; Oughton, S.
2013-12-10
The random walk of magnetic field lines is examined numerically and analytically in the context of reduced magnetohydrodynamic (RMHD) turbulence, which provides a useful description of plasmas dominated by a strong mean field, such as in the solar corona. A recently developed non-perturbative theory of magnetic field line diffusion is compared with the diffusion coefficients obtained by accurate numerical tracing of magnetic field lines for both synthetic models and direct numerical simulations of RMHD. Statistical analysis of an ensemble of trajectories confirms the applicability of the theory, which very closely matches the numerical field line diffusion coefficient as a function of distance z along the mean magnetic field for a wide range of the Kubo number R. This theory employs Corrsin's independence hypothesis, sometimes thought to be valid only at low R. However, the results demonstrate that it works well up to R = 10, both for a synthetic RMHD model and an RMHD simulation. The numerical results from the RMHD simulation are compared with and without phase randomization, demonstrating a clear effect of coherent structures on the field line random walk for a very low Kubo number.
The G matrix under fluctuating correlational mutation and selection.
Revell, Liam J
2007-08-01
Theoretical quantitative genetics provides a framework for reconstructing past selection and predicting future patterns of phenotypic differentiation. However, the usefulness of the equations of quantitative genetics for evolutionary inference relies on the evolutionary stability of the additive genetic variance-covariance matrix (G matrix). A fruitful new approach for exploring the evolutionary dynamics of G involves the use of individual-based computer simulations. Previous studies have focused on the evolution of the eigenstructure of G. An alternative approach employed in this paper uses the multivariate response-to-selection equation to evaluate the stability of G. In this approach, I measure similarity by the correlation between response-to-selection vectors due to random selection gradients. I analyze the dynamics of G under several conditions of correlational mutation and selection. As found in a previous study, the eigenstructure of G is stabilized by correlational mutation and selection. However, over broad conditions, instability of G did not result in a decreased consistency of the response to selection. I also analyze the stability of G when the correlation coefficients of correlational mutation and selection and the effective population size change through time. To my knowledge, no prior study has used computer simulations to investigate the stability of G when correlational mutation and selection fluctuate. Under these conditions, the eigenstructure of G is unstable under some simulation conditions. Different results are obtained if G matrix stability is assessed by eigenanalysis or by the response to random selection gradients. In this case, the response to selection is most consistent when certain aspects of the eigenstructure of G are least stable and vice versa.
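A minimal sketch of the response-based similarity measure described above, under stated assumptions (two illustrative 2×2 G matrices and a vector correlation, i.e. cosine similarity, between responses): random selection gradients β are drawn, responses Δz = Gβ are computed from the multivariate response-to-selection equation, and the correlation between the two response vectors is averaged.

    import numpy as np

    def response_similarity(G1, G2, n_grad=1000, seed=7):
        """Mean vector correlation between the responses of two G matrices
        to the same random selection gradients (delta_z = G @ beta)."""
        rng = np.random.default_rng(seed)
        sims = []
        for _ in range(n_grad):
            beta = rng.normal(size=G1.shape[0])    # random selection gradient
            r1, r2 = G1 @ beta, G2 @ beta          # responses to selection
            sims.append(r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2)))
        return float(np.mean(sims))

    G_a = np.array([[1.0, 0.8], [0.8, 1.0]])       # strong trait covariance
    G_b = np.array([[1.0, 0.2], [0.2, 1.0]])       # weakened covariance
    print(f"mean response correlation: {response_similarity(G_a, G_b):.3f}")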
Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi
2015-01-01
The univariate meta-analysis (UM) procedure, a technique that provides a single overall result, has become increasingly popular, but neglecting the existence of other concomitant covariates in the models leads to a loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least squares (MGLS) method as a multivariate meta-analysis approach. We evaluated the efficiency of the four new approaches, namely zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), with respect to estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazards model coefficients in a simulation study. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared with the EC, CC, and ZC procedures. The precision ranking of the four approaches under all the above settings was MMC ≥ EC ≥ CC ≥ ZC. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggest the use of the MMC procedure to overcome the lack of information when a complete covariance matrix of the coefficients is unavailable.
NASA Astrophysics Data System (ADS)
Wagner, Thorsten; Kroll, Alexandra; Wiemann, Martin; Lipinski, Hans-Gerd
2016-04-01
Darkfield and confocal laser scanning microscopy both allow for the simultaneous observation of live cells and single nanoparticles. Accordingly, a characterization of nanoparticle uptake and intracellular mobility appears possible within living cells. Single particle tracking makes it possible to characterize the particle and the surrounding cell. In the case of free diffusion, the mean squared displacement for each trajectory of a nanoparticle can be measured, which allows computing the corresponding diffusion coefficient and, if desired, converting it into the hydrodynamic diameter using the Stokes-Einstein equation and the viscosity of the fluid. However, within the more complex system of a cell's cytoplasm, unrestrained diffusion is scarce and several other types of movement may occur. Thus, confined or anomalous diffusion (e.g. diffusion in porous media), active transport, and combinations thereof have been described by several authors. To distinguish between these types of particle movement we developed an appropriate classification method, and simulated three types of particle motion in a 2D plane using a Monte Carlo approach: (1) normal diffusion, using random direction and step length, (2) subdiffusion, using confinements like a reflective boundary with defined radius or reflective objects in the closer vicinity, and (3) superdiffusion, adding a directed flow to the normal diffusion. To simulate subdiffusion we devised a new method based on tracks of different length combined with equally probable obstacle interaction. Next we estimated the fractal dimension, the elongation, and the ratio of long-time to short-time diffusion coefficients. These features were used to train a random forests classification algorithm. The accuracy for simulated trajectories with 180 steps was 97% (95%-CI: 0.9481-0.9884). The balanced accuracy was 94%, 99% and 98% for normal, sub- and superdiffusion, respectively. Nanoparticle tracking analysis was used with 100 nm polystyrene particles to obtain trajectories for normal diffusion. As a next step we identified the diffusion types of nanoparticles in living cells, incubating V79 fibroblasts with 50 nm gold nanoparticles, which appeared as intensely bright objects due to their surface plasmon resonance. The movement of particles in both the extracellular and intracellular space was observed by darkfield and confocal laser scanning microscopy. After reducing background noise in the video it became possible to identify individual particle spots by a maximum detection algorithm and trace them using the robust single-particle tracking algorithm proposed by Jaqaman, which is able to handle motion heterogeneity and particle disappearance. The particle trajectories inside cells indicated active transport (superdiffusion) as well as subdiffusion. Eventually, the random forest classification algorithm, after being trained on the above simulations, successfully classified the trajectories observed in live cells.
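A minimal sketch of the free-diffusion branch of that pipeline, under stated assumptions (water viscosity at 293 K, a 0.01 s frame time, and a simulated 2-D Brownian track; none of these values come from the paper): fit the short-lag mean squared displacement to get the diffusion coefficient, then invert the Stokes-Einstein relation for the hydrodynamic diameter.

    import numpy as np

    k_B, T, eta = 1.380649e-23, 293.0, 1.0e-3     # J/K, K, Pa*s (assumptions)
    dt, n = 0.01, 1800                            # frame time [s], track length
    D_true = 4.3e-12                              # m^2/s, ~100 nm bead in water

    rng = np.random.default_rng(8)
    steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n, 2))
    track = np.cumsum(steps, axis=0)              # simulated 2-D Brownian track

    lags = np.arange(1, 11)                       # short-lag MSD is least biased
    msd = [np.mean(np.sum((track[l:] - track[:-l]) ** 2, axis=1)) for l in lags]
    D_est = np.polyfit(lags * dt, msd, 1)[0] / 4.0      # MSD = 4 D t in 2-D
    d_hydro = k_B * T / (3 * np.pi * eta * D_est)       # Stokes-Einstein
    print(f"D = {D_est:.2e} m^2/s, hydrodynamic diameter ~ {d_hydro * 1e9:.0f} nm")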
Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Schwartz, C. S.
2017-12-01
Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeast United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for a GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performance of the post-processing technique on a database composed of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real time.
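For intuition, the sketch below shows a plain-vanilla Bayesian linear regression correction of raw forecasts against station observations (a conjugate Gaussian posterior; not the paper's gridded, cross-validated implementation). All numbers are invented.

```python
import numpy as np

def blr_posterior(X, y, alpha=1.0, sigma2=1.0):
    """Posterior mean/covariance of regression weights under a zero-mean
    Gaussian prior N(0, alpha^-1 I) and Gaussian noise with variance sigma2."""
    p = X.shape[1]
    A = alpha * np.eye(p) + X.T @ X / sigma2   # posterior precision
    cov = np.linalg.inv(A)
    mean = cov @ X.T @ y / sigma2
    return mean, cov

# Train on modeled-observed pairs from past storms (columns: bias term + raw forecast)
raw = np.array([8.0, 12.0, 15.0, 9.0, 20.0])   # raw forecast wind speed (m/s)
obs = np.array([7.1, 10.5, 13.8, 8.4, 17.9])   # station observations
X = np.column_stack([np.ones_like(raw), raw])
w, _ = blr_posterior(X, obs)
corrected = X @ w                               # corrected forecasts
```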
The behaviour of share returns of football clubs: An econophysics approach
NASA Astrophysics Data System (ADS)
Ferreira, Paulo; Loures, Luís; Nunes, José Rato; Dionísio, Andreia
2017-04-01
Football is a sport that moves thousands of people and millions of euros. Since 1983, several clubs have entered the stock markets with shares, and twenty-two clubs are now listed in the Stoxx Football Index. In this study, we analyse the behaviour of the return rates of such shares with Detrended Fluctuation Analysis and Detrended Cross-Correlation Analysis (and its correlation coefficient). With Detrended Fluctuation Analysis, we observe that the shares of several clubs are far from the random-walk behaviour expected by theory. Using Detrended Cross-Correlation Analysis, we calculate the cross-correlations of clubs' returns with national indexes and then with the Stoxx Football Index. Although almost all of them are positive, they do not seem to be strong.
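As a minimal sketch of the first of these tools (standard first-order DFA, not the authors' code), the function below estimates the scaling exponent of a return series; an exponent near 0.5 is consistent with the random-walk benchmark the paper tests against. The scale choices are arbitrary assumptions.

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """First-order DFA scaling exponent of a 1D series.

    An exponent near 0.5 indicates uncorrelated (random-walk-like) returns.
    """
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segments = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        var = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
               for seg in segments]            # detrend each window linearly
        F.append(np.sqrt(np.mean(var)))
    # Slope of log F(s) versus log s
    return float(np.polyfit(np.log(scales), np.log(F), 1)[0])

# White-noise returns should give an exponent close to 0.5
print(dfa_exponent(np.random.default_rng(0).normal(size=2048)))
```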
Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J
2017-06-01
In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with the soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation, and the stochastic characteristics of nutrient loading can be investigated, providing the inputs for decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries, and the associated system risk, by incorporating the concepts of possibility and necessity measures. The possibility and necessity measures are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results not only facilitate the identification of optimal effluent-trading schemes, but also give insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that the decision maker's preference towards risk affects both the decision alternatives on the trading scheme and the system benefit. Compared with conventional optimization methods, BESMA proves advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties in nutrient transport behaviors to improve the accuracy of water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision alternatives. Copyright © 2017 Elsevier Ltd. All rights reserved.
Scattering from a random layer of leaves in the physical optics limit
NASA Technical Reports Server (NTRS)
Lang, R. H.; Seker, S. S.; Le Vine, D. M.
1982-01-01
Backscatter of electromagnetic radiation from a layer of vegetation over flat lossy ground has been studied in collaborative research at the George Washington University and the Goddard Space Flight Center. In this work the vegetation is composed of leaves which are modeled by a random collection of lossy dielectric disks. Backscattering coefficients for the vegetation layer have been calculated in the case of disks whose diameter is large compared to the wavelength. These backscattering coefficients are obtained in terms of the scattering amplitude of an individual disk by employing the distorted Born procedure. The scattering amplitude for a disk which is large compared to the wavelength is then found by physical optics techniques. Computed results are interpreted in terms of dominant reflected and transmitted contributions from the disks and ground.
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.
1999-01-01
In this paper, a method for obtaining nonlinear stiffness coefficients in modal coordinates for geometrically nonlinear finite-element models is developed. The method requires application of a finite-element program with a geometrically nonlinear static capability. The MSC/NASTRAN code is employed for this purpose. The equations of motion of a MDOF system are formulated in modal coordinates. A set of linear eigenvectors is used to approximate the solution of the nonlinear problem. The random vibration problem of the MDOF nonlinear system is then considered. The solutions obtained by application of two different versions of a stochastic linearization technique are compared with linear and exact (analytical) solutions in terms of root-mean-square (RMS) displacements and strains for a beam structure.
On S.N. Bernstein’s derivation of Mendel’s Law and ‘rediscovery’ of the Hardy-Weinberg distribution
Stark, Alan; Seneta, Eugene
2012-01-01
Around 1923 the soon-to-be famous Soviet mathematician and probabilist Sergei N. Bernstein started to construct an axiomatic foundation of a theory of heredity. He began from the premise of stationarity (constancy of type proportions) from the first generation of offspring. This led him to derive the Mendelian coefficients of heredity. It appears that he had no direct influence on the subsequent development of population genetics. A basic assumption of Bernstein was that parents coupled randomly to produce offspring. This paper shows that a simple model of non-random mating, which nevertheless embodies a feature of the Hardy-Weinberg Law, can produce Mendelian coefficients of heredity while maintaining the population distribution. How W. Johannsen’s monograph influenced Bernstein is discussed. PMID:22888285
Bayesian Meta-Analysis of Coefficient Alpha
ERIC Educational Resources Information Center
Brannick, Michael T.; Zhang, Nanhua
2013-01-01
The current paper describes and illustrates a Bayesian approach to the meta-analysis of coefficient alpha. Alpha is the most commonly used estimate of the reliability or consistency (freedom from measurement error) for educational and psychological measures. The conventional approach to meta-analysis uses inverse variance weights to combine…
FDTD analysis of the light extraction efficiency of OLEDs with a random scattering layer.
Kim, Jun-Whee; Jang, Ji-Hyang; Oh, Min-Cheol; Shin, Jin-Wook; Cho, Doo-Hee; Moon, Jae-Hyun; Lee, Jeong-Ik
2014-01-13
The light extraction efficiency of OLEDs with a nano-sized random scattering layer (RSL-OLEDs) was analyzed using the Finite Difference Time Domain (FDTD) method. In contrast to periodic diffraction patterns, the presence of an RSL suppresses the spectral shift with respect to the viewing angle. For FDTD simulation of RSL-OLEDs, a planar light source with a certain spatial and temporal coherence was incorporated, and the light extraction efficiency with respect to the fill factor of the RSL and the absorption coefficient of the material was investigated. The design results were compared to the experimental results of the RSL-OLEDs in order to confirm the usefulness of FDTD in predicting experimental results. According to our FDTD simulations, the light confined within the ITO-organic waveguide was quickly absorbed, and the absorption coefficients of ITO and RSL materials should be reduced in order to obtain significant improvement in the external quantum efficiency (EQE). When the extinction coefficient of ITO was 0.01, the EQE in the RSL-OLED was simulated to be enhanced by a factor of 1.8.
Determination of aerodynamic sensitivity coefficients in the transonic and supersonic regimes
NASA Technical Reports Server (NTRS)
Elbanna, Hesham M.; Carlson, Leland A.
1989-01-01
The quasi-analytical approach is developed to compute airfoil aerodynamic sensitivity coefficients in the transonic and supersonic flight regimes. Initial investigation verifies the feasibility of this approach as applied to the transonic small perturbation residual expression. Results are compared to those obtained by the direct (finite difference) approach and both methods are evaluated to determine their computational accuracies and efficiencies. The quasi-analytical approach is shown to be superior and worth further investigation.
Li, Chengwei; Zhan, Liwei
2015-08-01
To estimate the coefficient of friction between tire and runway surface during airplane touchdowns, we designed an experimental rig to simulate such events and to record the impact and friction forces exerted. Because of noise in the measured signals, we developed a filtering method based on ensemble empirical mode decomposition and the bandwidth of the probability density function of each intrinsic mode function to extract the friction and impact force signals. We quantify the coefficient of friction by calculating the maximum values of the filtered force signals. Signal measurements are recorded for different drop heights and tire rotational speeds, and the corresponding coefficient of friction is calculated. The result shows that the values of the coefficient of friction change only slightly. Random noise and experimental artifacts are the major reasons for the change.
Progress in radar snow research. [Brookings, South Dakota
NASA Technical Reports Server (NTRS)
Stiles, W. H.; Ulaby, F. T.; Fung, A. K.; Aslam, A.
1981-01-01
Multifrequency measurements of the radar backscatter from snow-covered terrain were made at several sites in Brookings, South Dakota, during the month of March of 1979. The data are used to examine the response of the scattering coefficient to the following parameters: (1) snow surface roughness, (2) snow liquid water content, and (3) snow water equivalent. The results indicate that the scattering coefficient is insensitive to snow surface roughness if the snow is dry. For wet snow, however, surface roughness can have a strong influence on the magnitude of the scattering coefficient. These observations confirm the results predicted by a theoretical model that describes the snow as a volume of Rayleigh scatterers, bounded by a Gaussian random surface. In addition, empirical models were developed to relate the scattering coefficient to snow liquid water content, and the dependence of the scattering coefficient on water equivalent was evaluated for both wet and dry snow conditions.
Forghani, Masoomeh; Ghanbari Hashem Abadi, Bahram Ali
2016-06-01
The aim of the present study was to evaluate the effect of group psychotherapy with a transactional analysis (TA) approach on emotional intelligence (EI), executive functions, and substance dependency among drug addicts at rehabilitation centers in Mashhad city, Iran, in 2013. In this quasi-experimental study with pretest, posttest, and case-control stages, 30 patients were selected from a rehabilitation center and randomly divided into two groups. The case group received 12 sessions of group psychotherapy with the transactional analysis approach. The effects of the independent variable (group psychotherapy with the TA approach) on EI, executive function, and drug dependency were then assessed. The Bar-On test was used for EI, the Stroop test for measuring executive function, and morphine, meth-amphetamine, and B2 tests for evaluating drug dependency. Data were analyzed using multifactorial covariance analysis, Levene's test, MANCOVA, Student's t, and Pearson correlation coefficient tests with SPSS software. Our results showed that group psychotherapy with the TA approach was effective in improving EI and executive functions and in decreasing drug dependency (P < 0.05). The results of this study showed that group psychotherapy with the TA approach has significant effects on addicts and prevents addiction recurrence by improving the coping capabilities and some mental functions of the subjects. However, this study has some limitations, including follow-up duration and sample size.
Social relationships and health: the relative roles of family functioning and social support.
Franks, P; Campbell, T L; Shields, C G
1992-04-01
The associations between social relationships and health have been examined using two major research traditions. Using a social epidemiological approach, much research has shown the beneficial effect of social supports on health and health behaviors. Family interaction research, which has grown out of a more clinical tradition, has shown the complex effects of family functioning on health, particularly mental health. No studies have examined the relative power of these two approaches in explicating the connections between social relationships and health. We hypothesized that social relationships (social support and family functioning) would exert direct and indirect (through depressive symptoms) effects on health behaviors. We also hypothesized that the effects of social relationships on health would be more powerfully explicated by family functioning than by social support. We mailed a pilot survey to a random sample of patients attending a family practice center, including questions on depressive symptoms, cardiovascular health behaviors, demographics, social support using the ISEL scale, and family functioning using the FEICS scale. FEICS is a self-report questionnaire designed to assess family emotional involvement and criticism, the key elements of family expressed emotion. Eighty-three usable responses were obtained. Regression analyses and structural modelling showed both direct and indirect statistically significant paths from social relationships to health behaviors. Family criticism was directly associated (standardized coefficient = 0.29) with depressive symptoms, and family emotional involvement was directly associated with both depressive symptoms (coefficient = 0.35) and healthy cardiovascular behaviors (coefficient = 0.32). The results support the primacy of family functioning factors in understanding the associations among social relationships, mental health, and health behaviors. The contrasting relationships between emotional involvement and depressive symptoms on the one hand and emotional involvement and health behaviors on the other suggest the need for a more complex model to understand the connections between social relationships and health.
ERIC Educational Resources Information Center
Pals, Sherri L.; Beaty, Brenda L.; Posner, Samuel F.; Bull, Sheana S.
2009-01-01
Studies designed to evaluate HIV and STD prevention interventions often involve random assignment of groups such as neighborhoods or communities to study conditions (e.g., to intervention or control). Investigators who design group-randomized trials (GRTs) must take the expected intraclass correlation coefficient (ICC) into account in sample size…
ERIC Educational Resources Information Center
Brandon, Paul R.; Harrison, George M.; Lawton, Brian E.
2013-01-01
When evaluators plan site-randomized experiments, they must conduct the appropriate statistical power analyses. These analyses are most likely to be valid when they are based on data from the jurisdictions in which the studies are to be conducted. In this method note, we provide software code, in the form of a SAS macro, for producing statistical…
NASA Technical Reports Server (NTRS)
Holms, A. G.
1977-01-01
As many as three iterated statistical model deletion procedures were considered for an experiment. Population model coefficients were chosen to simulate a saturated 2 to the 4th power experiment having an unfavorable distribution of parameter values. Using random number studies, three model selection strategies were developed, namely: (1) a strategy to be used in anticipation of large coefficients of variation, approximately 65 percent; (2) a strategy to be used in anticipation of small coefficients of variation, 4 percent or less; and (3) a security regret strategy to be used in the absence of such prior knowledge.
Phase retrieval in generalized optical interferometry systems.
Farriss, Wesley E; Fienup, James R; Malhotra, Tanya; Vamivakas, A Nick
2018-02-05
Modal analysis of an optical field via generalized interferometry (GI) is a novel technique that treats the field as a linear superposition of transverse modes and recovers the amplitudes of the modal weighting coefficients. We use phase retrieval by nonlinear optimization to recover the phase of these modal weighting coefficients. Information diversity increases the robustness of the algorithm by better constraining the solution. Additionally, multiple sets of random starting phase values assist the algorithm in overcoming local minima. The algorithm was able to recover nearly all coefficient phases for simulated fields consisting of up to 21 superposed Hermite-Gaussian modes and proved to be resilient to shot noise.
Choosing the best index for the average score intraclass correlation coefficient.
Shieh, Gwowen
2016-09-01
The intraclass correlation coefficient ICC(2) from a one-way random effects model is widely used to describe the reliability of mean ratings in behavioral, educational, and psychological research. Despite its apparent utility, the essential property of ICC(2) as a point estimator of the average score intraclass correlation coefficient is seldom mentioned. This article considers several potential measures and compares their performance with ICC(2). Analytical derivations and numerical examinations are presented to assess the bias and mean square error of the alternative estimators. The results suggest that more advantageous indices can be recommended over ICC(2) for their theoretical implications and computational ease.
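For reference, the classical average-score estimator the article examines can be computed in a few lines; this sketch (our illustration, not the article's code) applies the usual one-way random effects mean squares.

```python
import numpy as np

def icc2_average_score(ratings):
    """ICC(2) for mean ratings under a one-way random effects model.

    ratings : (n, k) array, n targets each rated k times.
    Returns (MSB - MSW) / MSB, the classical average-score estimator.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)               # between-target
    msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))  # within-target
    return (msb - msw) / msb
```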
Ebel, B.A.; Mirus, B.B.; Heppner, C.S.; VanderKwaak, J.E.; Loague, K.
2009-01-01
Distributed hydrologic models capable of simulating fully-coupled surface water and groundwater flow are increasingly used to examine problems in the hydrologic sciences. Several techniques are currently available to couple the surface and subsurface; the two most frequently employed approaches are first-order exchange coefficients (a.k.a., the surface conductance method) and enforced continuity of pressure and flux at the surface-subsurface boundary condition. The effort reported here examines the parameter sensitivity of simulated hydrologic response for the first-order exchange coefficients at a well-characterized field site using the fully coupled Integrated Hydrology Model (InHM). This investigation demonstrates that the first-order exchange coefficients can be selected such that the simulated hydrologic response is insensitive to the parameter choice, while simulation time is considerably reduced. Alternatively, the ability to choose a first-order exchange coefficient that intentionally decouples the surface and subsurface facilitates concept-development simulations to examine real-world situations where the surface-subsurface exchange is impaired. While the parameters comprising the first-order exchange coefficient cannot be directly estimated or measured, the insensitivity of the simulated flow system to these parameters (when chosen appropriately) combined with the ability to mimic actual physical processes suggests that the first-order exchange coefficient approach can be consistent with a physics-based framework. Copyright © 2009 John Wiley & Sons, Ltd.
Daneyko, Anton; Hlushkou, Dzmitry; Baranau, Vasili; Khirevich, Siarhei; Seidel-Morgenstern, Andreas; Tallarek, Ulrich
2015-08-14
In recent years, chromatographic columns packed with core-shell particles have been widely used for efficient and fast separations at comparatively low operating pressure. However, the influence of the porous shell properties on the mass transfer kinetics in core-shell packings is still not fully understood. We report on results obtained with a modeling approach to simulate three-dimensional advective-diffusive transport in bulk random packings of monosized core-shell particles, covering a range of reduced mobile phase flow velocities from 0.5 up to 1000. The impact of the effective diffusivity of analyte molecules in the porous shell and the shell thickness on the resulting plate height was investigated. An extension of Giddings' theory of coupled eddy dispersion to account for retention of analyte molecules due to stagnant regions in porous shells with zero mobile phase flow velocity is presented. The plate height equation involving a modified eddy dispersion term excellently describes simulated data obtained for particle-packings with varied shell thickness and shell diffusion coefficient. It is confirmed that the model of trans-particle mass transfer resistance of core-shell particles by Kaczmarski and Guiochon [42] is applicable up to a constant factor. We analyze individual contributions to the plate height from different mass transfer mechanisms in dependence of the shell parameters. The simulations demonstrate that a reduction of plate height in packings of core-shell relative to fully porous particles arises mainly due to reduced trans-particle mass transfer resistance and transchannel eddy dispersion. Copyright © 2015 Elsevier B.V. All rights reserved.
Effect of Diabetes Sleep Education for T2DM Who Sleep After Midnight: A Pilot Study from China.
Li, Mingzhen; Li, Daiqing; Tang, Yunzhao; Meng, Lingling; Mao, Cuixiu; Sun, Lirong; Chang, Baocheng; Chen, Liming
2018-02-01
Our prior study showed that patients with sleep disorders had poorer blood pressure (BP) and glycemic control and more severe complications; sleep is therefore very important for diabetic control. Our aim was to investigate, in a randomized parallel interventional study, whether individualized diabetes sleep education could significantly improve sleep quality and glycemic control in type 2 diabetic patients who sleep after midnight, and to explore the potential mechanism. T2D patients were randomly recruited to an intervention or control group. Patients received a structured diabetes sleep education program with 3-month follow-up. The Pittsburgh Sleep Quality Index (PSQI) was scored for each participant. Demographic data, HbA1c, biochemical markers, and some hormones were also examined. SPSS 13.0 was used for statistical analysis. One hundred patients were approached, and 45 were enrolled into our trial. Eventually, 31 patients completed the study. Patients in the intervention group greatly improved their sleep hygiene. After the intervention, PSQI scores were lowered significantly (-1.48 ± 0.88 vs. -0.51 ± 0.71, P < 0.001), as was HbA1c (-1.5 ± 0.55 vs. -1.11 ± 0.47, P < 0.05). Fasting plasma glucose was also lowered significantly. Homeostasis model assessment of insulin resistance was reduced significantly (-1.29 ± 0.97 vs. 1.04 ± 0.91, P < 0.01). Serum concentrations of interleukin (IL)-6, cortisol, and ghrelin were decreased significantly. Ghrelin (coefficient -0.65, P < 0.001), cortisol (coefficient -0.38, P < 0.05), and IL-6 (coefficient 0.452, P < 0.05) were correlated with HbA1c improvement. The change in ghrelin was negatively associated with the improvement of HbA1c. Diabetes sleep education could improve sleep quality, blood glucose, and BP, and decrease insulin resistance through healthier sleep hygiene. A lower serum concentration of ghrelin might be partly involved in the reduction of HbA1c.
Fischer, A; Friggens, N C; Berry, D P; Faverdin, P
2018-07-01
The ability to properly assess and accurately phenotype true differences in feed efficiency among dairy cows is key to the development of breeding programs for improving feed efficiency. The variability among individuals in feed efficiency is commonly characterised by the residual intake approach. Residual feed intake is represented by the residuals of a linear regression of intake on the corresponding quantities of the biological functions that consume (or release) energy. However, the residuals include both model-fitting and measurement errors as well as any variability in cow efficiency. The objective of this study was to isolate the individual animal variability in feed efficiency from the residual component. Two separate models were fitted. In one, the standard residual energy intake (REI) was calculated as the residual of a multiple linear regression of lactation-average net energy intake (NEI) on lactation-average milk energy output, average metabolic BW, and lactation loss and gain of body condition score. In the other, a linear mixed model was used to simultaneously fit fixed linear regressions and random cow levels on the biological traits and intercept, using fortnightly repeated measures for the variables. This method split the predicted NEI into two parts: one quantifying the population-mean intercept and coefficients, and one quantifying cow-specific deviations in the intercept and coefficients. The cow-specific part of predicted NEI was assumed to isolate true differences in feed efficiency among cows. NEI and associated energy expenditure phenotypes were available for the first 17 fortnights of lactation from 119 Holstein cows, all fed a constant energy-rich diet. Mixed models fitting cow-specific intercepts and coefficients to different combinations of the aforementioned energy expenditure traits, calculated on a fortnightly basis, were compared. The variance of REI estimated with the lactation-average model represented only 8% of the variance of measured NEI. Among all compared mixed models, the variance of the cow-specific part of predicted NEI represented between 53% and 59% of the variance of REI estimated from the lactation-average model, or between 4% and 5% of the variance of measured NEI. The remaining 41% to 47% of the variance of REI estimated with the lactation-average model may therefore reflect model-fitting errors or measurement errors. In conclusion, the use of a mixed model framework with cow-specific random regressions seems to be a promising method for isolating the cow-specific component of REI in dairy cows.
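As an illustration of the mixed-model idea (a minimal sketch, not the authors' analysis; the DataFrame and its column names are assumptions), one could fit population-mean regressions with cow-specific random intercepts and slopes roughly as follows.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold fortnightly records, one row per cow per fortnight,
# with columns: nei (net energy intake), milk_e (milk energy output),
# mbw (metabolic body weight) and cow (animal identifier).
def fit_cow_specific_intake(df: pd.DataFrame):
    # Fixed regressions give the population-mean coefficients; the random
    # intercept and slopes per cow capture cow-specific deviations, whose
    # predicted part isolates among-cow differences in efficiency.
    model = smf.mixedlm("nei ~ milk_e + mbw", df, groups=df["cow"],
                        re_formula="~ milk_e + mbw")
    result = model.fit()
    return result  # result.random_effects holds each cow's deviations
```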
Multipartite nonlocality and random measurements
NASA Astrophysics Data System (ADS)
de Rosier, Anna; Gruca, Jacek; Parisio, Fernando; Vértesi, Tamás; Laskowski, Wiesław
2017-07-01
We present an exhaustive numerical analysis of violations of local realism by families of multipartite quantum states. As an indicator of nonclassicality we employ the probability of violation for randomly sampled observables. Surprisingly, it rapidly increases with the number of parties or settings, and even for relatively small values local realism is violated for almost all observables. We have observed this effect to be typical, in the sense that it emerged for all investigated states, including some with randomly drawn coefficients. We also present the probability of violation as a witness of genuine multipartite entanglement.
Random Forest Application for NEXRAD Radar Data Quality Control
NASA Astrophysics Data System (ADS)
Keem, M.; Seo, B. C.; Krajewski, W. F.
2017-12-01
Identification and elimination of non-meteorological radar echoes (e.g., returns from ground, wind turbines, and biological targets) are the basic data quality control steps before radar data are used in quantitative applications (e.g., precipitation estimation). Although the WSR-88Ds' recent upgrade to dual polarization has enhanced this quality control and echo classification, there are still challenges in detecting some non-meteorological echoes that show precipitation-like characteristics (e.g., wind turbine or anomalous propagation clutter embedded in rain). With this in mind, a new quality control method using Random Forest is proposed in this study. This classification algorithm is known to produce reliable results with less uncertainty. The method introduces randomness into sampling and feature selection and integrates the resulting multiple decision trees. The multidimensional structure of the trees can characterize the statistical interactions of the multiple features involved in complex situations. The authors explore the performance of the Random Forest method for NEXRAD radar data quality control. Training datasets are selected using several clear cases of precipitation and non-precipitation (but with some non-meteorological echoes). The model is structured using available candidate features (from the NEXRAD data) such as horizontal reflectivity, differential reflectivity, differential phase shift, copolar correlation coefficient, and their horizontal textures (e.g., local standard deviation). The influence of each feature on classification results is quantified by variable importance measures that are automatically estimated by the Random Forest algorithm. The number and types of features in the final forest can therefore be examined based on the classification accuracy. The authors demonstrate the capability of the proposed approach using several cases ranging from distinct to complex rain/no-rain events and compare the performance with existing algorithms (e.g., MRMS). They also discuss operational feasibility based on the observed strengths and weaknesses of the method.
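A minimal sketch of the classification core (synthetic data and placeholder feature names, not the authors' implementation) using scikit-learn's RandomForestClassifier, including the out-of-bag score and the variable importances mentioned above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Rows are radar gates; columns stand in for candidate features such as
# reflectivity, differential reflectivity, differential phase, copolar
# correlation and their local textures. Labels: 1 = precipitation echo,
# 0 = non-meteorological echo. Everything here is synthetic.
features = ["Z", "ZDR", "PhiDP", "RhoHV", "sd_Z", "sd_PhiDP"]
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, len(features)))
y = (X[:, 3] + 0.5 * X[:, 0] > 0).astype(int)   # synthetic labels

clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
clf.fit(X, y)
print("OOB accuracy:", clf.oob_score_)
for name, imp in zip(features, clf.feature_importances_):
    print(f"{name}: {imp:.3f}")   # variable importance, used to prune features
```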
Kim, Kyungmok; Lee, Jaewook
2016-01-01
This paper describes a sliding friction model for an electro-deposited coating. Reciprocating sliding tests using a ball-on-flat-plate test apparatus are performed to determine the evolution of the kinetic friction coefficient. The evolution of the friction coefficient is classified into the initial running-in period, steady-state sliding, and the transition to higher friction. The friction coefficient during the initial running-in period and steady-state sliding is expressed as a simple linear function. The friction coefficient in the transition to higher friction is described with a mathematical model derived from a Kachanov-type damage law. The model parameters are then estimated using the Markov Chain Monte Carlo (MCMC) approach. The friction coefficients estimated by the MCMC approach are in good agreement with the measured ones. PMID:28773359
ON THE APPROACH TO NON-EQUILIBRIUM STATIONARY STATES AND THE THEORY OF TRANSPORT COEFFICIENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balescu, R.
1961-07-01
A general formula for the time-dependent electric current arising from a constant electric field is derived similarly to Kubo's theory. This formula connects the time dependence of the current to the singularities of the resolvent of the Liouville operator of a classical system. Direct contact is made with the general theory of the approach to equilibrium developed by Prigogine and his coworkers. It constitutes a framework for a diagram expansion of transport coefficients. A proof of the existence of a stationary state and of its stability (to first order in the field) is given. It is rigorously shown that, whereas the approach to the stationary state is in general governed by complicated non-markoffian equations, the stationary state itself (and thus the calculation of transport coefficients) is always determined by an asymptotic cross section. This implies that transport coefficients can always be calculated from a markoffian Boltzmann-like equation even in situations in which that equation does not describe properly the approach to the stationary state. (auth)
On marker-based parentage verification via non-linear optimization.
Boerner, Vinzent
2017-06-15
Parentage verification by molecular markers is mainly based on short tandem repeat markers. Single nucleotide polymorphisms (SNPs), as bi-allelic markers, have become the markers of choice for genotyping projects. Thus, the subsequent step is to use SNP genotypes for parentage verification as well. Recent developments of algorithms, such as evaluating opposing homozygous SNP genotypes, have drawbacks, for example the inability to reject all animals of a sample of potential parents. This paper describes an algorithm for parentage verification by constrained regression which overcomes the latter limitation and proves to be very fast and accurate even when the number of SNPs is as low as 50. The algorithm was tested on a sample of 14,816 animals with 50, 100 and 500 SNP genotypes randomly selected from 40k genotypes. The samples of putative parents of these animals contained either five random animals, or four random animals and the true sire. Parentage assignment was performed by ranking of regression coefficients, or by setting a minimum threshold for regression coefficients. The assignment quality was evaluated by the power of assignment (PA) and the power of exclusion (PE). If the sample of putative parents contained the true sire and parentage was assigned by coefficient ranking, PA and PE were both higher than 0.99 for the 500 and 100 SNP genotypes, and higher than 0.98 for the 50 SNP genotypes. When parentage was assigned by a coefficient threshold, PA was higher than 0.99 regardless of the number of SNPs, but PE decreased from 0.99 (500 SNPs) to 0.97 (100 SNPs) and 0.92 (50 SNPs). If the sample of putative parents did not contain the true sire and parentage was rejected using a coefficient threshold, the algorithm achieved a PE of 1 (500 SNPs), 0.99 (100 SNPs) and 0.97 (50 SNPs). The algorithm described here is easy to implement, fast and accurate, and is able to assign parentage using genomic marker data with as few as 50 SNPs.
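The constrained-regression idea can be illustrated with a toy bounded least-squares fit (our sketch under an assumed 0/1/2 genotype coding, not necessarily the paper's exact constraint set): the child's genotype vector is regressed on the genotypes of all candidate parents, and the true parent is expected to attract a large coefficient.

```python
import numpy as np
from scipy.optimize import lsq_linear

def parentage_coefficients(child, parents):
    """Regress a child's SNP genotypes (coded 0/1/2) on the genotypes of
    putative parents, constraining all coefficients to [0, 1]."""
    G = np.asarray(parents, dtype=float).T        # (n_snps, n_candidates)
    res = lsq_linear(G, np.asarray(child, dtype=float), bounds=(0.0, 1.0))
    return res.x

rng = np.random.default_rng(2)
sire = rng.integers(0, 3, 100)
child = np.clip(sire + rng.integers(-1, 2, 100), 0, 2)   # child resembles sire
candidates = [rng.integers(0, 3, 100) for _ in range(4)] + [sire]
coefs = parentage_coefficients(child, candidates)
print(np.argmax(coefs))   # assignment by coefficient ranking: index 4 should win
```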
Properties of a new small-world network with spatially biased random shortcuts
NASA Astrophysics Data System (ADS)
Matsuzawa, Ryo; Tanimoto, Jun; Fukuda, Eriko
2017-11-01
This paper introduces a small-world (SW) network with a power-law distance distribution of shortcuts, which differs from conventional models that use completely random shortcuts. By incorporating spatial constraints, we analyze the divergence of the proposed model from conventional models in terms of fundamental network properties such as clustering coefficient, average path length, and degree distribution. We find that when the spatial constraint more strongly prohibits long shortcuts, the clustering coefficient is improved and the average path length increases. We also analyze spatial prisoner's dilemma (SPD) games played on our new SW network in order to understand its dynamical characteristics. Depending on the basis graph, i.e., whether it is a one-dimensional ring or a two-dimensional lattice, and on the parameter controlling the prohibition of long-distance shortcuts, the emergent results can differ vastly.
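A minimal sketch of such a construction (our illustration; parameter values are arbitrary) adds shortcuts to a ring lattice with ring distance d drawn with probability proportional to d^(-alpha), then reports the clustering coefficient and average path length; a larger alpha suppresses long links, which should raise clustering and lengthen paths.

```python
import numpy as np
import networkx as nx

def sw_power_law_ring(n=500, k=4, n_shortcuts=50, alpha=2.0, seed=0):
    """Ring lattice plus shortcuts whose ring distance d is drawn
    with probability proportional to d**(-alpha)."""
    rng = np.random.default_rng(seed)
    G = nx.watts_strogatz_graph(n, k, 0.0, seed=seed)  # pure ring, no rewiring
    d = np.arange(2, n // 2)
    p = d.astype(float) ** (-alpha)
    p /= p.sum()
    for _ in range(n_shortcuts):
        u = int(rng.integers(n))
        G.add_edge(u, (u + int(rng.choice(d, p=p))) % n)
    return G

G = sw_power_law_ring()
print(nx.average_clustering(G), nx.average_shortest_path_length(G))
```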
An exact solution of solute transport by one-dimensional random velocity fields
Cvetkovic, V.D.; Dagan, G.; Shapiro, A.M.
1991-01-01
The problem of one-dimensional transport of passive solute by a random steady velocity field is investigated. This problem is representative of solute movement in porous media, for example, in vertical flow through a horizontally stratified formation of variable porosity with a constant flux at the soil surface. Relating moments of particle travel time and displacement, exact expressions for the advection and dispersion coefficients in the Fokker-Planck equation are compared with the perturbation results for large distances. The first- and second-order approximations for the dispersion coefficient are robust for a lognormal velocity field. The mean Lagrangian velocity is the harmonic mean of the Eulerian velocity for large distances. This is an artifact of one-dimensional flow, where the continuity equation provides for a divergence-free fluid flux, rather than a divergence-free fluid velocity. © 1991 Springer-Verlag.
Small, J R
1993-01-01
This paper studies the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and that, under all conditions studied, the fitting method outperformed the graph method, even under conditions where the assumptions underlying the fitted function do not hold. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
Study Of Genetic Diversity Between Grasspea Landraces Using Morphological And Molecular Marker
NASA Astrophysics Data System (ADS)
Sedehi, Abbasali Vahabi; Lotfi, Asefeh; Solooki, Mahmood
2008-01-01
Grass pea is a beneficial crop for Iran, since it has major advantages such as high grain and forage quality, high drought tolerance, a medium level of salinity tolerance, and good native germplasm variation accessible to breeding programs. This study was carried out to evaluate morphological traits of grass pea landraces using a randomized complete block design with 3 replications at the Research Farm of Isfahan University of Technology. To evaluate genetic diversity, 14 grass pea landraces from various locations in Iran were investigated using 32 RAPD and ISJ primers at the Biocenter of the University of Zabol. Analysis of variance indicated highly significant differences among the 14 grass pea landraces for the morphological traits. The average polymorphism percentage of the RAPD primers was 73.9%. Among the primers used, 12 random primers showed polymorphism, and a total of 56 different bands were observed in the genotypes. The Jafar-abad and Sar-chahan genotypes, with a similarity coefficient of 66%, and the Khoram-abad 2 and Khoram-abad 7 genotypes, with a similarity coefficient of 3%, were the most related and the most distinct genotypes, respectively. Fourteen out of 17 semi-random primers produced 70 polymorphic bands, which comprised 56% of the 126 bands produced in total. Genetic relatedness among populations was investigated using the Jaccard coefficient and the unweighted pair group method with arithmetic mean (UPGMA) algorithm. The results of this research verified the possibility of using RAPD and ISJ markers for the estimation of genetic diversity, the management of genetic resources, and the identification of repetitive accessions in grass pea.
NASA Astrophysics Data System (ADS)
Alawadi, Wisam; Al-Rekabi, Wisam S.; Al-Aboodi, Ali H.
2018-03-01
The Shiono and Knight Method (SKM) is widely used to predict the lateral distribution of depth-averaged velocity and boundary shear stress for flows in compound channels. Three calibrating coefficients need to be estimated for applying the SKM, namely the eddy viscosity coefficient (λ), the friction factor (f), and the secondary flow coefficient (k). There are several tested methods which can satisfactorily be used to estimate λ and f. However, calibrating the secondary flow coefficient k so as to account correctly for secondary flow effects is still problematic. In this paper, the calibration of secondary flow coefficients is established by employing two approaches to estimate correct values of k for simulating an asymmetric compound channel with different side slopes of the internal wall. The first approach is based on Abril and Knight (2004), who suggest fixed values for the main channel and floodplain regions. In the second approach, the equations developed by Devi and Khatua (2017), which relate the variation of the secondary flow coefficients to the relative depth (β) and width ratio (α), are used. The results indicate that the calibration method developed by Devi and Khatua (2017) is a better choice for calibrating the secondary flow coefficients than the first approach, which assumes a fixed value of k for different flow depths. The results also indicate that the boundary condition based on shear force continuity can successfully be used for simulating rectangular compound channels, while continuity of the depth-averaged velocity and its gradient is an accepted boundary condition in simulations of trapezoidal compound channels. However, the SKM performance for predicting the boundary shear stress over the shear layer region may not be improved merely by imposing suitably calibrated values of the secondary flow coefficients, because of the difficulty of modelling the complex interaction that develops between the flows in the main channel and on the floodplain in this region.
NASA Astrophysics Data System (ADS)
Avendaño-Valencia, Luis David; Fassois, Spilios D.
2017-07-01
The study focuses on vibration response based health monitoring for an operating wind turbine, which features time-dependent dynamics under environmental and operational uncertainty. A Gaussian Mixture Model Random Coefficient (GMM-RC) model based Structural Health Monitoring framework postulated in a companion paper is adopted and assessed. The assessment is based on vibration response signals obtained from a simulated offshore 5 MW wind turbine. The non-stationarity in the vibration signals originates from the continually evolving, due to blade rotation, inertial properties, as well as the wind characteristics, while uncertainty is introduced by random variations of the wind speed within the range of 10-20 m/s. Monte Carlo simulations are performed using six distinct structural states, including the healthy state and five types of damage/fault in the tower, the blades, and the transmission, with each one of them characterized by four distinct levels. Random vibration response modeling and damage diagnosis are illustrated, along with pertinent comparisons with state-of-the-art diagnosis methods. The results demonstrate consistently good performance of the GMM-RC model based framework, offering significant performance improvements over state-of-the-art methods. Most damage types and levels are shown to be properly diagnosed using a single vibration sensor.
NASA Astrophysics Data System (ADS)
Iida, S.
1991-03-01
Using statistical scattering theory, we calculate the average and the variance of the conductance coefficients at zero temperature for a small disordered metallic wire composed of three arms. Each arm is coupled at the end to a perfectly conducting lead. The disorder is modeled by a microscopic random Hamiltonian belonging to the Gaussian orthogonal ensemble. As the coupling strength of the third arm (voltage probe) is increased, the variance of the conductance coefficient of the main track changes from the universal value of the two-lead geometry to that of the three-lead geometry. The variance of the resistance coefficient is strongly affected by the coupling strength of the arm whose resistance is being measured and has a relatively weak dependence on those of the other two arms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, H.; Chang, C.; Cheng, H. H., E-mail: hhcheng@ntu.edu.tw
We report an investigation of the absorption mechanism of a GeSn photodetector with 2.4% Sn composition in the active region. The responsivity is measured and the absorption coefficient is calculated. The square root of the absorption coefficient depends linearly on photon energy, indicating an indirect transition. However, the absorption coefficient is found to be at least one order of magnitude higher than that of most other indirect materials, suggesting that the indirect optical absorption transition cannot be assisted only by phonons. Our analysis of absorption measurements by other groups on the same material system showed values of the absorption coefficient on the same order of magnitude. Our study reveals that the strong enhancement of absorption for the indirect optical transition is the result of alloy disorder from the incorporation of the much larger Sn atoms, randomly distributed in the Ge lattice.
Formation of parametric images using mixed-effects models: a feasibility study.
Huang, Husan-Ming; Shih, Yi-Yu; Lin, Chieh
2016-03-01
Mixed-effects models have been widely used in the analysis of longitudinal data. By presenting the parameters as a combination of fixed effects and random effects, mixed-effects models incorporating both within- and between-subject variations are capable of improving parameter estimation. In this work, we demonstrate the feasibility of using a non-linear mixed-effects (NLME) approach for generating parametric images from the medical imaging data of a single study. Assuming that all voxels in the image are independent, we used simulation and animal data to evaluate whether NLME can improve voxel-wise parameter estimation. For testing purposes, intravoxel incoherent motion (IVIM) diffusion parameters, including the perfusion fraction, pseudo-diffusion coefficient, and true diffusion coefficient, were estimated using diffusion-weighted MR images and NLME through fitting the IVIM model. The conventional method of non-linear least squares (NLLS) was used as the standard approach for comparison of the resulting parametric images. In the simulated data, NLME provides more accurate and precise estimates of diffusion parameters than NLLS. Similarly, we found that NLME has the ability to improve the signal-to-noise ratio of parametric images obtained from rat brain data. These data show that it is feasible to apply NLME in parametric image generation, and that parametric image quality can accordingly be improved with the use of NLME. With the flexibility to be adapted to other models or modalities, NLME may become a useful tool for improving parametric image quality in the future. Copyright © 2015 John Wiley & Sons, Ltd.
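For concreteness, the bi-exponential IVIM signal model and a per-voxel NLLS fit (the baseline against which NLME is compared) can be sketched as follows; the b-values, noise level, and parameter bounds here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, S0, f, D_star, D):
    """Bi-exponential IVIM signal model for diffusion-weighted MRI."""
    return S0 * (f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D))

b_vals = np.array([0, 10, 20, 50, 100, 200, 400, 800], dtype=float)
true = (1.0, 0.1, 0.02, 0.001)   # S0, perfusion fraction, D*, D (mm^2/s)
signal = ivim(b_vals, *true) + np.random.default_rng(3).normal(0, 0.01, len(b_vals))

# Per-voxel NLLS fit; an NLME approach would instead borrow strength
# across voxels by treating the parameters as fixed plus random effects.
popt, _ = curve_fit(ivim, b_vals, signal, p0=(1.0, 0.1, 0.01, 0.001),
                    bounds=([0, 0, 0, 0], [2, 1, 0.1, 0.01]))
```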
Prediction of drug synergy in cancer using ensemble-based machine learning techniques
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder
2018-04-01
Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents. It can be developed as a pre-processing tool for therapeutic success. Examination of different drug-drug interactions can be done with the drug synergy score, which needs efficient regression-based machine learning approaches to minimize prediction errors. Numerous machine learning techniques, such as neural networks, support vector machines, random forests, LASSO, Elastic Nets, etc., have been used in the past to meet the requirements mentioned above. However, these techniques individually do not provide significant accuracy in the drug synergy score. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques have been implemented on the drug synergy data. Based on the accuracy of each model, four techniques with high accuracy are selected to develop an ensemble-based machine learning model. These models are Random Forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning (GFS.GCCL), the Adaptive-Network-Based Fuzzy Inference System (ANFIS), and the Dynamic Evolving Neural-Fuzzy Inference System (DENFIS). Ensembling is achieved by evaluating the biased weighted aggregation (i.e., giving more weight to models with higher prediction scores) of the data predicted by the selected models. The proposed and existing machine learning techniques have been evaluated on drug synergy score data. The comparative analysis reveals that the proposed method outperforms the others in terms of accuracy, root mean square error, and coefficient of correlation.
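A minimal sketch of the biased weighted aggregation step (our reading of it, with invented numbers): each selected model's prediction is weighted in proportion to its validation score, so better models count more.

```python
import numpy as np

def weighted_ensemble(predictions, accuracies):
    """Biased weighted aggregation of model predictions.

    predictions : (m, n) array, m models each predicting n synergy scores
    accuracies  : length-m array of validation scores (e.g. R^2)
    """
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()                     # normalize to a convex combination
    return w @ np.asarray(predictions)

# Four models (e.g. Random Forest, GFS.GCCL, ANFIS, DENFIS) on five compounds
preds = np.array([[0.8, 0.2, 0.5, 0.9, 0.1],
                  [0.7, 0.3, 0.6, 0.8, 0.2],
                  [0.9, 0.1, 0.4, 0.9, 0.1],
                  [0.6, 0.4, 0.5, 0.7, 0.3]])
print(weighted_ensemble(preds, accuracies=[0.90, 0.85, 0.88, 0.80]))
```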
Reduced-Drift Virtual Gyro from an Array of Low-Cost Gyros.
Vaccaro, Richard J; Zaki, Ahmed S
2017-02-11
A Kalman filter approach for combining the outputs of an array of high-drift gyros to obtain a virtual lower-drift gyro has been known in the literature for more than a decade. The success of this approach depends on the correlations of the random drift components of the individual gyros. However, no method of estimating these correlations has appeared in the literature. This paper presents an algorithm for obtaining the statistical model for an array of gyros, including the cross-correlations of the individual random drift components. In order to obtain this model, a new statistic, called the "Allan covariance" between two gyros, is introduced. The gyro array model can be used to obtain the Kalman filter-based (KFB) virtual gyro. Instead, we consider a virtual gyro obtained by taking a linear combination of individual gyro outputs. The gyro array model is used to calculate the optimal coefficients, as well as to derive a formula for the drift of the resulting virtual gyro. The drift formula for the optimal linear combination (OLC) virtual gyro is identical to that previously derived for the KFB virtual gyro. Thus, a Kalman filter is not necessary to obtain a minimum drift virtual gyro. The theoretical results of this paper are demonstrated using simulated as well as experimental data. In experimental results with a 28-gyro array, the OLC virtual gyro has a drift spectral density 40 times smaller than that obtained by taking the average of the gyro signals.
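The optimal linear combination can be illustrated with the standard minimum-variance weighting under a sum-to-one constraint (a sketch consistent with the structure described above, not the paper's code); the covariance entries would come from the Allan covariances between gyro pairs.

```python
import numpy as np

def optimal_gyro_weights(C):
    """Minimum-variance linear combination of gyro outputs.

    C is the covariance matrix of the gyros' random drift components
    (off-diagonals estimated from pairwise Allan covariances).
    Weights solve: min w^T C w subject to sum(w) = 1,
    giving w = C^-1 1 / (1^T C^-1 1).
    """
    ones = np.ones(C.shape[0])
    w = np.linalg.solve(C, ones)
    return w / (ones @ w)

# Example: three gyros, two of them positively correlated
C = np.array([[1.0, 0.6, 0.0],
              [0.6, 1.0, 0.0],
              [0.0, 0.0, 1.5]])
w = optimal_gyro_weights(C)
virtual_drift_var = w @ C @ w   # drift variance of the virtual gyro
```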
Kong, Kaimeng; Zhang, Lulu; Huang, Lisu; Tao, Yexuan
2017-05-01
Image-assisted dietary assessment methods are frequently used to record individual eating habits. This study tested the validity of a smartphone-based photographic food recording approach by comparing the results obtained with those of a weighed food record. We also assessed the practicality of the method by using it to measure the energy and nutrient intake of college students. The experiment was implemented in two phases, each lasting 2 weeks. In the first phase, a labelled menu and a photograph database were constructed. The energy and nutrient content of 31 randomly selected dishes in three different portion sizes were then estimated by the photograph-based method and compared with a weighed food record. In the second phase, we combined the smartphone-based photographic method with the WeChat smartphone application and applied this to 120 randomly selected participants to record their energy and nutrient intake. The Pearson correlation coefficients for energy, protein, fat, and carbohydrate content between the weighed and the photographic food record were 0.997, 0.936, 0.996, and 0.999, respectively. Bland-Altman plots showed good agreement between the two methods. The estimated protein, fat, and carbohydrate intake by participants was in accordance with values in the Chinese Residents' Nutrition and Chronic Disease report (2015). Participants expressed satisfaction with the new approach and the compliance rate was 97.5%. The smartphone-based photographic dietary assessment method combined with the WeChat instant messaging application was effective and practical for use by young people.
Random parameter models for accident prediction on two-lane undivided highways in India.
Dinu, R R; Veeraragavan, A
2011-02-01
Generalized linear modeling (GLM), with the assumption of a Poisson or negative binomial error structure, has been widely employed in road accident modeling. A number of explanatory variables related to traffic, road geometry, and environment that contribute to accident occurrence have been identified, and accident prediction models have been proposed. The accident prediction models reported in the literature largely employ the fixed-parameter modeling approach, where the magnitude of influence of an explanatory variable is considered to be fixed for any observation in the population. Similar models have been proposed for Indian highways too, which include additional variables representing traffic composition. The mixed traffic on Indian highways comes with considerable internal variability, ranging from differences in vehicle types to variability in driver behavior. This could result in variability in the effect of explanatory variables on accidents across locations. Random parameter models, which can capture some of this variability, are expected to be more appropriate for the Indian situation. The present study is an attempt to employ random parameter modeling for accident prediction on two-lane undivided rural highways in India. Three years of accident history, from nearly 200 km of highway segments, are used to calibrate and validate the models. The results of the analysis suggest that the model coefficients for traffic volume, the proportions of cars, motorized two-wheelers and trucks in traffic, driveway density, and horizontal and vertical curvature are randomly distributed across locations. The paper concludes with a discussion of the modeling results and the limitations of the present study. Copyright © 2010 Elsevier Ltd. All rights reserved.
Transducer having a coupling coefficient higher than its active material
NASA Technical Reports Server (NTRS)
Lesieutre, George A. (Inventor); Davis, Christopher L. (Inventor)
2001-01-01
A coupling coefficient is a measure of the effectiveness with which a shape-changing material (or a device employing such a material) converts the energy in an imposed signal to useful mechanical energy. Device coupling coefficients are properties of the device and, although related to the material coupling coefficients, are generally different from them. This invention describes a class of devices wherein the apparent coupling coefficient can, in principle, approach 1.0, corresponding to perfect electromechanical energy conversion. The key feature of this class of devices is the use of destabilizing mechanical pre-loads to counter inherent stiffness. The approach is illustrated for piezoelectric and thermoelectrically actuated devices. The invention provides a way to simultaneously increase both displacement and force, distinguishing it from alternatives such as motion amplification, and allows transducer designers to achieve substantial performance gains for actuator and sensor devices.
Calculation of thermal expansion coefficient of glasses based on topological constraint theory
NASA Astrophysics Data System (ADS)
Zeng, Huidan; Ye, Feng; Li, Xiang; Wang, Ling; Yang, Bin; Chen, Jianding; Zhang, Xianghua; Sun, Luyi
2016-10-01
In this work, the thermal expansion behavior and the structural configuration evolution of glasses were studied. The degrees of freedom based on topological constraint theory are correlated with configuration evolution; considering the chemical composition and the configuration change, an analytical equation for calculating the thermal expansion coefficient of glasses from the degrees of freedom was derived. The thermal expansion of typical silicate and chalcogenide glasses was examined by calculating their thermal expansion coefficients (TEC) using the approach stated above. The results showed that this approach is energetically favorable for glass materials and reveals the underlying essence from the viewpoint of configuration entropy. This work establishes a configuration-based methodology for calculating the thermal expansion coefficient of glasses that lack periodic order.
ERIC Educational Resources Information Center
Camporesi, Roberto
2011-01-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of…
The Effect of Acidity Coefficient on Crystallization Behavior of Blast Furnace Slag Fibers
NASA Astrophysics Data System (ADS)
Tian, Tie-Lei; Zhang, Yu-Zhu; Xing, Hong-wei; Li, Jie; Zhang, Zun-Qian
2018-01-01
The chemical structure of mineral wool fibers was investigated using Fourier transform infrared spectroscopy (FTIR). Next, the glass transition temperature and the crystallization temperature of the fibers were studied. Finally, the crystallization kinetics of the fibers was studied. The results show that the chemical bond structure of the fibers becomes more random as the acidity coefficient increases. The crystallization phases of the fibers are mainly melilites, with a few anorthites and diopsides. The growth mechanism of the crystals is three-dimensional. The fibers with an acidity coefficient of 1.2 exhibit the best thermal stability and are the hardest to crystallize, as they have the maximum activation energy of crystallization kinetics.
NASA Technical Reports Server (NTRS)
Claassen, J. P.; Fung, A. K.
1977-01-01
The radar equation for incoherent scenes is derived and scattering coefficients are introduced in a systematic way to account for the complete interaction between the incident wave and the random scene. Intensity (power) and correlation techniques similar to that for coherent targets are proposed to measure all the scattering parameters. The sensitivity of the intensity technique to various practical realizations of the antenna polarization requirements is evaluated by means of computer simulated measurements, conducted with a scattering characteristic similar to that of the sea. It was shown that for scenes satisfying reciprocity one must admit three new cross-correlation scattering coefficients in addition to the commonly measured autocorrelation coefficients.
Identifying Bearing Rotodynamic Coefficients Using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Miller, Brad A.; Howard, Samuel A.
2008-01-01
An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
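As a rough illustration of the approach described above, the sketch below implements an extended Kalman filter that appends unknown stiffness and damping parameters to the state vector of a single-degree-of-freedom oscillator and estimates them from noisy displacement data. The 1-DOF model, the Euler discretization, and all numerical values are our assumptions for illustration; the paper treats a full rotor-bearing system with eight unknown coefficients.

```python
import numpy as np

# Minimal EKF sketch: estimate stiffness k and damping c of a 1-DOF
# oscillator m*x'' + c*x' + k*x = f(t) from noisy displacement data.
# The parameters are appended to the state as nearly constant
# (Gauss-Markov-like) random variables.
m, dt = 1.0, 1e-3
k_true, c_true = 400.0, 2.0
f = lambda t: np.sin(15 * t)                 # imbalance-like excitation

def step_truth(x, v, t):
    a = (f(t) - c_true * v - k_true * x) / m
    return x + dt * v, v + dt * a

def ekf(y, R=1e-8, q_par=1e-2):
    z = np.array([0.0, 0.0, 300.0, 1.0])     # [x, v, k, c] initial guess
    P = np.diag([1e-6, 1e-6, 1e4, 1.0])
    Q = np.diag([0.0, 1e-9, q_par, q_par * 1e-2])
    H = np.array([[1.0, 0.0, 0.0, 0.0]])     # displacement is measured
    for i, yk in enumerate(y):
        x, v, k, c = z
        a = (f(i * dt) - c * v - k * x) / m
        z = np.array([x + dt * v, v + dt * a, k, c])   # predict (Euler)
        F = np.array([[1, dt, 0, 0],                   # Jacobian of predict
                      [-dt * k / m, 1 - dt * c / m, -dt * x / m, -dt * v / m],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                            # update
        K = P @ H.T / S
        z = z + (K * (yk - z[0])).ravel()
        P = (np.eye(4) - K @ H) @ P
    return z

x = v = 0.0
meas = []
for i in range(20000):                       # simulate noisy measurements
    x, v = step_truth(x, v, i * dt)
    meas.append(x + 1e-4 * np.random.randn())
print(ekf(np.array(meas))[2:])               # estimated [k, c]
```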
A novel approach to assess the treatment response using Gaussian random field in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Mengdie; Guo, Ning; Hu, Guangshu
2016-02-15
Purpose: The assessment of early therapeutic response to anticancer therapy is vital for treatment planning and patient management in the clinic. With the development of personalized treatment plans, assessing the early treatment response, especially before any anatomically apparent changes appear, has become an urgent clinical need. Positron emission tomography (PET) imaging serves an important role in clinical oncology for tumor detection, staging, and therapy response assessment. Many studies on therapy response involve interpretation of differences between two PET images, usually in terms of standardized uptake values (SUVs). However, the quantitative accuracy of this measurement is limited. This work proposes a statistically robust approach for therapy response assessment based on Gaussian random field (GRF) theory to provide a statistically more meaningful scale for evaluating therapy effects. Methods: The authors propose a new criterion for therapeutic assessment by incorporating image noise into the traditional SUV method. An analytical method based on approximate expressions of the Fisher information matrix was applied to model the variance of individual pixels in reconstructed images. A zero-mean, unit-variance GRF under the null hypothesis (no response to therapy) was obtained by normalizing each pixel of the post-therapy image with the mean and standard deviation of the pretherapy image. The performance of the proposed method was evaluated by Monte Carlo simulation, where XCAT phantoms (128² pixels) with lesions of various diameters (2-6 mm), multiple tumor-to-background contrasts (3-10), and different changes in intensity (6.25%-30%) were used. The receiver operating characteristic curves and the corresponding areas under the curve were computed for both the proposed method and the traditional methods whose figure of merit is the percentage change of SUVs. A formula for false positive rate (FPR) estimation was developed for the proposed therapy response assessment, utilizing a local-average method based on the random field. The accuracy of the estimation was validated in terms of Euclidean distance and correlation coefficient. Results: It is shown that the performance of therapy response assessment is significantly improved by the introduction of variance, with a higher area under the curve (97.3%) than SUVmean (91.4%) and SUVmax (82.0%). In addition, the FPR estimation serves as a good prediction of the specificity of the proposed method, consistent with the simulation outcome with a correlation coefficient of ∼1. Conclusions: In this work, the authors developed a method to evaluate therapy response from PET images, which were modeled as a Gaussian random field. The digital phantom simulations demonstrated that the proposed method achieved a large reduction in statistical variability by incorporating knowledge of the variance of the original Gaussian random field. The proposed method has the potential to enable prediction of early treatment response and shows promise for application to clinical practice. In future work, the authors will report on the robustness of the estimation theory for application to clinical practice of therapy response evaluation, which pertains to binary discrimination tasks at a fixed location in the image, such as detection of small and weak lesions.
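A minimal sketch of the normalization step described in the Methods, under our own reading of it: each pixel of the post-therapy image is converted to a z-score using the pre-therapy image as the mean and a per-pixel standard deviation from a variance model (which we do not reproduce; here it is simply handed in as an array). All names and values are illustrative.

```python
import numpy as np

# Sketch of the GRF normalization step (our reading of the abstract):
# z = (post - pre) / sigma, where sigma is a per-pixel standard
# deviation from a noise model, e.g. Fisher-information-based. Under
# the null hypothesis of no response, z approximates a zero-mean,
# unit-variance Gaussian random field.
def response_z_map(pre, post, sigma):
    return (post - pre) / np.maximum(sigma, 1e-12)  # guard against /0

rng = np.random.default_rng(0)
truth = np.full((128, 128), 100.0)             # noiseless activity
pre = truth + rng.normal(0, 5, truth.shape)    # pre-therapy scan
post = truth + rng.normal(0, 5, truth.shape)   # no true change
sigma = np.full(truth.shape, 5 * np.sqrt(2))   # std of the difference
z = response_z_map(pre, post, sigma)
print(round(z.mean(), 3), round(z.std(), 3))   # ~0 and ~1 under the null
```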
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in analyzing aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models within the multiscale framework. To assess the benefit of using multiscale measurement error models, we compared the performance of multiscale models with and without measurement error on both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates.
Electromagnetic mixing laws: A supersymmetric approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niez, J.J.
2010-02-15
In this article we address the old problem of finding the effective dielectric constant of materials described either by a local random dielectric constant, or by a set of non-overlapping spherical inclusions randomly dispersed in a host. We use a unified theoretical framework, such that all the most important Electromagnetic Mixing Laws (EML) can be recovered as the first iterative step of a family of results, thus opening the way to future improvements through refinements of the approximation schemes. When the material is described by a set of immersed inclusions characterized by their spatial correlation functions, we exhibit an EML which, being based on a minimal approximation scheme, does not come from the multiple scattering paradigm. It consists of a pure Hori-Yonezawa formula, corrected by a power series in the inclusion density. The coefficients of the latter, which are given as sums of standard diagrams, are recast into electromagnetic quantities whose calculation is numerically tractable thanks to codes available on the web. The methods used and developed in this work are generic and can be applied in a large variety of areas ranging from mechanics to thermodynamics.
Ghodrati, Sajjad; Kandi, Saeideh Gorji; Mohseni, Mohsen
2018-06-01
In recent years, various surface roughness measurement methods have been proposed as alternatives to the commonly used stylus profilometry, which is a low-speed, destructive, and expensive but precise method. In this study, a novel method, called "image profilometry," has been introduced for nondestructive, fast, and low-cost surface roughness measurement of randomly rough metallic samples based on image processing and machine vision. The impacts of influential parameters, such as the image resolution and the filtering approach used to eliminate long-wavelength surface undulations, on the accuracy of the image profilometry results have been comprehensively investigated. Ten surface roughness parameters were measured for the samples using both stylus and image profilometry. Based on the results, the best image resolution was 800 dpi, and the most practical filtering method was Gaussian convolution + cutoff. Under these conditions, the best and worst correlation coefficients (R²) between the stylus and image profilometry results were 0.9892 and 0.9313, respectively. Our results indicated that image profilometry predicted the stylus profilometry results with high accuracy. Consequently, it could be a viable alternative to stylus profilometry, particularly in online applications.
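As a toy illustration of the pipeline above, the sketch below removes long-wavelength waviness from a 1-D profile with a Gaussian filter and evaluates standard amplitude roughness parameters on the residual. The synthetic profile, the cutoff value, and the choice of parameters are our assumptions; the study works on 2-D images and ten roughness parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Sketch of the roughness evaluation implied above (names and values
# are ours): long-wavelength waviness is removed with a Gaussian filter
# (the "Gaussian convolution + cutoff" step) and standard amplitude
# parameters are evaluated on the residual roughness profile.
def roughness_params(profile, cutoff_sigma):
    waviness = gaussian_filter1d(profile, sigma=cutoff_sigma)
    r = profile - waviness            # roughness profile
    Ra = np.mean(np.abs(r))           # arithmetic mean deviation
    Rq = np.sqrt(np.mean(r ** 2))     # root-mean-square roughness
    Rt = r.max() - r.min()            # total height of the profile
    return Ra, Rq, Rt

rng = np.random.default_rng(6)
x = np.linspace(0.0, 10.0, 4000)
profile = 0.5 * np.sin(0.3 * x) + 0.05 * rng.standard_normal(x.size)
print(roughness_params(profile, cutoff_sigma=200))
```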
Genetic diversity of popcorn genotypes using molecular analysis.
Resh, F S; Scapim, C A; Mangolin, C A; Machado, M F P S; do Amaral, A T; Ramos, H C C; Vivas, M
2015-08-19
In this study, we analyzed dominant molecular markers to estimate the genetic divergence of 26 popcorn genotypes and evaluate whether using various dissimilarity coefficients with these dominant markers influences the results of cluster analysis. Fifteen random amplification of polymorphic DNA primers produced 157 amplified fragments, of which 65 were monomorphic and 92 were polymorphic. To calculate the genetic distances among the 26 genotypes, the complements of the Jaccard, Dice, and Rogers and Tanimoto similarity coefficients were used. A matrix of Dij values (dissimilarity matrix) was constructed, from which the genetic distances among genotypes were represented in a more simplified manner as a dendrogram generated using the unweighted pair-group method with arithmetic average. Clusters determined by molecular analysis generally did not group material from the same parental origin together. The largest genetic distance was between varieties 17 (UNB-2) and 18 (PA-091). In the identification of genotypes with the smallest genetic distance, the 3 coefficients showed no agreement. The 3 dissimilarity coefficients showed no major differences among their grouping patterns because agreement in determining the genotypes with large, medium, and small genetic distances was high. The largest genetic distances were observed for the Rogers and Tanimoto dissimilarity coefficient (0.74), followed by the Jaccard coefficient (0.65) and the Dice coefficient (0.48). The 3 coefficients showed similar estimations for the cophenetic correlation coefficient. Correlations among the matrices generated using the 3 coefficients were positive and had high magnitudes, reflecting strong agreement among the results obtained using the 3 evaluated dissimilarity coefficients.
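For concreteness, here is a minimal sketch of the three dissimilarity coefficients named above, computed from binary band-presence data as the complements of the corresponding similarity coefficients. The matrix shown is random stand-in data, not the study's markers.

```python
import numpy as np

# Complements of the Jaccard, Dice, and Rogers-Tanimoto similarity
# coefficients for two binary (band presence/absence) marker profiles.
def dissimilarities(x, y):
    a = np.sum((x == 1) & (y == 1))   # bands shared by both genotypes
    b = np.sum((x == 1) & (y == 0))
    c = np.sum((x == 0) & (y == 1))
    d = np.sum((x == 0) & (y == 0))   # bands absent in both
    jaccard = 1 - a / (a + b + c)
    dice = 1 - 2 * a / (2 * a + b + c)
    rogers_tanimoto = 1 - (a + d) / (a + d + 2 * (b + c))
    return jaccard, dice, rogers_tanimoto

rng = np.random.default_rng(1)
markers = rng.integers(0, 2, size=(26, 157))   # 26 genotypes x 157 bands
print(dissimilarities(markers[0], markers[1]))
```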
Asymptotic Solutions for Optical Properties of Large Particles with Strong Absorption
NASA Technical Reports Server (NTRS)
Yang, Ping; Gao, Bo-Cai; Baum, Bryan A.; Hu, Yong X.; Wiscombe, Warren J.; Mishchenko, Michael I.; Winker, Dave M.; Nasiri, Shaima L.; Einaudi, Franco (Technical Monitor)
2000-01-01
For scattering calculations involving nonspherical particles such as ice crystals, we show that the transverse wave condition is not applicable to the refracted electromagnetic wave in the context of geometric optics when absorption is involved. Either the TM wave condition (i.e., where the magnetic field of the refracted wave is transverse with respect to the wave direction) or the TE wave condition (i.e., where the electric field is transverse with respect to the propagating direction of the wave) may be assumed for the refracted wave in an absorbing medium to locally satisfy the electromagnetic boundary condition in the ray tracing calculation. The wave mode assumed for the refracted wave affects both the reflection and refraction coefficients. As a result, a nonunique solution for these coefficients is derived from the electromagnetic boundary condition. In this study we have identified the appropriate solution for the Fresnel reflection/refraction coefficients in light scattering calculation based on the ray tracing technique. We present the 3 x 2 refraction or transmission matrix that completely accounts for the inhomogeneity of the refracted wave in an absorbing medium. Using the Fresnel coefficients for an absorbing medium, we derive an asymptotic solution in an analytical format for the scattering properties of a general polyhedral particle. Numerical results are presented for hexagonal plates and columns with both preferred and random orientations. The asymptotic theory can produce reasonable accuracy in the phase function calculations in the infrared window region (wavelengths near 10 micron) if the particle size (in diameter) is on the order of 40 micron or larger. However, since strong absorption is assumed in the computation of the single-scattering albedo in the asymptotic theory, the single scattering albedo does not change with variation of the particle size. As a result, the asymptotic theory can lead to substantial errors in the computation of single-scattering albedo for small and moderate particle sizes. However, from comparison of the asymptotic results with the FDTD solution, it is expected that a convergence between the FDTD results and the asymptotic theory results can be reached when the particle size approaches 200 micron. We show that the phase function at side-scattering and backscattering angles is insensitive to particle shape if the random orientation condition is assumed. However, if preferred orientations are assumed for particles, the phase function has a strong dependence on scattering azimuthal angle. The single-scattering albedo also shows very strong dependence on the inclination angle of incident radiation with respect to the rotating axis for the preferred particle orientations.
ERIC Educational Resources Information Center
Weber, Deborah A.
Greater understanding and use of confidence intervals is central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…
Davies, H T; Leslie, G; Pereira, S M; Webb, S A R
2008-03-01
To determine if circuit life is influenced by a higher pre-dilution volume used in CVVH when compared with a lower pre-dilution volume approach in CVVHDF. A comparative crossover study. Cases were randomized to receive either CVVH or CVVHDF followed by the alternative treatment. All patients ≥18 years of age who required CRRT while in the ICU were eligible to participate, but were excluded if coagulopathic, thrombocytopenic or unable to receive heparin. Based on intention-to-treat, 45 patients were randomized to receive either CVVH or CVVHDF followed by the alternative treatment. A 24-bed, tertiary, medical and surgical adult intensive care unit (ICU). Blood flow rate, vascular access device and insertion site, hemofilter, anticoagulation and machine hardware were standardized. An ultrafiltrate dose of 35 ml/kg/h delivered pre-filter was used for CVVH. A fixed pre-dilution volume of 600 ml/h with a dialysate dose of 1 L was used for CVVHDF. Of the 45 participants, 31 patients received CVVH or CVVHDF followed by the alternative technique. There was a significant increase in circuit life in favor of CVVHDF (median = 16 h 5 min, range = 40 h 23 min) compared with CVVH (median = 6 h 35 min, range = 30 h 45 min). A Mann-Whitney U test was performed to compare circuit life between the two different CRRT modes (Z = -3.478, p < 0.001). Measurements of circuit life on the 93 circuits which survived to clotting (50 CVVH and 43 CVVHDF) were log-transformed prior to undertaking a standard multiple regression analysis. None of the independent variables - activated partial thromboplastin time (aPTT), platelet count, heparin dose, patient hematocrit or urea - had a partial correlation coefficient >0.09 (coefficient of determination = 0.117) or a linear relationship which could be associated with circuit life (p = 0.228). Pre-diluted CVVHDF appeared to have a longer circuit life when compared to high-volume pre-diluted CVVH. The choice of CRRT mode may be an important independent determinant of circuit life.
Passive microwave remote sensing of an anisotropic random-medium layer
NASA Technical Reports Server (NTRS)
Lee, J. K.; Kong, J. A.
1985-01-01
The principle of reciprocity is invoked to calculate the brightness temperatures for passive microwave remote sensing of a two-layer anisotropic random medium. The bistatic scattering coefficients are first computed with the Born approximation and then integrated over the upper hemisphere and subtracted from unity, in order to obtain the emissivity of the random-medium layer. The theoretical results are illustrated by plotting the emissivities as functions of viewing angles and polarizations. They are used to interpret remote sensing data obtained from vegetation canopies where the anisotropic random-medium model applies. Field measurements with corn stalks arranged in various configurations with preferred azimuthal directions are successfully interpreted with this model.
Probability of stress-corrosion fracture under random loading.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
A method is developed for predicting the probability of stress-corrosion fracture of structures under random loadings. The formulation is based on the cumulative damage hypothesis and the experimentally determined stress-corrosion characteristics. Under both stationary and nonstationary random loadings, the mean value and the variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy. It is shown that, under stationary random loadings, the standard deviation of the cumulative damage increases in proportion to the square root of time, while the coefficient of variation (dispersion) decreases in inverse proportion to the square root of time. Numerical examples are worked out to illustrate the general results.
Diagnosing cysts with correlation coefficient images from 2-dimensional freehand elastography.
Booi, Rebecca C; Carson, Paul L; O'Donnell, Matthew; Richards, Michael S; Rubin, Jonathan M
2007-09-01
We compared the diagnostic potential of using correlation coefficient images versus elastograms from 2-dimensional (2D) freehand elastography to characterize breast cysts. In this preliminary study, which was approved by the Institutional Review Board and compliant with the Health Insurance Portability and Accountability Act, we imaged 4 consecutive human subjects (4 cysts, 1 biopsy-verified benign breast parenchyma) with freehand 2D elastography. Data were processed offline with conventional 2D phase-sensitive speckle-tracking algorithms. The correlation coefficient in the cyst and surrounding tissue was calculated, and appearances of the cysts in the correlation coefficient images and elastograms were compared. The correlation coefficient in the cysts was considerably lower (14%-37%) than in the surrounding tissue because of the lack of sufficient speckle in the cysts, as well as the prominence of random noise, reverberations, and clutter, which decorrelated quickly. Thus, the cysts were visible in all correlation coefficient images. In contrast, the elastograms associated with these cysts each had different elastographic patterns. The solid mass in this study did not have the same high decorrelation rate as the cysts, having a correlation coefficient only 2.1% lower than that of surrounding tissue. Correlation coefficient images may produce a more direct, reliable, and consistent method for characterizing cysts than elastograms.
Stochastic dynamic analysis of marine risers considering Gaussian system uncertainties
NASA Astrophysics Data System (ADS)
Ni, Pinghe; Li, Jun; Hao, Hong; Xia, Yong
2018-03-01
This paper performs stochastic dynamic response analysis of marine risers with material uncertainties, i.e., in the mass density and elastic modulus, by using the Stochastic Finite Element Method (SFEM) and a model reduction technique. These uncertainties are assumed to have Gaussian distributions. The random mass density and elastic modulus are represented by using the Karhunen-Loève (KL) expansion. The Polynomial Chaos (PC) expansion is adopted to represent the vibration response because the covariance of the output is unknown. Model reduction based on the Iterated Improved Reduced System (IIRS) technique is applied to eliminate the PC coefficients of the slave degrees of freedom and reduce the dimension of the stochastic system. Monte Carlo Simulation (MCS) is conducted to obtain the reference response statistics. Two numerical examples are studied in this paper. The response statistics from the proposed approach are compared with those from MCS. The computational time is significantly reduced while accuracy is maintained. The results demonstrate the efficiency of the proposed approach for stochastic dynamic response analysis of marine risers.
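To make the first step concrete, here is a minimal sketch of a truncated Karhunen-Loève expansion of a 1-D Gaussian random field, such as an elastic modulus varying along the riser. The exponential covariance kernel, the discretized eigendecomposition, and all parameter values are our assumptions.

```python
import numpy as np

# Truncated KL expansion of a 1-D Gaussian field with exponential
# covariance: field = mean * (1 + sum_i sqrt(lam_i) * xi_i * phi_i).
n, L, corr_len, sigma = 200, 100.0, 20.0, 0.1   # grid, length, corr., rel. std
s = np.linspace(0, L, n)
C = sigma**2 * np.exp(-np.abs(s[:, None] - s[None, :]) / corr_len)

lam, phi = np.linalg.eigh(C)            # eigen-decomposition of covariance
lam, phi = lam[::-1], phi[:, ::-1]      # sort eigenpairs descending
m = 10                                  # keep the m largest KL modes

def sample_field(mean=1.0, rng=np.random.default_rng()):
    xi = rng.standard_normal(m)         # independent N(0,1) KL coefficients
    return mean * (1.0 + phi[:, :m] @ (np.sqrt(lam[:m]) * xi))

E = sample_field()                      # one realization of the modulus field
print(E.min(), E.max())
```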
Caustics, counting maps and semi-classical asymptotics
NASA Astrophysics Data System (ADS)
Ercolani, N. M.
2011-02-01
This paper develops a deeper understanding of the structure and combinatorial significance of the partition function for Hermitian random matrices. The coefficients of the large N expansion of the logarithm of this partition function, also known as the genus expansion (and its derivatives), are generating functions for a variety of graphical enumeration problems. The main results are to prove that these generating functions are, in fact, specific rational functions of a distinguished irrational (algebraic) function, z0(t). This distinguished function is itself the generating function for the Catalan numbers (or generalized Catalan numbers, depending on the choice of weight of the parameter t). It is also a solution of the inviscid Burgers equation for certain initial data. The shock formation, or caustic, of the Burgers characteristic solution is directly related to the poles of the rational forms of the generating functions. As an intriguing application, one gains new insights into the relation between certain derivatives of the genus expansion, in a double-scaling limit, and the asymptotic expansion of the first Painlevé transcendent. This provides a precise expression of the Painlevé asymptotic coefficients directly in terms of the coefficients of the partial fractions expansion of the rational form of the generating functions established in this paper. Moreover, these insights point towards a more general program relating the first Painlevé hierarchy to the higher order structure of the double-scaling limit through the specific rational structure of generating functions in the genus expansion. The paper closes with a discussion of the relation of this work to recent developments in understanding the asymptotics of graphical enumeration. As a by-product, these results also yield new information about the asymptotics of recurrence coefficients for orthogonal polynomials with respect to exponential weights, the calculation of correlation functions for certain tied random walks on a 1D lattice, and the large time asymptotics of random matrix partition functions.
NASA Astrophysics Data System (ADS)
Han, Hao; Zhang, Hao; Wei, Xinzhou; Moore, William; Liang, Zhengrong
2016-03-01
In this paper, we proposed a low-dose computed tomography (LdCT) image reconstruction method with the help of prior knowledge learning from previous high-quality or normal-dose CT (NdCT) scans. The well-established statistical penalized weighted least squares (PWLS) algorithm was adopted for image reconstruction, where the penalty term was formulated by a texture-based Gaussian Markov random field (gMRF) model. The NdCT scan was firstly segmented into different tissue types by a feature vector quantization (FVQ) approach. Then for each tissue type, a set of tissue-specific coefficients for the gMRF penalty was statistically learnt from the NdCT image via multiple-linear regression analysis. We also proposed a scheme to adaptively select the order of gMRF model for coefficients prediction. The tissue-specific gMRF patterns learnt from the NdCT image were finally used to form an adaptive MRF penalty for the PWLS reconstruction of LdCT image. The proposed texture-adaptive PWLS image reconstruction algorithm was shown to be more effective to preserve image textures than the conventional PWLS image reconstruction algorithm, and we further demonstrated the gain of high-order MRF modeling for texture-preserved LdCT PWLS image reconstruction.
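For readers unfamiliar with PWLS, the generic objective has the form below, where the penalty weights carry the learned texture information. The symbols are ours, and the exact form of the texture-based gMRF penalty in the paper may differ.

```latex
\hat{x} = \arg\min_{x}\; (y - Ax)^{\mathsf{T}} W (y - Ax)
          \;+\; \beta \sum_{j}\sum_{k \in \mathcal{N}_j} w_{jk}\,(x_j - x_k)^2
```

Here y is the measured sinogram, A the system matrix, W a diagonal matrix of inverse data variances, \mathcal{N}_j the MRF neighborhood of pixel j, and w_{jk} the tissue-specific coefficients learned from the NdCT scan via regression.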
Balagué, Natàlia; González, Jacob; Javierre, Casimiro; Hristovski, Robert; Aragonés, Daniel; Álamo, Juan; Niño, Oscar; Ventura, Josep L.
2016-01-01
Our purpose was to study the effects of different training modalities and detraining on cardiorespiratory coordination (CRC). Thirty-two young males were randomly assigned to four training groups: aerobic (AT), resistance (RT), aerobic plus resistance (AT + RT), and control (C). They were assessed before training, after training (6 weeks) and after detraining (3 weeks) by means of a graded maximal test. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables was performed to evaluate CRC. The first PC (PC1) coefficient of congruence in the three conditions (before training, after training and after detraining) was compared between groups. Two PCs were identified in 81% of participants before the training period. After this period the number of PCs and the projection of the selected variables onto them changed only in the groups subject to a training programme. The PC1 coefficient of congruence was significantly lower in the training groups compared with the C group [H(3, N=32) = 11.28; p = 0.01]. In conclusion, training produced changes in CRC, reflected by the change in the number of PCs and the congruence values of PC1. These changes may be more sensitive than the usually explored cardiorespiratory reserve, and they probably precede it. PMID:26903884
Modelling Furrow Irrigation-Induced Erosion on a Sandy Loam Soil in Samaru, Northern Nigeria
Dibal, Jibrin M.; Igbadun, H. E.; Ramalan, A. A.; Mudiare, O. J.
2014-01-01
Assessment of soil erosion and sediment yield in furrow irrigation is limited in Samaru-Zaria. Data were collected in 2009 and 2010 and used to develop a dimensionless model for predicting furrow irrigation-induced erosion (FIIE) using the dimensional analysis approach, considering stream size, furrow length, furrow width, soil infiltration rate, hydraulic shear stress, soil erodibility, and time of flow of water in the furrows as the building components. One-liter water-sediment samples were collected from the furrows during irrigations, from which sediment concentrations and soil erosion per furrow were calculated. Stream sizes Q (2.5, 1.5, and 0.5 l/s), furrow lengths X (90 and 45 m), and furrow widths W (0.75 and 0.9 m) constituted the experimental factors, randomized in a split-plot design with four replications. Water flow into and out of the furrows was measured using cutthroat flumes. The model produced reasonable predictions relative to field measurements, with a coefficient of determination (R²) in the neighborhood of 0.8, a model prediction efficiency (NSE) of 0.7000, a high index of agreement (0.9408), and a low coefficient of variability (0.4121). The model is most sensitive to water stream size. The variables in the model are easily measurable, which makes it better and easily adoptable. PMID:27471748
Li, Min; Qi, Tao; Bernabé, Yves; Zhao, Jinzhou; Wang, Ying; Wang, Dong; Wang, Zheming
2018-02-28
We used a time domain random walk approach to simulate passive solute transport in networks. In individual pores, solute transport was modeled as a combination of Poiseuille flow and Taylor dispersion. The solute plume data were interpreted via the method of moments. Analysis of the first and second moments showed that the longitudinal dispersivity increased with increasing coefficient of variation of the pore radii CV and decreasing pore coordination number Z. The third moment was negative and its magnitude grew linearly with time, meaning that the simulated dispersion was intrinsically non-Fickian. The statistics of the Eulerian mean fluid velocities, the Taylor dispersion coefficients, and the transit times τ were very complex and strongly affected by CV and Z. In particular, the probability of occurrence of negative velocities grew with increasing CV and decreasing Z. Hence, backward and forward transit times had to be distinguished. The high-τ branch of the transit-time probability curves had a power-law form associated with non-Fickian behavior. However, the exponent was insensitive to pore connectivity, although variations of Z affected the third moment growth. Thus, we conclude that both the low- and high-τ branches played a role in generating the observed non-Fickian behavior.
Beaufrère, Hugues; Pariaut, Romain; Rodriguez, Daniel; Nevarez, Javier G; Tully, Thomas N
2012-10-01
To assess the agreement and reliability of cardiac measurements obtained with 3 echocardiographic techniques in anesthetized red-tailed hawks (Buteo jamaicensis). 10 red-tailed hawks. Transcoelomic, contrast transcoelomic, and transesophageal echocardiographic evaluations of the hawks were performed, and cineloops of imaging planes were recorded. Three observers performed echocardiographic measurements of cardiac variables 3 times on 3 days. The order in which hawks were assessed and echocardiographic techniques were used was randomized. Results were analyzed with linear mixed modeling, agreement was assessed with intraclass correlation coefficients, and variation was estimated with coefficients of variation. Significant differences were evident among the 3 echocardiographic methods for most measurements, and the agreement among findings was generally low. Interobserver agreement was generally low to medium. Intraobserver agreement was generally medium to high. Overall, better agreement was achieved for the left ventricular measurements and for the transesophageal approach than for other measurements and techniques. Echocardiographic measurements in hawks were not reliable, except when the left ventricle was measured by the same observer. Furthermore, cardiac morphometric measurements may not be clinically important. When measurements are required, one needs to consider that follow-up measurements should be performed by the same echocardiographer and should show at least a 20% difference from initial measurements to be confident that any difference is genuine.
NASA Astrophysics Data System (ADS)
Veselovskii, I.; Goloub, P.; Podvin, T.; Tanre, D.; Ansmann, A.; Korenskiy, M.; Borovoi, A.; Hu, Q.; Whiteman, D. N.
2017-11-01
The existing models predict that corner reflection (CR) of laser radiation by simple ice crystals of perfect shape, such as hexagonal columns or plates, can provide a significant contribution to ice cloud backscattering. However, in real clouds the CR effect may be suppressed by crystal deformation and surface roughness. In contrast to the extinction coefficient, which is spectrally independent, consideration of the diffraction associated with CR results in a spectral dependence of the backscattering coefficient. Thus, by measuring the spectral dependence of the cloud backscattering coefficient, the contribution of CR can be identified. The paper presents the results of profiling the backscattering coefficient (β) and particle depolarization ratio (δ) of ice and mixed-phase clouds over West Africa by means of a two-wavelength polarization Mie-Raman lidar operated at 355 nm and 532 nm during the SHADOW field campaign. The lidar observations were performed at a slant angle of 43 degrees off zenith, so that CR from both randomly oriented crystals and oriented plates could be analyzed. For most of the observations, the cloud backscatter color ratio β355/β532 was close to 1.0, and no spectral features that might indicate the presence of CR from randomly oriented crystals were revealed. Still, in two measurement sessions we observed an increase of the backscatter color ratio to a value of nearly 1.3, simultaneously with a decrease of the spectral depolarization ratio δ355/δ532 from 1.0 to 0.8, inside layers containing precipitating ice crystals. We attribute these changes in optical properties to corner reflections by horizontally oriented ice plates.
Switching theory-based steganographic system for JPEG images
NASA Astrophysics Data System (ADS)
Cherukuri, Ravindranath C.; Agaian, Sos S.
2007-04-01
Cellular communications constitute a significant portion of the global telecommunications market, and the need for secured communication over mobile platforms has therefore increased rapidly. Steganography, the art of hiding critical data in an innocuous signal, provides an answer to this need. JPEG is one of the most commonly used formats for storing and transmitting images on the web, and pictures captured by mobile cameras are mostly in JPEG format. In this article, we introduce a switching theory based steganographic system for JPEG images applicable to both mobile and computer platforms. The proposed algorithm uses the fact that the energy distribution among the quantized AC coefficients varies from block to block and coefficient to coefficient. Existing approaches are effective with a subset of these coefficients but become ineffective when employed over all of them. We therefore propose an approach that treats each set of AC coefficients within a different framework, enhancing the performance of the method. The proposed system offers high capacity and embedding efficiency simultaneously while withstanding simple statistical attacks. In addition, the embedded information can be retrieved without prior knowledge of the cover image. Based on simulation results, the proposed method demonstrates an improved embedding capacity over existing algorithms while maintaining a high embedding efficiency and preserving the statistics of the JPEG image after hiding information.
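To situate the domain such algorithms work in, here is a deliberately generic sketch of LSB embedding in the nonzero quantized AC coefficients of an 8x8 block. This is not the switching-theory scheme of the article, whose block- and coefficient-dependent rules are not reproduced here; the block values and payload are made up.

```python
import numpy as np

# Generic illustration only: hide bits in the least significant bits of
# nonzero quantized AC coefficients (index 0 is the DC coefficient).
# Zeros are skipped to limit distortion; negatives are handled in
# sign-magnitude form.
def embed(block, bits):
    out = block.copy()
    flat = out.ravel()
    j = 0
    for i in range(1, flat.size):
        if flat[i] != 0 and j < len(bits):
            flat[i] = (flat[i] & ~1) | bits[j] if flat[i] > 0 else \
                      -((-flat[i] & ~1) | bits[j])
            j += 1
    return out, j

block = np.array([[12, -3, 2, 0, 0, 0, 0, 0],
                  [ 5,  1, 0, 0, 0, 0, 0, 0],
                  [-2,  0, 0, 0, 0, 0, 0, 0]] + [[0] * 8] * 5, dtype=int)
stego, n = embed(block, [1, 0, 1, 1, 0])
print(n, "bits embedded")
print(stego)
```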
General Series Solutions for Stresses and Displacements in an Inner-fixed Ring
NASA Astrophysics Data System (ADS)
Jiao, Yongshu; Liu, Shuo; Qi, Dexuan
2018-03-01
A general series solution approach is provided to obtain the stress and displacement fields in an inner-fixed ring. After choosing an Airy stress function in series form, the stresses are expressed in terms of an infinite set of coefficients. Displacements are obtained by integrating the geometric equations. For an inner-fixed ring, the arbitrary loads acting on the outer edge are expanded into two sets of Fourier series, and the zero-displacement boundary conditions on the inner surface are imposed. The stress (and displacement) coefficients are then expressed in terms of the loading coefficients. A numerical example shows the validity of this approach.
Semi-automatic aircraft control system
NASA Technical Reports Server (NTRS)
Gilson, Richard D. (Inventor)
1978-01-01
A flight control type system which provides a tactile readout to the hand of a pilot for directing elevator control during both approach to flare-out and departure maneuvers. For altitudes above flare-out, the system sums the instantaneous coefficient of lift signals of a lift transducer with a generated signal representing ideal coefficient of lift for approach to flare-out, i.e., a value of about 30% below stall. Error signals resulting from the summation are read out by the noted tactile device. Below flare altitude, an altitude responsive variation is summed with the signal representing ideal coefficient of lift to provide error signal readout.
NASA Astrophysics Data System (ADS)
Gryanik, Vladimir M.; Lüpkes, Christof
2018-02-01
In climate and weather prediction models the near-surface turbulent fluxes of heat and momentum and the related transfer coefficients are usually parametrized on the basis of Monin-Obukhov similarity theory (MOST). To avoid the iteration required for the numerical solution of the MOST equations, many models apply parametrizations of the transfer coefficients based on an approach relating these coefficients to the bulk Richardson number Rib. However, the parametrizations that are presently used in most climate models are valid only for weaker stability and larger surface roughnesses than those documented during the Surface Heat Budget of the Arctic Ocean campaign (SHEBA). The latter delivered a well-accepted set of turbulence data in the stable surface layer over polar sea ice. Using stability functions based on the SHEBA data, we solve the MOST equations applying a new semi-analytic approach that results in transfer coefficients as functions of Rib and the roughness lengths for momentum and heat. It is shown that the new coefficients reproduce the coefficients obtained by the numerical iterative method with good accuracy in the most relevant range of stability and roughness lengths. For small Rib, the new bulk transfer coefficients are similar to the traditional coefficients, but for large Rib they are much smaller than the currently used coefficients. Finally, a possible adjustment of the latter and the implementation of the newly proposed parametrizations in models are discussed.
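To make the role of the iteration concrete, here is a minimal sketch of the classical iterative route that such non-iterative parametrizations replace. The log-linear stability functions ψ = -5ζ are a textbook placeholder, not the SHEBA-based functions used in the paper, and the simplified Rib relation neglects the ψ(z0/L) correction terms.

```python
import numpy as np

# Iterative MOST solution (sketch): given the bulk Richardson number
# Rib, fixed-point iterate the stability parameter zeta = z/L, then
# evaluate the bulk transfer coefficients for momentum and heat.
kappa = 0.4                               # von Karman constant

def psi_m(zeta): return -5.0 * zeta       # placeholder stability functions
def psi_h(zeta): return -5.0 * zeta

def transfer_coefficients(Rib, z, z0m, z0h, n_iter=50):
    zeta = Rib * 10.0                     # crude first guess
    for _ in range(n_iter):
        fm = np.log(z / z0m) - psi_m(zeta)
        fh = np.log(z / z0h) - psi_h(zeta)
        zeta = Rib * fm**2 / fh           # simplified Rib-zeta relation
    Cd = kappa**2 / fm**2                 # momentum transfer coefficient
    Ch = kappa**2 / (fm * fh)             # heat transfer coefficient
    return Cd, Ch

print(transfer_coefficients(Rib=0.1, z=10.0, z0m=1e-3, z0h=1e-4))
```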
Investment portfolio of a pension fund: Stochastic model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bosch-Princep, M.; Fontanals-Albiol, H.
1994-12-31
This paper presents a stochastic programming model that aims at obtaining the optimal investment portfolio of a pension fund. The model has been designed bearing in mind the liabilities of the fund to its members. The essential characteristic of the objective function and the constraints is the randomness of the coefficients and of the right-hand side of the constraints, so techniques of stochastic mathematical programming are necessary to obtain information about the amount of money that should be assigned to each sort of investment. It is important to know the attitude towards risk of the person who has to take the decisions. The model incorporates the relations between the coefficients of the objective function and the constraints of each period of the temporal horizon through linear and discrete random processes. Likewise, it includes hypotheses related to Spanish law concerning pension funds.
ERIC Educational Resources Information Center
Camporesi, Roberto
2016-01-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as…
Liao, Fuyuan; Jan, Yih-Kuen
2012-06-01
This paper presents a recurrence network approach for the analysis of skin blood flow dynamics in response to loading pressure. Recurrence is a fundamental property of many dynamical systems, which can be explored in phase spaces constructed from observational time series. A visualization tool of recurrence analysis called the recurrence plot (RP) has proved highly effective for detecting transitions in the dynamics of a system. However, it has been found that delay embedding can produce spurious structures in RPs. Network-based concepts have recently been applied to the analysis of nonlinear time series. We demonstrate that time series with different types of dynamics exhibit distinct global clustering coefficients and distributions of local clustering coefficients, and that the global clustering coefficient is robust to the embedding parameters. We applied the approach to study the response of skin blood flow oscillations (BFO) to loading pressure. The results showed that the global clustering coefficients of BFO significantly decreased in response to loading pressure (p<0.01). Moreover, surrogate tests indicated that this decrease was associated with a loss of nonlinearity of BFO. Our results suggest that the recurrence network approach can practically quantify the nonlinear dynamics of BFO.
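A minimal sketch of the statistic discussed above, under our own simplifications: a recurrence matrix is built from a raw (unembedded) time series, reinterpreted as the adjacency matrix of an undirected network, and the global clustering coefficient is taken as the mean of the local coefficients. The threshold choice and the omission of delay embedding are assumptions.

```python
import numpy as np

# Recurrence network clustering (sketch): two time points are linked
# if their values are closer than eps; the global clustering
# coefficient is the average local (Watts-Strogatz) coefficient.
def recurrence_network_clustering(x, eps):
    d = np.abs(x[:, None] - x[None, :])      # distance matrix, no embedding
    A = (d < eps).astype(float)
    np.fill_diagonal(A, 0.0)                 # no self-loops
    deg = A.sum(axis=1)
    triangles = np.diag(A @ A @ A) / 2.0     # closed triangles per node
    possible = deg * (deg - 1) / 2.0
    local = np.divide(triangles, possible,
                      out=np.zeros_like(triangles), where=possible > 0)
    return local.mean()

rng = np.random.default_rng(2)
noise = rng.standard_normal(500)
periodic = np.sin(np.linspace(0, 40 * np.pi, 500))
print(recurrence_network_clustering(noise, 0.2),
      recurrence_network_clustering(periodic, 0.2))
```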
An indirect approach to the extensive calculation of relationship coefficients
Colleau, Jean-Jacques
2002-01-01
A method was described for calculating population statistics on relationship coefficients without using corresponding individual data. It relied on the structure of the inverse of the numerator relationship matrix between individuals under investigation and ancestors. Computation times were observed on simulated populations and were compared to those incurred with a conventional direct approach. The indirect approach turned out to be very efficient for multiplying the relationship matrix corresponding to planned matings (full design) by any vector. Efficiency was generally still good or very good for calculating statistics on these simulated populations. An extreme implementation of the method is the calculation of inbreeding coefficients themselves. Relative performances of the indirect method were good except when many full-sibs during many generations existed in the population. PMID:12270102
Interpreting Regression Results: beta Weights and Structure Coefficients are Both Important.
ERIC Educational Resources Information Center
Thompson, Bruce
Various realizations have led to less frequent use of the "OVA" methods (analysis of variance--ANOVA--among others) and to more frequent use of general linear model approaches such as regression. However, too few researchers understand all the various coefficients produced in regression. This paper explains these coefficients and their…
Bayesian Meta-Analysis of Cronbach's Coefficient Alpha to Evaluate Informative Hypotheses
ERIC Educational Resources Information Center
Okada, Kensuke
2015-01-01
This paper proposes a new method to evaluate informative hypotheses for meta-analysis of Cronbach's coefficient alpha using a Bayesian approach. The coefficient alpha is one of the most widely used reliability indices. In meta-analyses of reliability, researchers typically form specific informative hypotheses beforehand, such as "alpha of…
Scavenging and recombination kinetics in a radiation spur: The successive ordered scavenging events
NASA Astrophysics Data System (ADS)
Al-Samra, Eyad H.; Green, Nicholas J. B.
2018-03-01
This study describes stochastic models to investigate the successive ordered scavenging events in a spur of four radicals, a model system based on a radiation spur. Three simulation models have been developed to obtain the probabilities of the ordered scavenging events: (i) a Monte Carlo random flight (RF) model, (ii) hybrid simulations in which the reaction rate coefficient is used to generate scavenging times for the radicals and (iii) the independent reaction times (IRT) method. The results of these simulations are found to be in agreement with one another. In addition, a detailed master equation treatment is also presented, and used to extract simulated rate coefficients of the ordered scavenging reactions from the RF simulations. These rate coefficients are transient, the rate coefficients obtained for subsequent reactions are effectively equal, and in reasonable agreement with the simple correction for competition effects that has recently been proposed.
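A toy sketch in the spirit of the hybrid model (ii) above: each of the four radicals is assigned an exponential scavenging time generated from a single pseudo-first-order rate, and the successive ordered scavenging events are read off as order statistics. Recombination is omitted and the rate value is arbitrary, so this illustrates only the ordering step, not the full competition treated in the paper.

```python
import numpy as np

# Ordered scavenging events (sketch): exponential scavenging times for
# four radicals per spur; sorting each spur gives the 1st..4th events.
rng = np.random.default_rng(3)
k_scav = 1e7                   # pseudo-first-order scavenging rate, s^-1
n_spur, n_rad = 100_000, 4

t = rng.exponential(1.0 / k_scav, size=(n_spur, n_rad))
t.sort(axis=1)                 # columns: 1st, 2nd, 3rd, 4th ordered events
print(t.mean(axis=0))          # mean time of each successive event
```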
Modal sound transmission loss of a single leaf panel: Asymptotic solutions.
Wang, Chong
2015-12-01
In a previously published paper [C. Wang, J. Acoust. Soc. Am. 137(6), 3514-3522 (2015)], the modal sound transmission coefficients of a single leaf panel were discussed with regard to the inter-modal coupling effects. By incorporating such effect into the equivalent modal radiation impedance, which is directly related to the modal sound transmission coefficient of each mode, the overall sound transmission loss for both normal and randomized sound incidences was computed through a simple modal superposition. Benefiting from the analytical expressions of the equivalent modal impedance and modal transmission coefficients, in this paper, behaviors of modal sound transmission coefficients in several typical frequency ranges are discussed in detail. Asymptotic solutions are also given for the panels with relatively low bending stiffnesses, for which the sound transmission loss has been assumed to follow the mass law of a limp panel. Results are also compared to numerical analysis and the renowned mass law theories.
Plassmann, Merle M; Tengstrand, Erik; Åberg, K Magnus; Benskin, Jonathan P
2016-06-01
Non-targeted mass spectrometry-based approaches for detecting novel xenobiotics in biological samples are hampered by the occurrence of naturally fluctuating endogenous substances, which are difficult to distinguish from environmental contaminants. Here, we investigate a data reduction strategy for datasets derived from a biological time series. The objective is to flag reoccurring peaks in the time series based on increasing peak intensities, thereby reducing peak lists to only those peaks which may be associated with emerging bioaccumulative contaminants. As a result, compounds with increasing concentrations are flagged, while compounds displaying random, decreasing, or steady-state time trends are removed. As an initial proof of concept, we created artificial time trends by fortifying human whole blood samples with isotopically labelled standards. Different scenarios were investigated: eight model compounds had a continuously increasing trend over the last two to nine time points, and four model compounds had a trend that reached steady state after an initial increase. Each time series was investigated at three fortification levels along with one unfortified series. Following extraction, analysis by ultra-performance liquid chromatography high-resolution mass spectrometry, and data processing, a total of 21,700 aligned peaks were obtained. Peaks displaying an increasing trend were filtered from randomly fluctuating peaks using time trend ratios and Spearman's rank correlation coefficients. The first approach was successful in flagging model compounds spiked at only two to three time points, while the latter approach resulted in all model compounds ranking in the top 11% of the peak lists. Compared to the initial peak lists, a combination of both approaches reduced the size of the datasets by 80-85%. Overall, non-target time trend screening represents a promising data reduction strategy for identifying emerging bioaccumulative contaminants in biological samples.
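A minimal sketch of the two filters described above; the threshold values and all variable names are our assumptions. Each aligned peak is a vector of intensities over the time series, and a peak is kept if its last-to-first intensity ratio is large or if intensity correlates monotonically with sampling time.

```python
import numpy as np
from scipy.stats import spearmanr

# Flag peaks with increasing time trends via a time trend ratio and
# Spearman's rank correlation (thresholds are illustrative).
def flag_increasing(peaks, times, ratio_min=5.0, rho_min=0.8):
    flags = []
    for intensity in peaks:                   # one row per aligned peak
        ratio = intensity[-1] / max(intensity[0], 1e-12)
        rho, _ = spearmanr(times, intensity)
        flags.append(ratio >= ratio_min or rho >= rho_min)
    return np.array(flags)

times = np.arange(10)
rng = np.random.default_rng(4)
steady = rng.normal(100, 5, (50, 10))                      # fluctuating peaks
rising = np.outer(np.ones(8), times**2 + 1) + rng.normal(0, 1, (8, 10))
peaks = np.vstack([steady, rising])
print(flag_increasing(peaks, times).sum(), "of", len(peaks), "peaks flagged")
```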
Nedea, S V; van Steenhoven, A A; Markvoort, A J; Spijker, P; Giordano, D
2014-05-01
The influence of the gas-surface interactions of a dilute gas confined between two parallel walls on heat flux predictions is investigated using a combined Monte Carlo (MC) and molecular dynamics (MD) approach. The accommodation coefficients are computed from the temperatures of incident and reflected molecules in molecular dynamics and used as effective coefficients in Maxwell-like boundary conditions in Monte Carlo simulations. Hydrophobic and hydrophilic wall interactions are studied, and the effect of the gas-surface interaction potential on the heat flux and other characteristic parameters, such as density and temperature, is shown. The dependence of the heat flux on the accommodation coefficient is shown for different fluid-wall mass ratios. We find that the accommodation coefficient increases considerably when the mass ratio is decreased. An effective map of the heat flux as a function of the accommodation coefficient is given, and we show that MC heat flux predictions using Maxwell boundary conditions based on the accommodation coefficient give good results when compared to pure molecular dynamics heat flux predictions. The accommodation coefficients computed for a dilute gas for different gas-wall interaction parameters and mass ratios are then transferred to compute heat flux predictions for a dense gas. Comparisons of the heat fluxes derived using explicit MD, MC with Maxwell-like boundary conditions based on the accommodation coefficients, and pure Maxwell boundary conditions are discussed. A map of the dependence of the heat flux on the accommodation coefficients for a dense gas, and the effective accommodation coefficients for different gas-wall interactions, are given. Finally, this approach is applied to study the gas-surface interactions of argon and xenon molecules on a platinum surface. The derived accommodation coefficients are compared with values from experimental results.
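The abstract does not spell out the formula, but the standard definition consistent with the computation it describes is the thermal accommodation coefficient

```latex
\alpha = \frac{T_i - T_r}{T_i - T_w}
```

where T_i and T_r are the effective temperatures of the incident and reflected molecules and T_w is the wall temperature; α = 1 corresponds to full (diffuse) accommodation and α = 0 to purely specular reflection.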
Transfer coefficients in ultracold strongly coupled plasma
NASA Astrophysics Data System (ADS)
Bobrov, A. A.; Vorob'ev, V. S.; Zelener, B. V.
2018-03-01
We use both analytical and molecular dynamics methods to obtain electron transfer coefficients in an ultracold plasma when its temperature is small and the coupling parameter characterizing the interaction of electrons and ions exceeds unity. For these conditions, we use the nearest-neighbor approach to determine the average electron (ion) diffusion coefficient and to calculate the other electron transfer coefficients (viscosity and electrical and thermal conductivities). Molecular dynamics simulations produce electronic and ionic diffusion coefficients, confirming the reliability of these results. The results compare favorably with experimental and numerical data from earlier studies.
Liu, Huihua; Wang, Bo; Barrow, Colin J; Adhikari, Benu
2014-01-15
The objectives of this study were to quantify the relationship between the secondary structure of gelatin and its adsorption at the fish-oil/water interface, and to quantify the implications of the adsorption for the dynamic interfacial tension (DST) and emulsion stability. The surface hydrophobicity of the gelatin solutions decreased when the pH increased from 4.0 to 6.0, while the opposite trend was observed in the viscosity of the solution. The DST values decreased as the pH increased from 4.0 to 6.0, indicating that higher positive charges (measured through the zeta potential) in the gelatin solution tended to result in higher DST values. The adsorption kinetics of the gelatin solution was examined through the calculated diffusion coefficients (Deff). The addition of acid promoted the random coil and β-turn structures at the expense of the α-helical structure. The addition of NaOH decreased the β-turn and increased the α-helix and random coil. The decrease in the random coil and triple helix structures in the gelatin solution resulted in increased Deff values. The highest diffusion coefficients, the highest emulsion stability and the lowest amount of random coil and triple helix structures were observed at pH 4.8. The lowest amount of random coil and triple helix structures in the interfacial protein layer correlated with the highest stability of the emulsion (highest ESI value). The lower amount of random coil and triple helix structures allowed higher coverage of the oil-water interface by the relatively highly ordered secondary structure of gelatin.
Valenzuela, Carlos Y
2013-01-01
The Neutral Theory of Evolution (NTE) proposes mutation and random genetic drift as the most important evolutionary factors. The most conspicuous feature of evolution is the genomic stability during paleontological eras and lack of variation among taxa; 98% or more of nucleotide sites are monomorphic within a species. NTE explains this homology by random fixation of neutral bases and negative selection (purifying selection) that does not contribute either to evolution or polymorphisms. Purifying selection is insufficient to account for this evolutionary feature and the Nearly-Neutral Theory of Evolution (N-NTE) included negative selection with coefficients as low as mutation rate. These NTE and N-NTE propositions are thermodynamically (tendency to random distributions, second law), biotically (recurrent mutation), logically and mathematically (resilient equilibria instead of fixation by drift) untenable. Recurrent forward and backward mutation and random fluctuations of base frequencies alone in a site make life organization and fixations impossible. Drift is not a directional evolutionary factor, but a directional tendency of matter-energy processes (second law) which threatens the biotic organization. Drift cannot drive evolution. In a site, the mutation rates among bases and selection coefficients determine the resilient equilibrium frequency of bases that genetic drift cannot change. The expected neutral random interaction among nucleotides is zero; however, huge interactions and periodicities were found between bases of dinucleotides separated by 1, 2... and more than 1,000 sites. Every base is co-adapted with the whole genome. Neutralists found that neutral evolution is independent of population size (N); thus neutral evolution should be independent of drift, because drift effect is dependent upon N. Also, chromosome size and shape as well as protein size are far from random.
Texture formation in FePt thin films via thermal stress management
NASA Astrophysics Data System (ADS)
Rasmussen, P.; Rui, X.; Shield, J. E.
2005-05-01
The transformation variant of the fcc-to-fct transformation in FePt thin films was tailored by controlling the stresses in the films, thereby allowing selection of in-plane or out-of-plane c-axis orientation. FePt thin films were deposited at ambient temperature on several substrates with differing coefficients of thermal expansion relative to FePt, which generated thermal stresses during the ordering heat treatment. X-ray diffraction analysis revealed preferential out-of-plane c-axis orientation for FePt films deposited on substrates with a similar coefficient of thermal expansion, and random orientation for FePt films deposited on substrates with a very low coefficient of thermal expansion, which is consistent with theoretical analysis when residual stresses are considered.
Modeling Secondary Organic Aerosols over Europe: Impact of Activity Coefficients and Viscosity
NASA Astrophysics Data System (ADS)
Kim, Y.; Sartelet, K.; Couvidat, F.
2014-12-01
Semi-volatile organic species (SVOC) can condense on suspended particulate materials (PM) in the atmosphere. The modeling of condensation/evaporation of SVOC often assumes that gas-phase and particle-phase concentrations are at equilibrium. However, recent studies show that secondary organic aerosols (SOA) may not be accurately represented by an equilibrium approach between the gas and particle phases, because organic aerosols in the particle phase may be very viscous. The condensation in the viscous liquid phase is limited by the diffusion from the surface of PM to its core. Using a surrogate approach to represent SVOC, depending on the user's choice, the secondary organic aerosol processor (SOAP) may assume equilibrium or model dynamically the condensation/evaporation between the gas and particle phases to take into account the viscosity of organic aerosols. The model is implemented in the three-dimensional chemistry-transport model of POLYPHEMUS. In SOAP, activity coefficients for organic mixtures can be computed using UNIFAC for short-range interactions between molecules and AIOMFAC to also take into account the effect of inorganic species on activity coefficients. Simulations over Europe are performed and POLYPHEMUS/SOAP is compared to POLYPHEMUS/H2O, which was previously used to model SOA using the equilibrium approach with activity coefficients from UNIFAC. Impacts of the dynamic approach on modeling SOA over Europe are evaluated. The concentrations of SOA using the dynamic approach are compared with those using the equilibrium approach. The increase of computational cost is also evaluated.
NASA Technical Reports Server (NTRS)
Englert, G. W.
1971-01-01
A model of the random walk is formulated to allow a simple computing procedure to replace the difficult problem of solution of the Fokker-Planck equation. The step sizes and probabilities of taking steps in the various directions are expressed in terms of Fokker-Planck coefficients. Application is made to many particle systems with Coulomb interactions. The relaxation of a highly peaked velocity distribution of particles to equilibrium conditions is illustrated.
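A minimal reconstruction of the idea (ours, not the report's code): choose the step size and directional probabilities so that one step of the walk reproduces the Fokker-Planck drift A(x) and diffusion B(x) over a time step dt.

```python
import numpy as np

def fp_random_walk(A, B, x0, dt, n_steps, rng):
    """1-D random walk matching a Fokker-Planck equation with drift A(x)
    and diffusion B(x): step size dx = sqrt(B*dt), and
    P(step right) = 0.5 * (1 + A*dt/dx), which requires |A|*dt/dx <= 1."""
    x = x0
    for _ in range(n_steps):
        dx = np.sqrt(B(x) * dt)
        x += dx if rng.random() < 0.5 * (1.0 + A(x) * dt / dx) else -dx
    return x

rng = np.random.default_rng(0)
# Ornstein-Uhlenbeck-like relaxation test: A(x) = -x, B(x) = 1
xs = [fp_random_walk(lambda x: -x, lambda x: 1.0, 2.0, 1e-3, 5_000, rng) for _ in range(200)]
print(np.mean(xs), np.var(xs))   # relaxes toward mean 0 and variance B/2 = 0.5
```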
NASA Astrophysics Data System (ADS)
Wang, Rong; Wang, Li; Yang, Yong; Li, Jiajia; Wu, Ying; Lin, Pan
2016-11-01
Attention deficit hyperactivity disorder (ADHD) is the most common childhood neuropsychiatric disorder and affects approximately 6-7% of children worldwide. Here, we investigate the statistical properties of undirected and directed brain functional networks in ADHD patients based on random matrix theory (RMT), in which the undirected functional connectivity is constructed from the correlation coefficient and the directed functional connectivity is measured using the cross-correlation coefficient and mutual information. We first analyze the functional connectivity and the eigenvalues of the brain functional network. We find that ADHD patients have increased undirected functional connectivity, reflecting a higher degree of linear dependence between regions, and increased directed functional connectivity, indicating stronger causality and more transmission of information among brain regions. More importantly, we explore the randomness of the undirected and directed functional networks using RMT. We find that for ADHD patients the undirected functional network is more orderly than that for normal subjects, which indicates an abnormal increase in undirected functional connectivity. In addition, we find that the directed functional networks are more random, which reveals greater disorder in causality and more chaotic information flow among brain regions in ADHD patients. Our results not only further confirm the efficacy of RMT in characterizing the intrinsic properties of brain functional networks but also provide insights into the possibilities RMT offers for improving clinical diagnoses and treatment evaluations for ADHD patients.
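A toy version of the RMT diagnostic (a sketch under simplifying assumptions): compute the eigenvalue spacing distribution of a correlation matrix and compare it with the GOE Wigner surmise expected for maximally random networks. Proper spectral unfolding is replaced here by a crude mean-spacing rescaling.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 2000))     # 64 'regions' x 2000 time points (pure noise)
C = np.corrcoef(X)                      # undirected functional connectivity
ev = np.linalg.eigvalsh(C)              # ascending eigenvalues

s = np.diff(ev)
s /= s.mean()                           # crude unfolding: unit mean spacing
hist, edges = np.histogram(s, bins=15, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
wigner = (np.pi * centers / 2) * np.exp(-np.pi * centers**2 / 4)  # GOE surmise
print(np.round(hist, 2))
print(np.round(wigner, 2))              # empirical spacings track the surmise
```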
2009-02-03
Computational approach to accommodation coefficients and its application to noble gases on aluminum surface. Nathaniel Selden, University of Southern California, Los ... [garbled text and figure residue removed; recoverable caption: FIG. 5: Experimental and computed radiometric force for argon (left), xenon ...]
Determination of aerodynamic sensitivity coefficients for wings in transonic flow
NASA Technical Reports Server (NTRS)
Carlson, Leland A.; El-Banna, Hesham M.
1992-01-01
The quasianalytical approach is applied to the 3-D full potential equation to compute wing aerodynamic sensitivity coefficients in the transonic regime. Symbolic manipulation is used to reduce the effort associated with obtaining the sensitivity equations, and the large sensitivity system is solved using 'state of the art' routines. The quasianalytical approach is believed to be reasonably accurate and computationally efficient for 3-D problems.
NASA Technical Reports Server (NTRS)
Tomberlin, T. J.
1985-01-01
Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed here provide a basis for the sample design decisions in such studies, and they are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas or, in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
Sample size calculations for the design of cluster randomized trials: A summary of methodology.
Gao, Fei; Earnest, Arul; Matchar, David B; Campbell, Michael J; Machin, David
2015-05-01
Cluster randomized trial designs are growing in popularity in, for example, cardiovascular medicine research and other clinical areas, and this growth has stimulated parallel statistical developments concerned with the design and analysis of such trials. Nevertheless, reviews suggest that design issues associated with cluster randomized trials are often poorly appreciated, and there remain inadequacies in, for example, describing how the trial size is determined and how the associated results are presented. In this paper, our aim is to provide pragmatic guidance for researchers on the methods of calculating sample sizes. We focus attention on designs with the primary purpose of comparing two interventions with respect to continuous, binary, ordered categorical, incidence rate and time-to-event outcome variables. Issues of aggregate and non-aggregate cluster trials, adjustment for variation in cluster size, and the effect size are detailed. The problem of establishing the anticipated magnitude of between- and within-cluster variation, to enable planning values of the intra-cluster correlation coefficient and the coefficient of variation, is also described. Illustrative examples of calculations of trial sizes for each endpoint type are included. Copyright © 2015 Elsevier Inc. All rights reserved.
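For the continuous-outcome case, the core calculation is the standard design-effect inflation of an individually randomized sample size. The sketch below uses textbook formulas (variable names are ours) and ignores the refinement for variable cluster sizes discussed in the paper.

```python
from math import ceil
from scipy.stats import norm

def clusters_per_arm(delta, sigma, m, icc, alpha=0.05, power=0.80):
    """Clusters per arm for comparing two means under cluster randomization.
    delta: target difference; sigma: outcome SD; m: average cluster size;
    icc: intra-cluster correlation coefficient."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_individual = 2 * (z * sigma / delta) ** 2   # per-arm n if individually randomized
    design_effect = 1 + (m - 1) * icc             # inflation for within-cluster correlation
    return ceil(n_individual * design_effect / m)

print(clusters_per_arm(delta=0.5, sigma=1.0, m=20, icc=0.05))   # -> 7 clusters per arm
```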
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
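The abstract does not give the exact AAPSO update, so the following is only one plausible reading: the usual random acceleration coefficients in the PSO velocity update are replaced by deterministic weights derived from particle fitness (minimization assumed). All names and the weighting rule are our assumptions.

```python
import numpy as np

def aapso_velocity(v, x, pbest, gbest, f, f_pbest, f_gbest, w=0.7):
    """Hypothetical adaptive-acceleration velocity update: the acceleration
    coefficients c1, c2 are computed from fitness values instead of being
    drawn at random, so better personal/global bests pull proportionally
    harder. The paper's actual formula may differ."""
    eps = 1e-12
    c1 = f / (f_pbest + eps)     # > 1 when pbest is much better (smaller) than f
    c2 = f / (f_gbest + eps)
    return w * v + c1 * (pbest - x) + c2 * (gbest - x)

v = aapso_velocity(v=np.zeros(2), x=np.array([1.0, 1.0]),
                   pbest=np.array([0.5, 0.8]), gbest=np.array([0.2, 0.1]),
                   f=4.0, f_pbest=2.0, f_gbest=1.0)
print(v)
```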
Unraveling Spurious Properties of Interaction Networks with Tailored Random Networks
Bialonski, Stephan; Wendler, Martin; Lehnertz, Klaus
2011-01-01
We investigate interaction networks that we derive from multivariate time series with methods frequently employed in diverse scientific fields such as biology, quantitative finance, physics, earth and climate sciences, and the neurosciences. Mimicking experimental situations, we generate time series with finite length and varying frequency content but from independent stochastic processes. Using the correlation coefficient and the maximum cross-correlation, we estimate interdependencies between these time series. With clustering coefficient and average shortest path length, we observe unweighted interaction networks, derived via thresholding the values of interdependence, to possess non-trivial topologies as compared to Erdös-Rényi networks, which would indicate small-world characteristics. These topologies reflect the mostly unavoidable finiteness of the data, which limits the reliability of typically used estimators of signal interdependence. We propose random networks that are tailored to the way interaction networks are derived from empirical data. Through an exemplary investigation of multichannel electroencephalographic recordings of epileptic seizures – known for their complex spatial and temporal dynamics – we show that such random networks help to distinguish network properties of interdependence structures related to seizure dynamics from those spuriously induced by the applied methods of analysis. PMID:21850239
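The paper's central caution can be reproduced in a few lines: thresholding interdependence estimates computed from independent, finite-length noise already yields graphs whose clustering exceeds the Erdős-Rényi benchmark. A minimal sketch (assuming networkx):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
X = rng.standard_normal((32, 500))       # 32 independent finite-length time series
C = np.abs(np.corrcoef(X))               # interdependence estimates
np.fill_diagonal(C, 0.0)

thr = np.quantile(C, 0.90)               # keep the strongest 10% of values as links
G = nx.from_numpy_array((C > thr).astype(int))
er = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=2)
print(nx.average_clustering(G), nx.average_clustering(er))
# pure-noise 'networks' already look more clustered than the random benchmark,
# which is why surrogates tailored to the analysis pipeline are needed
```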
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lanza, Mathieu; Lique, François, E-mail: francois.lique@univ-lehavre.fr
The determination of hyperfine structure resolved excitation cross sections and rate coefficients due to H2 collisions is required to interpret astronomical spectra. In this paper, we present several theoretical approaches to compute these data. An almost exact recoupling approach and approximate sudden methods are presented. We apply these different approaches to the HCl-H2 collisional system in order to evaluate their respective accuracy. HCl-H2 hyperfine structure resolved cross sections and rate coefficients are then computed using recoupling and approximate sudden methods. As expected, the approximate sudden approaches are more accurate when the collision energy increases, and the results suggest that these approaches work better for the para-H2 than for the ortho-H2 colliding partner. For the first time, we present HCl-H2 hyperfine structure resolved rate coefficients, computed here for temperatures ranging from 5 to 300 K. The usual Δj1 = ΔF1 propensity rules are observed for the hyperfine transitions. The new rate coefficients will significantly help the interpretation of interstellar HCl emission lines observed with current and future telescopes. We expect that these new data will allow a better determination of the HCl abundance in the interstellar medium, which is crucial to understanding interstellar chlorine chemistry.
Nagayama, Hirofumi; Tomori, Kounosuke; Ohno, Kanta; Takahashi, Kayoko; Ogahara, Kakuya; Sawada, Tatsunori; Uezu, Sei; Nagatani, Ryutaro; Yamauchi, Keita
2016-01-01
Background: Care-home residents are mostly inactive, have little interaction with staff, and are dependent on staff to engage in daily occupations. We recently developed an iPad application called the Aid for Decision-making in Occupation Choice (ADOC) to promote shared decision-making in activities and occupation-based goal setting by choosing from illustrations describing daily activities. This study aimed to evaluate if interventions based on occupation-based goal setting using the ADOC could focus on meaningful activities to improve quality of life and independent activities of daily living, with greater cost-effectiveness than an impairment-based approach, as well as to evaluate the feasibility of conducting a large cluster randomized controlled trial. Method: In this single (assessor)-blind pilot cluster randomized controlled trial, the intervention group (ADOC group) received occupational therapy based on occupation-based goal setting using the ADOC, and the interventions were focused on meaningful occupations. The control group underwent an impairment-based approach focused on restoring capacities, without goal setting tools. In both groups, the 20-minute individualized intervention sessions were conducted twice a week for 4 months. Main Outcome Measures: Short Form-36 (SF-36) score, SF-6D utility score, quality adjusted life years (QALY), Barthel Index, and total care cost. Results: We randomized and analyzed 12 facilities (44 participants, 18.5% drop-out rate), with 6 facilities each allocated to the ADOC (n = 23) and control (n = 21) groups. After the 4-month intervention, the ADOC group had a significantly greater change in the BI score, with improved scores (P = 0.027, 95% CI 0.41 to 6.87, intracluster correlation coefficient = 0.14). No other outcome was significantly different. The incremental cost-effectiveness ratio, calculated using the change in BI score, was $63.1. Conclusion: The results suggest that occupational therapy using the ADOC for older residents might be effective and cost-effective. We also found that conducting an RCT in the occupational therapy setting is feasible. Trial Registration: UMIN Clinical Trials Registry UMIN000012994 PMID:26930191
Determining the Drag Coefficient of Rotational Symmetric Objects Falling through Liquids
ERIC Educational Resources Information Center
Houari, Ahmed
2012-01-01
I will propose here a kinematic approach for measuring the drag coefficient of rotational symmetric objects falling through liquids. For this, I will show that one can obtain a measurement of the drag coefficient of a rotational symmetric object by numerically solving the equation of motion describing its fall through a known liquid contained in a…
Graphical Solution of the Monic Quadratic Equation with Complex Coefficients
ERIC Educational Resources Information Center
Laine, A. D.
2015-01-01
There are many geometrical approaches to the solution of the quadratic equation with real coefficients. In this article it is shown that the monic quadratic equation with complex coefficients can also be solved graphically, by the intersection of two hyperbolas; one hyperbola being derived from the real part of the quadratic equation and one from…
NASA Astrophysics Data System (ADS)
Wang, Dong; Ding, Hao; Singh, Vijay P.; Shang, Xiaosan; Liu, Dengfeng; Wang, Yuankun; Zeng, Xiankui; Wu, Jichun; Wang, Lachun; Zou, Xinqing
2015-05-01
For scientific and sustainable management of water resources, hydrologic and meteorologic data series need to be often extended. This paper proposes a hybrid approach, named WA-CM (wavelet analysis-cloud model), for data series extension. Wavelet analysis has time-frequency localization features, known as "mathematics microscope," that can decompose and reconstruct hydrologic and meteorologic series by wavelet transform. The cloud model is a mathematical representation of fuzziness and randomness and has strong robustness for uncertain data. The WA-CM approach first employs the wavelet transform to decompose the measured nonstationary series and then uses the cloud model to develop an extension model for each decomposition layer series. The final extension is obtained by summing the results of extension of each layer. Two kinds of meteorologic and hydrologic data sets with different characteristics and different influence of human activity from six (three pairs) representative stations are used to illustrate the WA-CM approach. The approach is also compared with four other methods, which are conventional correlation extension method, Kendall-Theil robust line method, artificial neural network method (back propagation, multilayer perceptron, and radial basis function), and single cloud model method. To evaluate the model performance completely and thoroughly, five measures are used, which are relative error, mean relative error, standard deviation of relative error, root mean square error, and Thiel inequality coefficient. Results show that the WA-CM approach is effective, feasible, and accurate and is found to be better than other four methods compared. The theory employed and the approach developed here can be applied to extension of data in other areas as well.
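The decompose-model-sum structure of WA-CM can be sketched as follows (assuming PyWavelets; the cloud-model extension of each layer is not reproduced here, only the wavelet plumbing that splits a series into layers which sum back to the original):

```python
import numpy as np
import pywt

def layer_series(x, wavelet="db4", level=3):
    """Split x into per-layer reconstructions (approximation + details) by
    zeroing all but one coefficient set before inverting the DWT; by
    linearity the layers sum back to x."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    layers = []
    for k in range(len(coeffs)):
        kept = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
        layers.append(pywt.waverec(kept, wavelet)[: len(x)])
    return layers

x = np.sin(np.linspace(0, 20, 256)) + 0.1 * np.random.default_rng(0).standard_normal(256)
layers = np.array(layer_series(x))
print(np.allclose(layers.sum(axis=0), x))   # True: the layers reconstruct the series
# WA-CM would fit a cloud-model extension to each layer and sum the extensions
```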
A pseudo-thermodynamic description of dispersion for nanocomposites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Yan; Beaucage, Gregory; Vogtt, Karsten
Dispersion in polymer nanocomposites is determined by the kinetics of mixing and chemical affinity. Compounds like reinforcing filler/elastomer blends display some similarity to colloidal solutions in that the filler particles are close to randomly dispersed through processing. It is attractive to apply a pseudo-thermodynamic approach taking advantage of this analogy between the kinetics of mixing for polymer compounds and thermally driven dispersion for colloids. In order to demonstrate this pseudo-thermodynamic approach, two polybutadienes and one polyisoprene were milled with three carbon blacks and two silicas. These samples were examined using small-angle x-ray scattering as a function of filler concentration to determine a pseudo-second order virial coefficient, A2, which is used as an indicator for compatibility of the filler and polymer. It is found that A2 follows the expected behavior with lower values for smaller primary particles indicating that smaller particles are less compatible and more difficult to mix. The measured values of A2 can be used to specify repulsive interaction potentials for coarse grain DPD simulations of filler/elastomer systems. In addition, new methods to quantify the filler percolation threshold and filler mesh size as a function of filler concentration are obtained. Moreover, the results represent a new approach to understanding and predicting compatibility in polymer nanocomposites based on a pseudo-thermodynamic approach.
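The virial-style analysis can be caricatured as a linear fit: under a two-term expansion, the ratio of concentration to forward scattered intensity is linear in concentration, with A2 proportional to the slope. Prefactors (contrast, molar mass) are omitted here, so this returns only a relative compatibility index; the data below are synthetic.

```python
import numpy as np

def second_virial_index(conc, I0):
    """Fit c/I(0) = a + b*c and return b/(2a) ~ A2*M under the simple
    dilute-solution expansion; sign and relative magnitude are what matter."""
    b, a = np.polyfit(conc, conc / I0, 1)
    return b / (2 * a)

conc = np.array([0.01, 0.02, 0.04, 0.08])    # filler volume fractions (synthetic)
I0 = conc / (1.0 + 2 * 0.5 * conc)           # synthetic data built with A2*M = 0.5
print(second_virial_index(conc, I0))         # recovers ~0.5
```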
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Third-harmonic generation of a laser-driven quantum dot with impurity
NASA Astrophysics Data System (ADS)
Sakiroglu, S.; Kilic, D. Gul; Yesilgul, U.; Ungan, F.; Kasapoglu, E.; Sari, H.; Sokmen, I.
2018-06-01
The third-harmonic generation (THG) coefficient for a laser-driven quantum dot with an on-center Gaussian impurity under a static magnetic field is theoretically investigated. The laser field effect is treated within the high-frequency Floquet approach, and the analytical expression of the THG coefficient is deduced from the compact density-matrix approach. The numerical results demonstrate that the application of an intense laser field causes substantial changes in the behavior of the THG. In addition, the position and magnitude of the resonant peak of the THG coefficient are significantly affected by the magnetic field, the quantum dot size and the characteristic parameters of the impurity potential.
A systematic study of multiple minerals precipitation modelling in wastewater treatment.
Kazadi Mbamba, Christian; Tait, Stephan; Flores-Alsina, Xavier; Batstone, Damien J
2015-11-15
Mineral solids precipitation is important in wastewater treatment. However, approaches to minerals precipitation modelling are varied, often empirical, and mostly focused on single precipitate classes. A common approach, applicable to multi-species precipitates, is needed to integrate into existing wastewater treatment models. The present study systematically tested a semi-mechanistic modelling approach, using various experimental platforms with multiple minerals precipitation. Experiments included dynamic titration with addition of sodium hydroxide to synthetic wastewater, and aeration to progressively increase pH and induce precipitation in real piggery digestate and sewage sludge digestate. The model approach consisted of an equilibrium part for aqueous phase reactions and a kinetic part for minerals precipitation. The model was fitted to dissolved calcium, magnesium, total inorganic carbon and phosphate. Results indicated that precipitation was dominated by the mineral struvite, forming together with varied and minor amounts of calcium phosphate and calcium carbonate. The model approach was noted to have the advantage of requiring a minimal number of fitted parameters, so the model was readily identifiable. Kinetic rate coefficients, which were statistically fitted, were generally in the range 0.35-11.6 h^-1 with confidence intervals of 10-80% relative. Confidence regions for the kinetic rate coefficients were often asymmetric, with model-data residuals increasing more gradually with larger coefficient values. This suggests that a large kinetic coefficient could be used when actual measured data are lacking for a particular precipitate-matrix combination. Correlation between the kinetic rate coefficients of different minerals was low, indicating that parameter values for individual minerals could be independently fitted (keeping all other model parameters constant). Implementation was therefore relatively flexible, and would be readily expandable to include other minerals. Copyright © 2015 Elsevier Ltd. All rights reserved.
Diffusive mixing and Tsallis entropy
O'Malley, Daniel; Vesselinov, Velimir V.; Cushman, John H.
2015-04-29
Brownian motion, the classical diffusive process, maximizes the Boltzmann-Gibbs entropy. The Tsallis q-entropy, which is non-additive, was developed as an alternative to the classical entropy for systems which are non-ergodic. A generalization of Brownian motion is provided that maximizes the Tsallis entropy rather than the Boltzmann-Gibbs entropy. This process is driven by a Brownian measure with a random diffusion coefficient. In addition, the distribution of this coefficient is derived as a function of q for 1 < q < 3. Applications to transport in porous media are considered.
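The mechanism, a Brownian measure subordinated to a random diffusion coefficient, is easy to demonstrate. The paper derives the exact mixing density as a function of q for 1 < q < 3; the sketch below instead draws D from an inverse-gamma-type density, which is known to turn Gaussian displacements into heavy-tailed (Student-t / q-Gaussian-like) ones, purely to illustrate the effect.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
n, t = 200_000, 1.0
D = 1.0 / rng.gamma(shape=2.5, scale=1.0, size=n)   # random diffusion coefficients
x = rng.normal(0.0, np.sqrt(2.0 * D * t))           # displacement of each walker at time t
print(kurtosis(x))   # strongly positive excess kurtosis: heavier tails than Brownian motion
```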
Characteristic-eddy decomposition of turbulence in a channel
NASA Technical Reports Server (NTRS)
Moin, Parviz; Moser, Robert D.
1989-01-01
Lumley's proper orthogonal decomposition technique is applied to the turbulent flow in a channel. Coherent structures are extracted by decomposing the velocity field into characteristic eddies with random coefficients. A generalization of the shot-noise expansion is used to determine the characteristic eddies in homogeneous spatial directions. Three different techniques are used to determine the phases of the Fourier coefficients in the expansion: (1) one based on the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Similar results are found from each of these techniques.
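In discrete form, the decomposition reduces to an SVD of a snapshot matrix: the left singular vectors play the role of characteristic eddies and the projected snapshots supply the random coefficients. A generic numerical sketch (synthetic data, not the paper's channel-flow fields):

```python
import numpy as np

rng = np.random.default_rng(4)
snapshots = rng.standard_normal((128, 400))       # rows: spatial points; cols: snapshots
snapshots -= snapshots.mean(axis=1, keepdims=True)

U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = S**2 / np.sum(S**2)      # fraction of 'energy' captured by each mode
coeffs = np.diag(S) @ Vt          # random coefficients of each snapshot in the eddy basis
print(energy[:5])
```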
A Random Finite Set Approach to Space Junk Tracking and Identification
2014-09-03
A Random Finite Set Approach to Space Junk Tracking and Identification. Final report, dates covered: 31 Jan 2013 - 29 Apr 2014. Ba-Ngu Vo, Ba-Tuong Vo, Department of ... [report documentation page residue removed]
Recovery of singularities from a backscattering Born approximation for a biharmonic operator in 3D
NASA Astrophysics Data System (ADS)
Tyni, Teemu
2018-04-01
We consider a backscattering Born approximation for a perturbed biharmonic operator in three space dimensions. Previous results on this approach for biharmonic operator use the fact that the coefficients are real-valued to obtain the reconstruction of singularities in the coefficients. In this text we drop the assumption about real-valued coefficients and also establish the recovery of singularities for complex coefficients. The proof uses mapping properties of the Radon transform.
Mixing Efficiency in the Ocean.
Gregg, M C; D'Asaro, E A; Riley, J J; Kunze, E
2018-01-03
Mixing efficiency is the ratio of the net change in potential energy to the energy expended in producing the mixing. Parameterizations of efficiency and of related mixing coefficients are needed to estimate diapycnal diffusivity from measurements of the turbulent dissipation rate. Comparing diffusivities from microstructure profiling with those inferred from the thickening rate of four simultaneous tracer releases has verified, within observational accuracy, 0.2 as the mixing coefficient over a 30-fold range of diapycnal diffusivities. Although some mixing coefficients can be estimated from pycnocline measurements, at present mixing efficiency must be obtained from channel flows, laboratory experiments, and numerical simulations. Reviewing the different approaches demonstrates that estimates and parameterizations for mixing efficiency and coefficients are not converging beyond the at-sea comparisons with tracer releases, leading to recommendations for a community approach to address this important issue.
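The operational use of the reviewed numbers is the Osborn-type relation: with the mixing coefficient Γ = 0.2 verified against the tracer releases, diapycnal diffusivity follows from the measured dissipation rate and stratification.

```python
def diapycnal_diffusivity(epsilon, N2, gamma=0.2):
    """Osborn-type estimate K_rho = gamma * epsilon / N^2.
    epsilon: turbulent dissipation rate (W/kg); N2: buoyancy frequency
    squared (s^-2); gamma: mixing coefficient (0.2 per the tracer-release
    comparisons reviewed here)."""
    return gamma * epsilon / N2

print(diapycnal_diffusivity(epsilon=1e-9, N2=1e-5))   # ~2e-5 m^2/s, an open-ocean scale
```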
Modelling Nonlinear Dynamic Textures using Hybrid DWT-DCT and Kernel PCA with GPU
NASA Astrophysics Data System (ADS)
Ghadekar, Premanand Pralhad; Chopade, Nilkanth Bhikaji
2016-12-01
Most of the real-world dynamic textures are nonlinear, non-stationary, and irregular. Nonlinear motion also has some repetition of motion, but it exhibits high variation, stochasticity, and randomness. Hybrid DWT-DCT and Kernel Principal Component Analysis (KPCA) with YCbCr/YIQ colour coding using the Dynamic Texture Unit (DTU) approach is proposed to model a nonlinear dynamic texture, which provides better results than state-of-the-art methods in terms of PSNR, compression ratio, model coefficients, and model size. Dynamic texture is decomposed into DTUs as they help to extract temporal self-similarity. Hybrid DWT-DCT is used to extract spatial redundancy. YCbCr/YIQ colour encoding is performed to capture chromatic correlation. KPCA is applied to capture nonlinear motion. Further, the proposed algorithm is implemented on a Graphics Processing Unit (GPU), which comprises hundreds of small processors, to decrease time complexity and to achieve parallelism.
Optimum decoding and detection of a multiplicative amplitude-encoded watermark
NASA Astrophysics Data System (ADS)
Barni, Mauro; Bartolini, Franco; De Rosa, Alessia; Piva, Alessandro
2002-04-01
The aim of this paper is to present a novel approach to the decoding and the detection of multibit, multiplicative, watermarks embedded in the frequency domain. Watermark payload is conveyed by amplitude modulating a pseudo-random sequence, thus resembling conventional DS spread spectrum techniques. As opposed to conventional communication systems, though, the watermark is embedded within the host DFT coefficients by using a multiplicative rule. The watermark decoding technique presented in the paper is an optimum one, in that it minimizes the bit error probability. The problem of watermark presence assessment, which is often underestimated by state-of-the-art research on multibit watermarking, is addressed too, and the optimum detection rule derived according to the Neyman-Pearson criterion. Experimental results are shown both to demonstrate the validity of the theoretical analysis and to highlight the good performance of the proposed system.
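A stripped-down sketch of the embedding rule (ours; coefficient selection, perceptual masking and the paper's optimum ML decoder are all omitted, and a plain correlator stands in for decoding):

```python
import numpy as np

rng = np.random.default_rng(5)

def embed(mag, bits, pn, gamma=0.1):
    """Multiplicative amplitude-modulated embedding into DFT magnitudes:
    mag_w[i] = mag[i] * (1 + gamma * b(i) * pn[i]), where b(i) is the
    antipodal bit assigned to coefficient i."""
    run = mag.size // len(bits)
    s = np.repeat([1.0 if b else -1.0 for b in bits], run)
    return mag * (1.0 + gamma * s * pn)

def decode(mag_w, pn, n_bits):
    """Baseline correlation decoder: sign of sum(mag_w * pn) over each bit's
    run of coefficients (the paper's optimum decoder instead maximizes the
    likelihood under the multiplicative model)."""
    run = mag_w.size // n_bits
    return [(mag_w[i*run:(i+1)*run] * pn[i*run:(i+1)*run]).sum() > 0.0
            for i in range(n_bits)]

mag = np.abs(rng.standard_normal(4096)) + 1.0     # stand-in host DFT magnitudes
pn = rng.choice([-1.0, 1.0], size=mag.size)       # pseudo-random sequence (shared key)
bits = [True, False, True, True]
print(decode(embed(mag, bits, pn), pn, len(bits)))  # -> [True, False, True, True]
```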
NASA Technical Reports Server (NTRS)
Holms, A. G.
1977-01-01
A statistical decision procedure called chain pooling was developed for model selection in fitting the results of a two-level fixed-effects full or fractional factorial experiment without replication. The basic strategy included the use of one nominal level of significance for a preliminary test and a second nominal level of significance for the final test. The subject has been reexamined from the point of view of using as many as three successive statistical model deletion procedures in fitting the results of a single experiment. The investigation consisted of random number studies intended to simulate the results of a proposed aircraft turbine-engine rotor-burst-protection experiment. As a conservative approach, population model coefficients were chosen to represent a saturated 2^4 experiment with a distribution of parameter values unfavorable to the decision procedures. Three model selection strategies were developed.
Zhang, Xue-Xi; Yin, Jian-Hua; Mao, Zhi-Hua; Xia, Yang
2015-01-01
Fourier transform infrared imaging (FTIRI) combined with chemometrics algorithm has strong potential to obtain complex chemical information from biology tissues. FTIRI and partial least squares-discriminant analysis (PLS-DA) were used to differentiate healthy and osteoarthritic (OA) cartilages for the first time. A PLS model was built on the calibration matrix of spectra that was randomly selected from the FTIRI spectral datasets of healthy and lesioned cartilage. Leave-one-out cross-validation was performed in the PLS model, and the fitting coefficient between actual and predicted categorical values of the calibration matrix reached 0.95. In the calibration and prediction matrices, the successful identifying percentages of healthy and lesioned cartilage spectra were 100% and 90.24%, respectively. These results demonstrated that FTIRI combined with PLS-DA could provide a promising approach for the categorical identification of healthy and OA cartilage specimens. PMID:26057029
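The analysis pipeline is reproducible with standard tooling; a minimal stand-in (assuming scikit-learn, with synthetic "spectra" in place of the FTIRI data):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(6)
X = rng.standard_normal((40, 200))          # 40 samples x 200 wavenumbers (synthetic)
y = np.repeat([0.0, 1.0], 20)               # 0 = healthy, 1 = lesioned
X[y == 1, :10] += 0.8                       # class-dependent band intensities

correct = 0
for train, test in LeaveOneOut().split(X):
    pls = PLSRegression(n_components=5).fit(X[train], y[train])
    pred = pls.predict(X[test]).ravel()[0]
    correct += int((pred > 0.5) == (y[test][0] == 1.0))   # threshold the PLS score
print(correct / len(y))                     # leave-one-out classification rate
```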
Weak signal transmission in complex networks and its application in detecting connectivity.
Liang, Xiaoming; Liu, Zonghua; Li, Baowen
2009-10-01
We present a network model of coupled oscillators to study how a weak signal is transmitted in complex networks. Through both theoretical analysis and numerical simulations, we find that the response of other nodes to the weak signal decays exponentially with their topological distance to the signal source, and that the coupling strength between two neighboring nodes can be figured out from the responses. This finding can be conveniently used to detect the topology of an unknown network, such as its degree distribution, clustering coefficient, and community structure, by repeatedly choosing different nodes as the signal source. Through four typical networks, i.e., the regular one-dimensional, small-world, random, and scale-free networks, we show that the features of a network can be approximately given by investigating many fewer nodes than the network size; thus our approach to detecting the topology of an unknown network may be efficient in practical situations with large network size.
Isotropic stochastic rotation dynamics
NASA Astrophysics Data System (ADS)
Mühlbauer, Sebastian; Strobl, Severin; Pöschel, Thorsten
2017-12-01
Stochastic rotation dynamics (SRD) is a widely used method for the mesoscopic modeling of complex fluids, such as colloidal suspensions or multiphase flows. In this method, however, the underlying Cartesian grid defining the coarse-grained interaction volumes induces anisotropy. We propose an isotropic, lattice-free variant of stochastic rotation dynamics, termed iSRD. Instead of Cartesian grid cells, we employ randomly distributed spherical interaction volumes. This eliminates the requirement of a grid shift, which is essential in standard SRD to maintain Galilean invariance. We derive analytical expressions for the viscosity and the diffusion coefficient in relation to the model parameters, which show excellent agreement with the results obtained in iSRD simulations. The proposed algorithm is particularly suitable to model systems bound by walls of complex shape, where the domain cannot be meshed uniformly. The presented approach is not limited to SRD but is applicable to any other mesoscopic method, where particles interact within certain coarse-grained volumes.
A measure of association for ordered categorical data in population-based studies
Nelson, Kerrie P; Edwards, Don
2016-01-01
Ordinal classification scales are commonly used to define a patient’s disease status in screening and diagnostic tests such as mammography. Challenges arise in agreement studies when evaluating the association between many raters’ classifications of patients’ disease or health status when an ordered categorical scale is used. In this paper, we describe a population-based approach and chance-corrected measure of association to evaluate the strength of relationship between multiple raters’ ordinal classifications where any number of raters can be accommodated. In contrast to Shrout and Fleiss’ intraclass correlation coefficient, the proposed measure of association is invariant with respect to changes in disease prevalence. We demonstrate how unique characteristics of individual raters can be explored using random effects. Simulation studies are conducted to demonstrate the properties of the proposed method under varying assumptions. The methods are applied to two large-scale agreement studies of breast cancer screening and prostate cancer severity. PMID:27184590
Toropova, Alla P; Toropov, Andrey A; Rallo, Robert; Leszczynska, Danuta; Leszczynski, Jerzy
2015-02-01
The Monte Carlo technique has been used to build up quantitative structure-activity relationships (QSARs) for prediction of the dark cytotoxicity and photo-induced cytotoxicity of metal oxide nanoparticles to the bacteria Escherichia coli (pLC50, the negative logarithm of the concentration lethal to 50% of bacteria; LC50 in mol/L). The representation of nanoparticles includes (i) in the case of dark cytotoxicity a simplified molecular input-line entry system (SMILES) string, and (ii) in the case of photo-induced cytotoxicity a SMILES string plus the symbol '^'. The predictability of the approach is checked with six random distributions of the available data into visible training and calibration sets and an invisible validation set. The statistical characteristics of these models are correlation coefficients of 0.90-0.94 (training set) and 0.73-0.98 (validation set). Copyright © 2014 Elsevier Inc. All rights reserved.
Neutral solute transport across osteochondral interface: A finite element approach.
Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A
2016-12-08
Investigation of the solute transfer across articular cartilage and subchondral bone plate could nurture the understanding of the mechanisms of osteoarthritis (OA) progression. In the current study, we approached the transport of neutral solutes in human (slight OA) and equine (healthy) samples using both computed tomography and biphasic-solute finite element modeling. We developed a multi-zone biphasic-solute finite element model (FEM) accounting for the inhomogeneity of articular cartilage (superficial, middle and deep zones) and subchondral bone plate. Fitting the FEM model to the concentration-time curves of the cartilage and the equilibrium concentration of the subchondral plate/calcified cartilage enabled determination of the diffusion coefficients in the superficial, middle and deep zones of cartilage and subchondral plate. We found slightly higher diffusion coefficients for all zones in the human samples as compared to the equine samples. Generally the diffusion coefficient in the superficial zone of human samples was about 3-fold higher than the middle zone, the diffusion coefficient of the middle zone was 1.5-fold higher than that of the deep zone, and the diffusion coefficient of the deep zone was 1.5-fold higher than that of the subchondral plate/calcified cartilage. Those ratios for equine samples were 9, 2 and 1.5, respectively. Regardless of the species considered, there is a gradual decrease of the diffusion coefficient as one approaches the subchondral plate, whereas the rate of decrease is dependent on the type of species. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.
Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N
2016-01-01
Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome-underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
Identifying Bearing Rotordynamic Coefficients using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Miller, Brad A.; Howard, Samuel A.
2008-01-01
An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor-dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor-bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G
2011-06-28
We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than the traditionally finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
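The regression basis underlying these models is simple to construct: ages are standardized to [-1, 1] and Legendre polynomials are evaluated there, one column per random-regression coefficient. A minimal sketch (ages in days, order 3 for illustration):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_design(age, age_min, age_max, order):
    """Columns of Legendre polynomials P_0..P_order evaluated at ages
    standardized to [-1, 1]; each record's row multiplies the animal's
    vector of random regression coefficients."""
    a = 2.0 * (np.asarray(age, dtype=float) - age_min) / (age_max - age_min) - 1.0
    return np.column_stack([legendre.legval(a, [0.0] * k + [1.0])
                            for k in range(order + 1)])

Z = legendre_design(age=[0, 240, 550, 2920], age_min=0, age_max=2920, order=3)
print(Z.round(3))
```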
Estimation of methane emission rate changes using age-defined waste in a landfill site.
Ishii, Kazuei; Furuichi, Toru
2013-09-01
Long term methane emissions from landfill sites are often predicted by first-order decay (FOD) models, in which the default coefficients of the methane generation potential and the methane generation rate given by the Intergovernmental Panel on Climate Change (IPCC) are usually used. However, previous studies have demonstrated the large uncertainty in these coefficients because they are derived from a calibration procedure under ideal steady-state conditions, not actual landfill site conditions. In this study, the coefficients in the FOD model were estimated by a new approach to predict more precise long term methane generation by considering region-specific conditions. In the new approach, age-defined waste samples, which had been under the actual landfill site conditions, were collected in Hokkaido, Japan (a cold region), and the time series data on the age-defined waste samples' methane generation potential were used to estimate the coefficients in the FOD model. The degradation coefficients were 0.0501/y and 0.0621/y for paper and food waste, and the methane generation potentials were 214.4 mL/g-wet waste and 126.7 mL/g-wet waste for paper and food waste, respectively. These coefficients were compared with the default coefficients given by the IPCC. Although the degradation coefficient for food waste was smaller than the default value, the other coefficients were within the range of the default coefficients. With these new coefficients to calculate methane generation, the long term methane emissions from the landfill site were estimated at 1.35×10^4 m^3 CH4, which corresponds to approximately 2.53% of the total carbon dioxide emissions in the city (5.34×10^5 t CO2/y). Copyright © 2013 Elsevier Ltd. All rights reserved.
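With the coefficients estimated in the study, the FOD model evaluates in one line. The 1000-t waste masses below are illustrative only, and mL/g-wet conveniently equals m^3 per wet tonne.

```python
import numpy as np

def methane_rate(t, W, k, L0):
    """First-order decay (FOD) model: methane generation in year t from a
    mass W (wet t) landfilled at t = 0 is Q(t) = W * L0 * k * exp(-k*t),
    with L0 in m^3/t and k in 1/y; Q integrates to W*L0 over all time."""
    return W * L0 * k * np.exp(-k * t)

t = np.arange(0, 50)
q_paper = methane_rate(t, W=1000.0, k=0.0501, L0=214.4)   # coefficients from this study
q_food = methane_rate(t, W=1000.0, k=0.0621, L0=126.7)
print(q_paper[:3].round(0), q_food[:3].round(0))          # m^3 CH4 per year
```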
Li, Huaizhou; Zhou, Haiyan; Yang, Yang; Wang, Haiyuan; Zhong, Ning
2017-10-01
Previous studies have reported the enhanced randomization of functional brain networks in patients with major depressive disorder (MDD). However, little is known about the changes of key nodal attributes for randomization, the resilience of network, and the clinical significance of the alterations. In this study, we collected the resting-state functional MRI data from 19 MDD patients and 19 healthy control (HC) individuals. Graph theory analysis showed that decreases were found in the small-worldness, clustering coefficient, local efficiency, and characteristic path length (i.e., increase of global efficiency) in the network of MDD group compared with HC group, which was consistent with previous findings and suggested the development toward randomization in the brain network in MDD. In addition, the greater resilience under the targeted attacks was also found in the network of patients with MDD. Furthermore, the abnormal nodal properties were found, including clustering coefficients and nodal efficiencies in the left orbital superior frontal gyrus, bilateral insula, left amygdala, right supramarginal gyrus, left putamen, left posterior cingulate cortex, left angular gyrus. Meanwhile, the correlation analysis showed that most of these abnormal areas were associated with the clinical status. The observed increased randomization and resilience in MDD might be related to the abnormal hub nodes in the brain networks, which were attacked by the disease pathology. Our findings provide new evidence to indicate that the weakening of specialized regions and the enhancement of whole brain integrity could be the potential endophenotype of the depressive pathology. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Elbanna, Hesham M.; Carlson, Leland A.
1992-01-01
The quasi-analytical approach is applied to the three-dimensional full potential equation to compute wing aerodynamic sensitivity coefficients in the transonic regime. Symbolic manipulation is used to reduce the effort associated with obtaining the sensitivity equations, and the large sensitivity system is solved using 'state of the art' routines. Results are compared to those obtained by the direct finite difference approach and both methods are evaluated to determine their computational accuracy and efficiency. The quasi-analytical approach is shown to be accurate and efficient for large aerodynamic systems.
Relaxation dynamics of maximally clustered networks
NASA Astrophysics Data System (ADS)
Klaise, Janis; Johnson, Samuel
2018-01-01
We study the relaxation dynamics of fully clustered networks (maximal number of triangles) to an unclustered state under two different edge dynamics—the double-edge swap, corresponding to degree-preserving randomization of the configuration model, and single edge replacement, corresponding to full randomization of the Erdős-Rényi random graph. We derive expressions for the time evolution of the degree distribution, edge multiplicity distribution and clustering coefficient. We show that under both dynamics networks undergo a continuous phase transition in which a giant connected component is formed. We calculate the position of the phase transition analytically using the Erdős-Rényi phenomenology.
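The degree-preserving dynamics is available off the shelf; a minimal sketch (assuming networkx) starts from a maximally clustered caveman graph and watches the clustering coefficient relax under double-edge swaps:

```python
import networkx as nx

G = nx.connected_caveman_graph(20, 5)            # 20 cliques of 5 nodes: triangle-rich
print("initial clustering:", round(nx.average_clustering(G), 3))
for _ in range(10):
    nx.double_edge_swap(G, nswap=100, max_tries=10_000, seed=7)
    print(round(nx.average_clustering(G), 3))    # decays toward the unclustered state
```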
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
Local Geostatistical Models and Big Data in Hydrological and Ecological Applications
NASA Astrophysics Data System (ADS)
Hristopulos, Dionissios
2015-04-01
The advent of the big data era creates new opportunities for environmental and ecological modelling but also presents significant challenges. The availability of remote sensing images and low-cost wireless sensor networks means that spatiotemporal environmental data now cover larger spatial domains at higher spatial and temporal resolution and for longer time windows. Handling such voluminous data presents several technical and scientific challenges. In particular, the geostatistical methods used to process spatiotemporal data need to overcome the dimensionality curse associated with the need to store and invert large covariance matrices. There are various mathematical approaches for addressing the dimensionality problem, including change of basis, dimensionality reduction, hierarchical schemes, and local approximations. We present a Stochastic Local Interaction (SLI) model that can be used to model local correlations in spatial data. SLI is a random field model suitable for data on discrete supports (i.e., regular lattices or irregular sampling grids). The degree of localization is determined by means of kernel functions and appropriate bandwidths. The strength of the correlations is determined by means of coefficients. In the "plain vanilla" version the parameter set involves scale and rigidity coefficients as well as a characteristic length. The latter, in connection with the rigidity coefficient, determines the correlation length of the random field. The SLI model is based on statistical field theory and extends previous research on Spartan spatial random fields [2,3] from continuum spaces to explicitly discrete supports. The SLI kernel functions employ adaptive bandwidths learned from the sampling spatial distribution [1]. The SLI precision matrix is expressed explicitly in terms of the model parameters and the kernel function. Hence, covariance matrix inversion is not necessary for parameter inference, which is based on leave-one-out cross validation. This property helps to overcome a significant computational bottleneck of geostatistical models due to the poor scaling of matrix inversion [4,5]. We present applications to real and simulated data sets, including the Walker Lake data, and we investigate the SLI performance using various statistical cross validation measures. References: [1] T. Hofmann, B. Schölkopf, A. J. Smola, Annals of Statistics, 36, 1171-1220 (2008). [2] D. T. Hristopulos, SIAM Journal on Scientific Computing, 24(6): 2125-2162 (2003). [3] D. T. Hristopulos and S. N. Elogne, IEEE Transactions on Signal Processing, 57(9): 3475-3487 (2009). [4] G. Jona Lasinio, G. Mastrantonio, and A. Pollice, Statistical Methods and Applications, 22(1): 97-112 (2013). [5] Y. Sun, B. Li, and M. G. Genton, Geostatistics for large datasets. In: Advances and Challenges in Space-time Modelling of Natural Events, Lecture Notes in Statistics, pp. 55-77, Springer, Berlin-Heidelberg (2012).
Sell, Andrew; Fadaei, Hossein; Kim, Myeongsub; Sinton, David
2013-01-02
Predicting carbon dioxide (CO2) security and capacity in sequestration requires knowledge of CO2 diffusion into reservoir fluids. In this paper we demonstrate a microfluidic approach to measuring the mutual diffusion coefficient of carbon dioxide in water and brine. The approach enables the formation of fresh CO2-liquid interfaces; the resulting diffusion is quantified by imaging the fluorescence quenching of a pH-dependent dye, with subsequent analysis. This method was applied to study the effects of site-specific variables (CO2 pressure and salinity levels) on the diffusion coefficient. In contrast to established, macro-scale pressure-volume-temperature cell methods that require large sample volumes and testing periods of hours to days, this approach requires only microliters of sample, provides results within minutes, and isolates diffusive mass transport from convective effects. The measured diffusion coefficient of CO2 in water was constant (1.86 (±0.26) × 10⁻⁹ m²/s) over the range of pressures (5-50 bar) tested at 26 °C, in agreement with existing models. The effects of salinity were measured with solutions of 0-5 M NaCl, over which the diffusion coefficient varied by up to a factor of 3. These experimental data support existing theory and demonstrate the applicability of this method for reservoir-specific testing.
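For a feel of how a diffusion coefficient is extracted from an interface experiment of this kind, here is a hedged sketch (assuming Python with numpy/scipy; the one-dimensional semi-infinite diffusion solution and synthetic data are illustrative, not the authors' actual fitting procedure):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def profile(x, D, c0, t=60.0):
    """1-D diffusion from a fresh interface: c(x,t) = c0 * erfc(x / (2*sqrt(D*t)))."""
    return c0 * erfc(x / (2.0 * np.sqrt(D * t)))

# Synthetic fluorescence-derived concentration profile at t = 60 s
x = np.linspace(0, 500e-6, 50)                  # distance from interface (m)
true_D = 1.86e-9                                # m^2/s, for generating toy data
rng = np.random.default_rng(3)
c = profile(x, true_D, 1.0) + 0.02 * rng.standard_normal(x.size)

(D_fit, c0_fit), _ = curve_fit(profile, x, c, p0=(1e-9, 1.0))
print(f"fitted D = {D_fit:.2e} m^2/s")
```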
Torus Approach in Gravity Field Determination from Simulated GOCE Gravity Gradients
NASA Astrophysics Data System (ADS)
Liu, Huanling; Wen, Hanjiang; Xu, Xinyu; Zhu, Guangbin
2016-08-01
In the torus approach, observations are projected onto nominal orbits with constant radius and inclination, and lumped coefficients provide a linear relationship between observations and spherical harmonic coefficients. Based on this relationship, a two-dimensional FFT and block-diagonal least-squares adjustment are used to recover the Earth's gravity field model. The Earth's gravity field model complete to degree and order 200 is recovered using simulated satellite gravity gradients on a torus grid, and the degree median error is smaller than 10⁻¹⁸, which shows the effectiveness of the torus approach. EGM2008 is employed as a reference model, and the gravity field model is resolved using simulated noise-free observations given on GOCE orbits of 61 days. The error from reduction and interpolation can be mitigated by iterations. Due to the polar gap, the precision of low-order coefficients is lower. Without considering these coefficients, the maximum geoid degree error and cumulative error are 0.022 mm and 0.099 mm, respectively. The Earth's gravity field model is also recovered from simulated observations with white noise of 5 mE/Hz^(1/2) and compared to that from the direct method. In conclusion, it is demonstrated that the torus approach is a valid method for processing the massive amount of GOCE gravity gradients.
Melse-Boonstra, A; Rexwinkel, H; Bulux, J; Solomons, N W; West, C E
1999-04-01
To compare methods for estimating discretionary salt intake, that is, salt added during food preparation and consumption in the home. The study was carried out in a rural Guatemalan village. Subjects were selected non-randomly, based on their willingness to cooperate. Nine mother-son dyads participated; the sons were aged 6-9 y. Three approaches for estimating discretionary salt consumption were used: 24 h recall; collection of duplicate portions of salt; and urinary excretion of lithium during consumption of lithium-labelled household salt. Total salt intake was assessed from the excretion of chloride over 24 h. The mean discretionary salt consumption based on lithium excretion was 3.9 ± 2.0 g/d (mean ± s.d.) for mothers and 1.3 ± 0.6 g/d for children. Estimates from the 24 h recalls and from the duplicate portion method were approximately twice and three times those measured with the lithium-marker technique, respectively. The salt intake estimated from the recall method was associated with the lithium-marker technique for both mothers and children (Spearman correlation coefficients 0.76 and 0.70, respectively). The mean daily coefficients of variation in consumption of discretionary salt measured by the three methods, for mothers and boys respectively, were: lithium marker, 51.7 and 43.7%; 24 h recall, 65.8 and 50.7%; and duplicate portion, 51.0 and 62.6%. We conclude that an interview method for estimating discretionary salt intake may be a reasonable approach for determining the relative rank-order in a population, especially among female food preparers themselves, but may grossly overestimate the actual intake of salt added during food preparation and consumption.
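The rank-order comparison reported here rests on the Spearman coefficient. A minimal sketch (assuming Python with numpy/scipy; the paired intake values are invented for illustration):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired estimates of discretionary salt intake (g/day) for nine
# mothers: an interview (24 h recall) estimate vs. the lithium-marker reference.
recall = np.array([7.1, 8.4, 5.2, 9.8, 6.3, 7.7, 4.9, 8.8, 6.0])
lithium = np.array([3.5, 4.4, 2.6, 5.9, 3.1, 4.0, 2.2, 4.8, 3.2])

rho, p = spearmanr(recall, lithium)   # rank-order agreement, robust to overestimation
cv = lithium.std(ddof=1) / lithium.mean() * 100
print(f"Spearman rho = {rho:.2f} (p = {p:.3f}), CV of lithium estimates = {cv:.1f}%")
```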
NASA Technical Reports Server (NTRS)
Markus, Thorsten; Masson, Robert; Worby, Anthony; Lytle, Victoria; Kurtz, Nathan; Maksym, Ted
2011-01-01
In October 2003 a campaign on board the Australian icebreaker Aurora Australis had the objective to validate standard Aqua Advanced Microwave Scanning Radiometer (AMSR-E) sea-ice products. Additionally, the satellite laser altimeter on the Ice, Cloud and land Elevation Satellite (ICESat) was in operation. To capture the large-scale information on the sea-ice conditions necessary for satellite validation, the measurement strategy was to obtain large-scale sea-ice statistics using extensive sea-ice measurements in a Lagrangian approach. A drifting buoy array, spanning initially 50 km × 100 km, was surveyed during the campaign. In situ measurements consisted of 12 transects, 50-500 m long, with detailed snow and ice measurements as well as random snow depth sampling of floes within the buoy array using helicopters. In order to increase the amount of coincident in situ and satellite data an approach has been developed to extrapolate measurements in time and in space. Assuming no change in snow depth and freeboard occurred during the period of the campaign on the floes surveyed, we use buoy ice-drift information as well as daily estimates of thin-ice fraction and rough-ice vs smooth-ice fractions from AMSR-E and QuikSCAT, respectively, to estimate kilometer-scale snow depth and freeboard for other days. The results show that ICESat freeboard estimates have a mean difference of 1.8 cm when compared with the in situ data and a correlation coefficient of 0.6. Furthermore, incorporating ICESat roughness information into the AMSR-E snow depth algorithm significantly improves snow depth retrievals. Snow depth retrievals using a combination of AMSR-E and ICESat data agree with in situ data with a mean difference of 2.3 cm and a correlation coefficient of 0.84 with a negligible bias.
Random walk of passive tracers among randomly moving obstacles.
Gori, Matteo; Donato, Irene; Floriani, Elena; Nardecchia, Ilaria; Pettini, Marco
2016-04-14
This study is mainly motivated by the need to understand how the diffusion behavior of a biomolecule (or even of a larger object) is affected by other moving macromolecules, organelles, and so on, inside a living cell, and hence by the possibility of understanding whether or not a randomly walking biomolecule is also subject to a long-range force field driving it to its target. By means of the Continuous Time Random Walk (CTRW) technique, the topic of random walk in a random environment is considered here for the case of a particle diffusing passively among randomly moving and interacting obstacles. The relevant physical quantity worked out is the diffusion coefficient of the passive tracer, which is computed as a function of the average inter-obstacle distance. The results reported here suggest that if a biomolecule, call it a test molecule, moves towards its target in the presence of other independently interacting molecules, its motion can be considerably slowed down.
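A crude lattice caricature of this setting (assuming Python with numpy; a simple rejection rule for blocked moves rather than the paper's CTRW formalism, with all sizes arbitrary) shows how the tracer diffusion coefficient can be estimated from the mean squared displacement:

```python
import numpy as np

# Passive tracers among randomly moving obstacles on a 2-D lattice: estimate the
# tracer diffusion coefficient from the mean squared displacement, D = MSD/(4t).
rng = np.random.default_rng(7)
n_obstacles, n_steps, n_walkers, half = 3000, 300, 200, 60
steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

obstacles = rng.integers(-half, half, size=(n_obstacles, 2))
occupied = set(map(tuple, obstacles))
pos = np.zeros((n_walkers, 2), dtype=int)          # all tracers start at the origin

for _ in range(n_steps):
    trial = pos + steps[rng.integers(0, 4, size=n_walkers)]
    blocked = np.fromiter((tuple(p) in occupied for p in trial), bool, n_walkers)
    pos[~blocked] = trial[~blocked]                # a move onto an obstacle is rejected
    obstacles += steps[rng.integers(0, 4, size=n_obstacles)]   # obstacles also diffuse
    occupied = set(map(tuple, obstacles))

msd = (pos.astype(float) ** 2).sum(axis=1).mean()
print(f"D ≈ {msd / (4 * n_steps):.3f} lattice units^2 per step")
```

Lowering the obstacle density (fewer obstacles over the same region) should push D toward the free-walk value of 0.25 in these units.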
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios, E-mail: georgios.karagiannis@pnnl.gov; Lin, Guang, E-mail: guang.lin@pnnl.gov
2014-02-15
Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.
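The regression setup that the Bayesian machinery builds on can be sketched in a few lines (assuming Python with numpy; plain least squares on a Legendre basis stands in for the paper's Bayesian model averaging, and the target function is a toy stand-in for an expensive solver):

```python
import numpy as np
from numpy.polynomial import legendre

# gPC regression in one random dimension: approximate u(xi), xi ~ U[-1, 1],
# with Legendre polynomials fitted from a modest number of random samples.
rng = np.random.default_rng(11)
u = lambda xi: np.exp(0.8 * xi) + 0.1 * xi ** 3   # stand-in for an expensive solver

xi = rng.uniform(-1, 1, 40)                        # 40 "simulation" samples
order = 8                                          # 9 gPC coefficients to estimate
V = legendre.legvander(xi, order)                  # Vandermonde matrix, Legendre basis
coef, *_ = np.linalg.lstsq(V, u(xi), rcond=None)   # least-squares gPC coefficients

xi_test = np.linspace(-1, 1, 5)
err = np.abs(legendre.legval(xi_test, coef) - u(xi_test)).max()
print(f"leading coefficients: {np.round(coef[:4], 4)}, max test error ≈ {err:.2e}")
```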
Wang, WeiBo; Sun, Wei; Wang, Wei; Szatkiewicz, Jin
2018-03-01
The application of high-throughput sequencing in a broad range of quantitative genomic assays (e.g., DNA-seq, ChIP-seq) has created a high demand for the analysis of large-scale read-count data. Typically, the genome is divided into tiling windows and windowed read-count data are generated for the entire genome, from which genomic signals are detected (e.g., copy number changes in DNA-seq, enrichment peaks in ChIP-seq). For accurate analysis of read-count data, many state-of-the-art statistical methods use generalized linear models (GLM) coupled with the negative-binomial (NB) distribution, leveraging its ability for simultaneous bias correction and signal detection. However, although statistically powerful, the GLM+NB method has a quadratic computational complexity and therefore suffers from slow running time when applied to large-scale windowed read-count data. In this study, we aimed to substantially speed up the GLM+NB method by using a randomized algorithm, and we demonstrate the utility of our approach in the application of detecting copy number variants (CNVs) using a real example. We propose an efficient estimator, the randomized GLM+NB coefficients estimator (RGE), for speeding up the GLM+NB method. RGE samples the read-count data and solves the estimation problem on a smaller scale. We first theoretically validated the consistency and the variance properties of RGE. We then applied RGE to GENSENG, a GLM+NB based method for detecting CNVs, and named the resulting method "R-GENSENG". Based on extensive evaluation using both simulated and empirical data, we concluded that R-GENSENG is ten times faster than the original GENSENG while maintaining GENSENG's accuracy in CNV detection. Our results suggest that the RGE strategy developed here could be applied to other GLM+NB based read-count analyses, such as ChIP-seq data analysis, to substantially improve their computational efficiency while preserving the analytic power.
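The core RGE idea, fitting the NB regression on a random subsample of windows, can be sketched as follows (assuming Python with numpy and statsmodels; covariate names, sizes, and coefficients are invented for illustration):

```python
import numpy as np
import statsmodels.api as sm

# Sketch of a randomized GLM+NB estimator: fit the negative-binomial regression
# on a random subsample of genomic windows instead of all of them.
rng = np.random.default_rng(5)
n_windows = 200_000
gc = rng.uniform(0.3, 0.7, n_windows)             # per-window GC content covariate
mappability = rng.uniform(0.5, 1.0, n_windows)
X = sm.add_constant(np.column_stack([gc, mappability]))
mu = np.exp(1.0 + 0.8 * gc + 0.5 * mappability)   # true mean model for toy counts
counts = rng.negative_binomial(n=10, p=10 / (10 + mu))

idx = rng.choice(n_windows, size=20_000, replace=False)  # the "R" in the RGE idea
model = sm.GLM(counts[idx], X[idx], family=sm.families.NegativeBinomial(alpha=0.1))
fit = model.fit()
print(np.round(fit.params, 3))   # close to the full-data estimates, much faster
```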
NASA Astrophysics Data System (ADS)
Xu, Yingru; Bernhard, Jonah E.; Bass, Steffen A.; Nahrgang, Marlene; Cao, Shanshan
2018-01-01
By applying a Bayesian model-to-data analysis, we estimate the temperature and momentum dependence of the heavy quark diffusion coefficient in an improved Langevin framework. The posterior range of the diffusion coefficient is obtained by performing a Markov chain Monte Carlo random walk and calibrating on the experimental data of D-meson RAA and v2 in three different collision systems at the Relativistic Heavy-Ion Collider (RHIC) and the Large Hadron Collider (LHC): Au-Au collisions at 200 GeV and Pb-Pb collisions at 2.76 and 5.02 TeV. The spatial diffusion coefficient is found to be consistent with lattice QCD calculations and comparable with other models' estimates. We demonstrate the capability of our improved Langevin model to simultaneously describe the RAA and v2 at both RHIC and LHC energies, as well as higher-order flow coefficients such as the D-meson v3. We show that by applying a Bayesian analysis, we are able to quantitatively and systematically study heavy flavor dynamics in heavy-ion collisions.
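The calibration engine here is a random-walk Markov chain Monte Carlo. A generic sketch of that ingredient (assuming Python with numpy; a toy linear model and invented data rather than the Langevin framework):

```python
import numpy as np

# Random-walk Metropolis calibration of model parameters against data,
# the basic ingredient of a Bayesian model-to-data analysis (all numbers invented).
rng = np.random.default_rng(2)
data_x = np.linspace(0, 1, 20)
data_y = 2.0 * data_x + 0.5 + 0.05 * rng.standard_normal(20)  # "experimental" data

def log_posterior(theta):
    a, b = theta
    if not (0 < a < 5 and 0 < b < 2):             # flat prior on a box
        return -np.inf
    resid = data_y - (a * data_x + b)
    return -0.5 * np.sum((resid / 0.05) ** 2)     # Gaussian likelihood

theta = np.array([1.0, 1.0])
chain = []
for _ in range(20_000):
    prop = theta + 0.02 * rng.standard_normal(2)  # random-walk proposal
    if np.log(rng.random()) < log_posterior(prop) - log_posterior(theta):
        theta = prop
    chain.append(theta)
post = np.array(chain[5_000:])                    # drop burn-in
print("posterior mean ± sd:", post.mean(0), post.std(0))
```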
Superimposed Code Theoretic Analysis of Deoxyribonucleic Acid (DNA) Codes and DNA Computing
2010-01-01
partitioned by font type) of sequences are allowed to be in each position (e.g., Arial = position 0, Comic = position 1, etc.) and within each collection... movement was modeled by a three-dimensional Brownian-motion random walk. The one-dimensional diffusion coefficient D for the ellipsoid shape with 3... temperature, kB is Boltzmann's constant, and η is the viscosity of the medium. The random walk motion is modeled by assuming the oligo is on a three...
Analytic Regularity and Polynomial Approximation of Parametric and Stochastic Elliptic PDEs
2010-05-31
Todor: Finite elements for elliptic problems with stochastic coefficients, Comp. Meth. Appl. Mech. Engg. 194 (2005), 205-228. [14] R. Ghanem and P. Spanos... for elliptic partial differential equations with random input data, SIAM J. Num. Anal. 46 (2008), 2411-2442. [20] R. Todor, Robust eigenvalue computation... for smoothing operators, SIAM J. Num. Anal. 44 (2006), 865-878. [21] Ch. Schwab and R.A. Todor, Karhunen-Loève Approximation of Random Fields by...
Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance
NASA Astrophysics Data System (ADS)
Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra
2017-06-01
In the recent past, the risk and reliability centered maintenance (RRCM) framework was introduced with a shift in methodological focus from reliability and probabilities (expected values) to reliability, uncertainty and risk. In this paper the authors explain a novel methodology for risk quantification and for ranking critical items to prioritize maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the RPN values of the items of a system, and the maintenance significant precipitating factors (MSPF) of the items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient deduces risk influenced by uncertainty and sensitivity, besides other factors. The third risk coefficient, called the hazardous risk coefficient, covers anticipated hazards which may occur in the future; this risk is deduced from criteria of consequences on safety, environment, maintenance and economic risks, with corresponding costs for the consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient; hence 'random number simulation' is used to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of critical items are then estimated. The prioritization in ranking of critical items using the developed mathematical model for risk assessment should be useful in optimizing financial losses and the timing of maintenance actions.
NASA Technical Reports Server (NTRS)
Childs, D. W.
1983-01-01
An improved theory for the prediction of the rotordynamic coefficients of turbulent annular seals was developed. Predictions from the theory are compared to experimental results, and an approach for the direct calculation of empirical turbulent coefficients from test data is introduced. An improved short seal solution is shown to do a better job of calculating effective stiffness and damping coefficients than either the original short seal solution or a finite-length solution. However, the original short seal solution does a much better job of predicting the equivalent added mass coefficient.
Automatic segmentation of lumbar vertebrae in CT images
NASA Astrophysics Data System (ADS)
Kulkarni, Amruta; Raina, Akshita; Sharifi Sarabi, Mona; Ahn, Christine S.; Babayan, Diana; Gaonkar, Bilwaj; Macyszyn, Luke; Raghavendra, Cauligi
2017-03-01
Lower back pain is one of the most prevalent disorders in the developed/developing world. However, its etiology is poorly understood and treatment is often determined subjectively. In order to quantitatively study the emergence and evolution of back pain, it is necessary to develop consistently measurable markers for pathology. Imaging-based measures offer one solution to this problem. The development of imaging-based quantitative biomarkers for the lower back necessitates automated techniques to acquire this data. While the problem of segmenting lumbar vertebrae has been addressed repeatedly in the literature, the associated problem of computing relevant biomarkers on the basis of the segmentation has not been addressed thoroughly. In this paper, we propose a Random-Forest based approach that learns to segment vertebral bodies in CT images, followed by a biomarker evaluation framework that extracts vertebral heights and widths from the segmentations obtained. Our dataset consists of 15 CT sagittal scans obtained from General Electric Healthcare. Our main approach is divided into three parts: the first stage is image pre-processing, which corrects for variations in illumination across the images and prepares the foreground and background objects; the next stage is machine learning using Random Forests, which classifies the interest-point vectors as foreground or background; and the last step is image post-processing, which is crucial for refining the results of the classifier. The Dice coefficient was used as a statistical validation metric to evaluate the performance of our segmentations, with an average value of 0.725 for our dataset.
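The validation metric is simple to state in code. A minimal sketch (assuming Python with numpy; the toy masks stand in for predicted and ground-truth vertebra segmentations):

```python
import numpy as np

def dice(seg, truth):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    denom = seg.sum() + truth.sum()
    return 2.0 * np.logical_and(seg, truth).sum() / denom if denom else 1.0

# Toy 2-D masks standing in for a predicted and a ground-truth segmentation
pred = np.zeros((64, 64))
pred[20:40, 22:44] = 1
gt = np.zeros((64, 64))
gt[22:42, 20:42] = 1
print(f"Dice = {dice(pred, gt):.3f}")
```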
Generalized expectation-maximization segmentation of brain MR images
NASA Astrophysics Data System (ADS)
Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.
2006-03-01
Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g., two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g., Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
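The EM core that such segmentation algorithms elaborate (with bias fields and MRF priors) is compact. A bare-bones sketch for a two-class 1-D Gaussian mixture (assuming Python with numpy; synthetic intensities):

```python
import numpy as np

# Bare-bones EM for a two-class 1-D Gaussian mixture: the update loop that
# EM-based segmentation methods extend with bias correction and MRF priors.
rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(40, 6, 4000), rng.normal(90, 10, 6000)])  # "intensities"

mu, sigma, w = np.array([30.0, 100.0]), np.array([10.0, 10.0]), np.array([0.5, 0.5])
for _ in range(50):
    # E-step: class responsibilities from the current Gaussian parameters
    pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances
    n = r.sum(axis=0)
    w = n / n.sum()
    mu = (r * x[:, None]).sum(axis=0) / n
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
print(np.round(mu, 1), np.round(sigma, 1), np.round(w, 2))
```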
NASA Astrophysics Data System (ADS)
Wang, Yu; Guo, Yanzhi; Kuang, Qifan; Pu, Xuemei; Ji, Yue; Zhang, Zhihang; Li, Menglong
2015-04-01
The assessment of binding affinity between ligands and target proteins plays an essential role in the drug discovery and design process. As an alternative to widely used scoring approaches, machine learning methods have also been proposed for fast prediction of binding affinity, with promising results, but most of them were developed as all-purpose models despite the specific functions of different protein families, since proteins from different functional families always have different structures and physicochemical features. In this study, we propose a random forest method to predict protein-ligand binding affinity based on a comprehensive feature set covering protein sequence, binding pocket, ligand structure and intermolecular interactions. Feature processing and compression were implemented separately for the different protein family datasets, which indicates that different features contribute to different models, so individual representation for each protein family is necessary. Three family-specific models were constructed for three important protein target families: HIV-1 protease, trypsin and carbonic anhydrase. As a comparison, two generic models including diverse protein families were also built. The evaluation results show that the models on family-specific datasets perform better than those on the generic datasets; the Pearson and Spearman correlation coefficients (Rp and Rs) on the test sets are 0.740, 0.874, 0.735 and 0.697, 0.853, 0.723 for HIV-1 protease, trypsin and carbonic anhydrase, respectively. Comparisons with other methods further demonstrate that individual representation and model construction for each protein family is a more reasonable way to predict the affinity of a particular protein family.
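In miniature, the modeling pipeline looks like this (assuming Python with scikit-learn and scipy; synthetic features and targets stand in for the sequence, pocket, ligand, and interaction descriptors):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.stats import pearsonr, spearmanr

# Family-specific affinity model in miniature: random forest regression scored
# with the Pearson and Spearman correlation coefficients on a held-out test set.
rng = np.random.default_rng(9)
X = rng.standard_normal((500, 40))                 # 500 complexes, 40 descriptors
y = X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(500)  # synthetic affinity target

train, test = np.arange(400), np.arange(400, 500)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[train], y[train])
pred = model.predict(X[test])
print(f"Rp = {pearsonr(pred, y[test])[0]:.3f}, Rs = {spearmanr(pred, y[test])[0]:.3f}")
```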
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…
On time-dependent diffusion coefficients arising from stochastic processes with memory
NASA Astrophysics Data System (ADS)
Carpio-Bernido, M. Victoria; Barredo, Wilson I.; Bernido, Christopher C.
2017-08-01
Time-dependent diffusion coefficients arise from anomalous diffusion encountered in many physical systems such as protein transport in cells. We compare these coefficients with those arising from analysis of stochastic processes with memory that go beyond fractional Brownian motion. Facilitated by the Hida white noise functional integral approach, diffusion propagators or probability density functions (pdf) are obtained and shown to be solutions of modified diffusion equations with time-dependent diffusion coefficients. This should be useful in the study of complex transport processes.
Elsayed, Mustafa M A; Vierl, Ulrich; Cevc, Gregor
2009-06-01
To date, potentiometric lipid membrane-water partition coefficient studies have neglected electrostatic interactions, which leads to incorrect results. We herein show how to account properly for such interactions in potentiometric data analysis. We conducted potentiometric titration experiments to determine lipid membrane-water partition coefficients of four illustrative drugs: bupivacaine, diclofenac, ketoprofen and terbinafine. We then analyzed the results conventionally and with an improved analytical approach that considers Coulombic electrostatic interactions. The new analytical approach delivers robust partition coefficient values. In contrast, the conventional data analysis yields apparent partition coefficients of the ionized drug forms that depend on experimental conditions (mainly the lipid-drug ratio and the bulk ionic strength). This is due to changing electrostatic effects originating from bound drug and/or lipid charges. A membrane comprising 10 mol-% mono-charged molecules in a 150 mM (monovalent) electrolyte solution yields results that differ by a factor of 4 from uncharged membrane results. Allowance for Coulombic electrostatic interactions is a prerequisite for accurate and reliable determination of lipid membrane-water partition coefficients of ionizable drugs from potentiometric titration data. The same conclusion applies to all analytical methods involving drug binding to a surface.
NASA Astrophysics Data System (ADS)
Hanasoge, Shravan; Agarwal, Umang; Tandon, Kunj; Koelman, J. M. Vianney A.
2017-09-01
Determining the pressure differential required to achieve a desired flow rate in a porous medium requires solving Darcy's law, a Laplace-like equation, with a spatially varying tensor permeability. In various scenarios, the permeability coefficient is sampled at high spatial resolution, which makes solving Darcy's equation numerically prohibitively expensive. As a consequence, much effort has gone into creating upscaled or low-resolution effective models of the coefficient while ensuring that the estimated flow rate is well reproduced, bringing to the fore the classic tradeoff between computational cost and numerical accuracy. Here we perform a statistical study to characterize the relative success of upscaling methods on a large sample of permeability coefficients that are above the percolation threshold. We introduce a technique based on mode-elimination renormalization group theory (MG) to build coarse-scale permeability coefficients. Comparing the results with coefficients upscaled using other methods, we find that MG is consistently more accurate, particularly due to its ability to address the tensorial nature of the coefficients. As we have implemented it, MG places a low computational demand, and accurate flow-rate estimates are obtained when using MG-upscaled permeabilities that approach or are beyond the percolation threshold.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach combining spectral representation and random functions for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured through just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), it becomes possible to carry out dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulent wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions, demonstrating the accuracy and efficiency of the proposed approach. Careful and in-depth studies of the probability density evolution analysis of the wind-induced structure have been conducted to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
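For orientation, the classical spectral representation that the paper's random-function constraint reduces can be sketched directly (assuming Python with numpy; the target spectrum and discretization are arbitrary illustrations, keeping the original high-dimensional random-phase formulation for clarity):

```python
import numpy as np

# Classical spectral representation of a stationary Gaussian process: a sum of
# cosines with deterministic amplitudes from the target one-sided PSD and
# independent random phases.
rng = np.random.default_rng(8)
N, dw = 512, 0.05                       # frequency discretization
w = (np.arange(N) + 0.5) * dw
S = 1.0 / (1.0 + w ** 4)                # illustrative one-sided target spectrum
amp = np.sqrt(2.0 * S * dw)             # per-term amplitude

t = np.linspace(0, 100, 2000)
phi = rng.uniform(0, 2 * np.pi, N)      # i.i.d. random phases (high random dimension)
x = (amp * np.cos(np.outer(t, w) + phi)).sum(axis=1)
print(f"sample variance ≈ {x.var():.3f}, target ≈ {(S * dw).sum():.3f}")
```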
Lambron, Julien; Rakotonjanahary, Josué; Loisel, Didier; Frampas, Eric; De Carli, Emilie; Delion, Matthieu; Rialland, Xavier; Toulgoat, Frédérique
2016-02-01
Magnetic resonance (MR) images from children with optic pathway glioma (OPG) are complex. We initiated this study to evaluate the accuracy of MR imaging (MRI) interpretation and to propose a simple and reproducible imaging classification for MRI. We randomly selected 140 MRIs from among 510 MRIs performed on 104 children diagnosed with OPG in France from 1990 to 2004. These images were reviewed independently by three radiologists (F.T., 15 years of experience in neuroradiology; D.L., 25 years of experience in pediatric radiology; and J.L., 3 years of experience in radiology) using a classification derived from the Dodge and modified Dodge classifications. Intra- and interobserver reliabilities were assessed using the Bland-Altman method and the kappa coefficient. These reviews allowed the definition of reliable criteria for MRI interpretation. The reviews showed intraobserver variability and large discrepancies among the three radiologists (kappa coefficient varying from 0.11 to 1). These variabilities were too large for the interpretation to be considered reproducible over time or among observers. A consensual analysis, taking into account all observed variabilities, allowed the development of a definitive interpretation protocol. Using this revised protocol, we observed consistent intra- and interobserver results (kappa coefficient varying from 0.56 to 1). The mean interobserver difference for the solid portion of the tumor with contrast enhancement was 0.8 cm³ (limits of agreement = -16 to 17). We propose simple and precise rules for improving the accuracy and reliability of MRI interpretation for children with OPG. Further studies will be necessary to investigate the possible prognostic value of this approach.
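The agreement statistic used here is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch (assuming Python with scikit-learn; the two readers' labels are invented):

```python
from sklearn.metrics import cohen_kappa_score

# Interobserver agreement on a categorical imaging classification (labels invented)
reader_1 = ["I", "II", "II", "III", "I", "IV", "II", "III", "I", "II"]
reader_2 = ["I", "II", "III", "III", "I", "IV", "II", "II", "I", "II"]
print(f"kappa = {cohen_kappa_score(reader_1, reader_2):.2f}")
```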
Job Stress among Hispanic Professionals
ERIC Educational Resources Information Center
Rodriguez-Calcagno, Maria; Brewer, Ernest W.
2005-01-01
This study explores job stress among a random sample of 219 Hispanic professionals. Participants complete the Job Stress Survey by Spielberger and Vagg and a demographic questionnaire. Responses are analyzed using descriptive statistics, a factorial analysis of variance, and coefficients of determination. Results indicate that Hispanic…
Deorientation of PolSAR coherency matrix for volume scattering retrieval
NASA Astrophysics Data System (ADS)
Kumar, Shashi; Garg, R. D.; Kushwaha, S. P. S.
2016-05-01
Polarimetric SAR data has proven its potential to extract scattering information for different features appearing in a single resolution cell. Several decomposition modelling approaches have been developed to retrieve scattering information from PolSAR data. In scattering power decomposition based on physical scattering models, it becomes very difficult to distinguish the volume scattering of randomly oriented vegetation from the scattering of oblique structures, which produce both double-bounce and volume scattering, because both are decomposed into the same scattering mechanism. The polarization orientation angle (POA) of an electromagnetic wave is one of its most important characteristics, and it is changed by scattering from the geometrical structure of topographic slopes, oriented urban areas and randomly oriented features such as vegetation cover. The shift in POA affects polarimetric radar signatures, so for accurate estimation of the scattering nature of a feature, compensation for the polarization orientation shift becomes an essential procedure. The prime objective of this work was to investigate the effect of the shift in POA on scattering information retrieval and to explore the effect of deorientation on the regression between field-estimated aboveground biomass (AGB) and volume scattering. Dudhwa National Park, U.P., India was selected as the study area, and fully polarimetric ALOS PALSAR data were used to retrieve scattering information from the forest area of the park. Field data for DBH and tree height were collected for AGB estimation using stratified random sampling, and AGB was estimated for 170 plots at different locations in the forest area. The Yamaguchi four-component decomposition modelling approach was utilized to retrieve surface, double-bounce, helix and volume scattering information. The shift in polarization orientation angle was estimated, and deorientation of the coherency matrix was performed to compensate for the POA shift. The effect of deorientation on the RGB color composite for the forest area can be easily seen. Overestimation of volume scattering and underestimation of double-bounce scattering were recorded for PolSAR decomposition without deorientation, and an increase in double-bounce scattering and a decrease in volume scattering were noticed after deorientation. This study focused mainly on volume scattering retrieval and its relation to field-estimated AGB. The change in volume scattering after POA compensation of the PolSAR data was recorded, and a comparison was performed on the volume scattering values for all 170 forest plots for which field data were collected. A decrease in volume scattering after deorientation was noted for all plots. Regression between PolSAR decomposition based volume scattering and AGB was performed. Before deorientation, the coefficient of determination (R²) between volume scattering and AGB was 0.225; after deorientation, it improved to 0.613. This study recommends deorientation of PolSAR data in decomposition modelling to retrieve reliable volume scattering information from forest areas.
Calculation of open and closed system elastic coefficients for multicomponent solids
NASA Astrophysics Data System (ADS)
Mishin, Y.
2015-06-01
Thermodynamic equilibrium in multicomponent solids subject to mechanical stresses is a complex nonlinear problem whose exact solution requires extensive computations. A few decades ago, Larché and Cahn proposed a linearized solution of the mechanochemical equilibrium problem by introducing the concept of open system elastic coefficients [Acta Metall. 21, 1051 (1973), 10.1016/0001-6160(73)90021-7]. Using the Ni-Al solid solution as a model system, we demonstrate that open system elastic coefficients can be readily computed by semigrand canonical Monte Carlo simulations in conjunction with the shape fluctuation approach. Such coefficients can be derived from a single simulation run, together with other thermodynamic properties needed for prediction of compositional fields in solid solutions containing defects. The proposed calculation approach enables streamlined solutions of mechanochemical equilibrium problems in complex alloys. Second order corrections to the linear theory are extended to multicomponent systems.
Zehnder, Pascal; Roth, Beat; Burkhard, Fiona C; Kessler, Thomas M
2008-09-01
We determined and compared urethral pressure measurements using air charged and microtip catheters in a prospective, single-blind, randomized trial. A consecutive series of 64 women referred for urodynamic investigation underwent sequential urethral pressure measurements using an air charged and a microtip catheter in randomized order. Patients were blinded to the type and sequence of catheter used. Agreement between the 2 catheter systems was assessed using the Bland and Altman 95% limits of agreement method. Intraclass correlation coefficients of air charged and microtip catheters for maximum urethral closure pressure at rest were 0.97 and 0.93, and for functional profile length they were 0.9 and 0.78, respectively. Pearson's correlation coefficients and Lin's concordance coefficients of air charged and microtip catheters were r = 0.82 and rho = 0.79 for maximum urethral closure pressure at rest, and r = 0.73 and rho = 0.7 for functional profile length, respectively. When applying the Bland and Altman method, air charged catheters gave higher readings than microtip catheters for maximum urethral closure pressure at rest (mean difference 7.5 cm H₂O) and functional profile length (mean difference 1.8 mm). There were wide 95% limits of agreement for differences in maximum urethral closure pressure at rest (-24.1 to 39 cm H₂O) and functional profile length (-7.7 to 11.3 mm). For urethral pressure measurement the air charged catheter is at least as reliable as the microtip catheter and it generally gives higher readings. However, air charged and microtip catheters cannot be used interchangeably for clinical purposes because of insufficient agreement. Hence, clinicians should be aware that air charged and microtip catheters may yield completely different results, and these differences should be acknowledged during clinical decision making.
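The Bland and Altman computation itself is short. A sketch (assuming Python with numpy; synthetic paired readings mimicking the maximum urethral closure pressure comparison):

```python
import numpy as np

# Bland-Altman 95% limits of agreement for paired measurements from two devices
# (synthetic values in cm H2O, mimicking paired MUCP readings from 64 patients).
rng = np.random.default_rng(12)
air = rng.normal(60, 20, 64)
microtip = air - 7.5 + rng.normal(0, 16, 64)      # microtip reads lower on average

diff = air - microtip
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"mean difference = {bias:.1f} cm H2O, "
      f"95% limits of agreement = ({loa[0]:.1f}, {loa[1]:.1f})")
```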
NASA Astrophysics Data System (ADS)
Yushkanov, A. A.; Zverev, N. V.
2018-03-01
The influence of the quantum and spatial dispersion properties of a non-degenerate electron plasma on the interaction of electromagnetic P-waves with a one-dimensional photonic crystal, consisting of a conductor with low carrier electron density and a transparent dielectric, is studied numerically. It is shown that at frequencies of the order of the plasma frequency and at small widths of the conducting and dielectric layers of the photonic crystal, the optical coefficients in the quantum non-degenerate plasma approach differ from the coefficients in the classical electron gas approach. At these frequencies one also observes a temperature dependence of the optical coefficients.
Duong, Minh V; Nguyen, Hieu T; Mai, Tam V-T; Huynh, Lam K
2018-01-03
The master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) framework has been shown to be a powerful approach for modeling the kinetic and dynamic behaviors of a complex gas-phase chemical system on a complicated multiple-species, multiple-channel potential energy surface (PES) over a wide range of temperatures and pressures. Derived from the ME time-resolved species profiles, the macroscopic or phenomenological rate coefficients are essential for many reaction engineering applications, including those in combustion and atmospheric chemistry. Therefore, in this study, a least-squares-based approach named Global Minimum Profile Error (GMPE) was proposed and implemented in the MultiSpecies-MultiChannel (MSMC) code (Int. J. Chem. Kinet., 2015, 47, 564) to extract macroscopic rate coefficients for such complicated systems. The capability and limitations of the new approach are discussed in several well-defined test cases.
Detection of shifted double JPEG compression by an adaptive DCT coefficient model
NASA Astrophysics Data System (ADS)
Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua
2014-12-01
In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. Such phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, the current SDJPEG detection methods do not provide satisfactory results especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach can achieve much better results compared with some existing approaches in SDJPEG patch detection especially when the patch size is small.
NASA Astrophysics Data System (ADS)
Gonzales, Matthew Alejandro
The calculation of the thermal neutron Doppler temperature reactivity feedback coefficient, a key parameter in the design and safe operation of advanced reactors, using first order perturbation theory in continuous energy Monte Carlo codes is challenging, as the continuous energy adjoint flux is not readily available. Traditional approaches of obtaining the adjoint flux attempt to invert the random walk process and require data corresponding to all temperatures in the system and their respective temperature derivatives in order to accurately calculate the Doppler temperature feedback. A new method has been developed using adjoint-weighted tallies and On-The-Fly (OTF) generated continuous energy cross sections within the Monte Carlo N-Particle (MCNP6) transport code. The adjoint-weighted tallies are generated during the continuous energy k-eigenvalue Monte Carlo calculation. The weighting is based upon the iterated fission probability interpretation of the adjoint flux, which is the steady state population in a critical nuclear reactor caused by a neutron introduced at that point in phase space. The adjoint-weighted tallies are produced in a forward calculation and do not require an inversion of the random walk. The OTF cross section database uses a high order functional expansion between points on a user-defined energy-temperature mesh, in which the coefficients of a polynomial fit in temperature are stored. The coefficients of the fits are generated before run-time and called upon during the simulation to produce cross sections at any given energy and temperature. The polynomial form of the OTF cross sections makes it possible to obtain temperature derivatives of the cross sections on-the-fly. The Monte Carlo sampling of adjoint-weighted tallies and the capability of computing derivatives of continuous energy cross sections with respect to temperature are used to calculate the Doppler temperature coefficient in a research version of MCNP6. Temperature feedback results from the cross sections themselves, from changes in the probability density functions, and from changes in the density of the materials. The focus of this work is specifically the Doppler temperature feedback resulting from Doppler broadening of cross sections and from changes in the probability density function within the scattering kernel. The method is compared against published results using Mosteller's numerical benchmark to show accurate evaluations of the Doppler temperature coefficient, fuel assembly calculations, and a benchmark solution based on the heavy gas model for free-gas elastic scattering. An infinite medium benchmark for neutron free gas elastic scattering with large scattering ratios and constant absorption cross section has been developed using the heavy gas model. An exact closed form solution for the neutron energy spectrum is obtained in terms of the confluent hypergeometric function and compared against spectra from the free gas scattering model in MCNP6. Results show a quick increase in convergence of the analytic energy spectrum to the MCNP6 code with increasing target size, showing absolute relative differences of less than 5% for neutrons scattering with carbon. The analytic solution has been generalized to accommodate a piecewise constant in energy absorption cross section to produce temperature feedback.
Results reinforce the constraints under which heavy gas theory may be applied, requiring a significant target size to accommodate increasing cross section structure. The energy-dependent piecewise-constant cross section heavy gas model was used to produce a benchmark calculation of the Doppler temperature coefficient, showing accurate calculations when using the adjoint-weighted method. Results show that the Doppler temperature coefficient computed using adjoint weighting and cross section derivatives accurately obtains the correct solution within statistics, while reducing computer runtimes by a factor of 50.
Oscillations and chaos in neural networks: an exactly solvable model.
Wang, L P; Pichler, E E; Ross, J
1990-01-01
We consider a randomly diluted higher-order network with noise, consisting of McCulloch-Pitts neurons that interact by Hebbian-type connections. For this model, exact dynamical equations are derived and solved for both parallel and random sequential updating algorithms. For parallel dynamics, we find a rich spectrum of different behaviors including static retrieving and oscillatory and chaotic phenomena in different parts of the parameter space. The bifurcation parameters include first- and second-order neuronal interaction coefficients and a rescaled noise level, which represents the combined effects of the random synaptic dilution, interference between stored patterns, and additional background noise. We show that a marked difference in terms of the occurrence of oscillations or chaos exists between neural networks with parallel and random sequential dynamics. PMID:2251287
Universality and chaotic dynamics in reactive scattering of ultracold KRb molecules with K atoms
NASA Astrophysics Data System (ADS)
Li, Ming; Makrides, Constantinos; Petrov, Alexander; Kotochigova, Svetlana; Croft, James F. E.; Balakrishnan, Naduvalath; Kendrick, Brian K.
2017-04-01
We study the benchmark reaction between the most celebrated ultracold polar molecule, KRb, and an ultracold K atom. For the first time we map out an accurate ab initio ground potential energy surface of the K2Rb complex in full dimensionality and perform a numerically exact quantum-mechanical calculation of the reaction dynamics based on a coupled-channels approach in hyperspherical coordinates. An analysis of the adiabatic hyperspherical potentials reveals a chaotic distribution for the short-range complex that plays a key role in governing the reaction outcome. The equivalent distribution for a lighter collisional system with a smaller density of states (here the Li2Yb trimer) shows only random behavior. We find an extreme sensitivity of our chaotic system to a small perturbation associated with the weak non-additive three-body potential contribution, which does not affect the total reaction rate coefficient but leads to a significant change in the rotational distribution of the product molecule. In both cases the distribution of these rates is random, or Poissonian. This work was supported in part by NSF Grants PHY-1505557 (N.B.) and PHY-1619788 (S.K.), ARO MURI Grant No. W911NF-12-1-0476 (N.B. & S.K.), and DOE LDRD Grant No. 20170221ER (B.K.).
Poisson-Box Sampling algorithms for three-dimensional Markov binary mixtures
NASA Astrophysics Data System (ADS)
Larmier, Coline; Zoia, Andrea; Malvagi, Fausto; Dumonteil, Eric; Mazzolo, Alain
2018-02-01
Particle transport in Markov mixtures can be addressed by the so-called Chord Length Sampling (CLS) methods, a family of Monte Carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on-the-fly the material interfaces crossed by the random walkers during their trajectories. Such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the Boltzmann equation for a large number of realizations of random media. CLS solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. In this work we propose a new family of algorithms (called 'Poisson Box Sampling', PBS) aimed at improving the accuracy of the CLS approach for transport in d-dimensional binary Markov mixtures. In order to probe the features of PBS methods, we will focus on three-dimensional Markov media and revisit the benchmark problem originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]: for these configurations we will compare reference solutions, standard CLS solutions and the new PBS solutions for scalar particle flux, transmission and reflection coefficients. PBS will be shown to perform better than CLS at the expense of a reasonable increase in computational time.
Chu, Hui-May; Ette, Ene I
2005-09-02
This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach, our proposed approach). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set; one to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability, and calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
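A simplified flavor of the 2-phase random sampling idea (assuming Python with numpy; resampling subjects and then forming AUCs on the resample, with all concentrations invented and the trapezoidal rule as an illustrative integrator):

```python
import numpy as np

# Tissue-to-plasma AUC ratio from sparse paired data: phase 1 resamples subjects,
# phase 2 integrates the resampled profiles (all values synthetic).
rng = np.random.default_rng(6)
times = np.array([0.5, 1, 2, 4, 8])                # h; one subject per time point
plasma = np.array([12.0, 9.5, 6.1, 3.2, 1.1])      # µg/mL
tissue = np.array([4.1, 5.8, 5.2, 3.9, 1.8])       # µg/g

ratios = []
for _ in range(2000):
    idx = rng.integers(0, len(times), len(times))  # phase 1: resample subjects
    order = np.argsort(times[idx])
    t = times[idx][order]
    auc_t = np.trapz(tissue[idx][order], t)        # phase 2: AUCs on the resample
    auc_p = np.trapz(plasma[idx][order], t)
    if auc_p > 0:                                  # skip degenerate resamples
        ratios.append(auc_t / auc_p)

lo, hi = np.percentile(ratios, [2.5, 97.5])
print(f"tissue/plasma AUC ratio ≈ {np.median(ratios):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```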
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models
Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.
2016-01-01
Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906
Cross over of recurrence networks to random graphs and random geometric graphs
NASA Astrophysics Data System (ADS)
Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.
2017-02-01
Recurrence networks are complex networks constructed from the time series of chaotic dynamical systems, with two nodes connected whenever the corresponding state vectors recur within a chosen recurrence threshold. This construction makes the topology of every recurrence network unique, with the degree distribution determined by the probability density variations over the representative attractor from which it is constructed. Here we numerically investigate the properties of recurrence networks from standard low-dimensional chaotic attractors using some basic network measures and show how recurrence networks differ from random and scale-free networks. In particular, we show that all recurrence networks can cross over to random geometric graphs by adding a sufficient amount of noise to the time series, and to classical random graphs by increasing the range of interaction to the system size. We also highlight the effectiveness of a combined plot of characteristic path length and clustering coefficient in capturing small changes in the network characteristics.
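A compact Python illustration of the construction, assuming a logistic-map time series, a 2-dimensional delay embedding, and an arbitrary recurrence threshold (none of which are the paper's specific settings), with networkx supplying the two measures used in the combined plot:

import numpy as np
import networkx as nx

# chaotic time series from the logistic map
x = np.empty(5000)
x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

# 2-D delay embedding (delay 1), subsampled for speed
pts = np.column_stack([x[:-1], x[1:]])[::5]

# recurrence network: connect states closer than the threshold eps
eps = 0.05
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
A = (d < eps) & ~np.eye(len(pts), dtype=bool)
G = nx.from_numpy_array(A.astype(int))

# the two measures of the combined plot; CPL on the giant component
giant = G.subgraph(max(nx.connected_components(G), key=len))
print("clustering coefficient:    ", nx.average_clustering(G))
print("characteristic path length:", nx.average_shortest_path_length(giant))

Adding noise to x before embedding, or enlarging eps toward the attractor size, reproduces the two crossovers described in the abstract.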
Geopotential coefficient determination and the gravimetric boundary value problem: A new approach
NASA Technical Reports Server (NTRS)
Sjoeberg, Lars E.
1989-01-01
New integral formulas for determining geopotential coefficients from terrestrial gravity and satellite altimetry data are given. The formulas are based on the integration of data over the non-spherical surface of the Earth. The effect of the topography on the low-degree and low-order coefficients is estimated numerically. Formulas for the solution of the gravimetric boundary value problem are also derived.
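As a point of reference for the spherical case that these formulas generalize, a short Python sketch recovering a single geopotential-style coefficient by orthogonality quadrature over a sphere; the synthetic field, the grid, and the chosen degree and order (n = 3, m = 0) are illustrative assumptions:

import numpy as np
from scipy.special import sph_harm

# midpoint grid over the sphere: THETA = azimuth, PHI = colatitude
nphi, ntheta = 90, 180
phi = (np.arange(nphi) + 0.5) * np.pi / nphi
theta = (np.arange(ntheta) + 0.5) * 2.0 * np.pi / ntheta
PHI, THETA = np.meshgrid(phi, theta, indexing="ij")
dA = np.sin(PHI) * (np.pi / nphi) * (2.0 * np.pi / ntheta)  # area element

# synthetic real "anomaly" field from a single zonal harmonic plus noise
Y30 = np.real(sph_harm(0, 3, THETA, PHI))   # m = 0, so Y is real-valued
f = 1e-5 * Y30 + 1e-7 * np.random.default_rng(2).normal(size=PHI.shape)

# orthonormality: c_nm = integral of f * Y_nm over the unit sphere
c = np.sum(f * Y30 * dA)
print(c)   # approximately 1e-5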
Measuring Developmental Students' Mathematics Anxiety
ERIC Educational Resources Information Center
Ding, Yanqing
2016-01-01
This study conducted an item-level analysis of mathematics anxiety and examined the dimensionality of mathematics anxiety in a sample of developmental mathematics students (N = 162) using the Multidimensional Random Coefficients Multinomial Logit Model (MRCMLM). The results indicate a moderately correlated factor structure of mathematics anxiety (r =…
OCT Amplitude and Speckle Statistics of Discrete Random Media.
Almasian, Mitra; van Leeuwen, Ton G; Faber, Dirk J
2017-11-01
Speckle, the pattern of amplitude fluctuations in optical coherence tomography (OCT) images, carries information on sub-resolution structural properties of the imaged sample. Speckle statistics could therefore be utilized in the characterization of biological tissues. However, a rigorous theoretical framework relating OCT speckle statistics to structural tissue properties has yet to be developed. As a first step, we present a theoretical description of OCT speckle that relates the OCT amplitude variance to the size and organization of scatterers in samples of discrete random media (DRM). Starting the calculations from the size and organization of the scattering particles, we analytically derive expressions for the OCT amplitude mean, the amplitude variance, the backscattering coefficient, and the scattering coefficient. We assume fully developed speckle and verify the validity of this assumption by experiments on controlled samples of silica microspheres suspended in water. We show that the OCT amplitude variance is sensitive to sub-resolution changes in the size and organization of the scattering particles. Experimentally determined and theoretically calculated optical properties are compared and found to be in good agreement.
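A short Monte Carlo sketch of the fully developed speckle assumption that the authors verify: summing many unit-amplitude phasors with independent uniform phases gives a Rayleigh-distributed amplitude whose mean and variance follow the standard closed forms (the scatterer and pixel counts here are arbitrary):

import numpy as np

rng = np.random.default_rng(3)

# fully developed speckle: each pixel's field is a sum of many independent
# unit-amplitude phasors with uniformly distributed phases
n_scat, n_pix = 200, 20_000
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_pix, n_scat))
amp = np.abs(np.exp(1j * phases).sum(axis=1))

# Rayleigh statistics with 2*sigma^2 = E[amp^2] = n_scat
sigma2 = n_scat / 2
print("mean:    ", amp.mean(), " theory:", np.sqrt(np.pi * sigma2 / 2))
print("variance:", amp.var(), " theory:", (2 - np.pi / 2) * sigma2)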
Perrone, T M; Gonzatti, M I; Villamizar, G; Escalante, A; Aso, P M
2009-05-12
Nine Trypanosoma sp. Venezuelan isolates, initially presumed to be T. evansi, were collected from three different hosts, capybara (Apure state), horse (Apure state), and donkey (Guarico state), and compared by the random amplified polymorphic DNA (RAPD) technique. Thirty-one to 46 reproducible fragments were obtained with 12 of the 40 primers used. Most of the primers detected molecular profiles with few polymorphisms among the seven horse, capybara, and donkey isolates. Quantitative analyses of the RAPD profiles of these isolates revealed a high degree of genetic conservation, with similarity coefficients between 85.7% and 98.5%. Ten of the primers generated polymorphic RAPD profiles with two of the three Trypanosoma sp. horse isolates, namely TeAp-N/D1 and TeGu-N/D1. The similarity coefficient between these two isolates and the rest ranged from 57.9% to 68.4%, and the corresponding dendrogram clustered TeAp-N/D1 and TeGu-N/D1 in a genetically distinct group.
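A sketch of the similarity-coefficient and dendrogram computation in Python with SciPy, using a random placeholder presence/absence band matrix rather than the actual RAPD profiles, and the Dice coefficient as one common choice (the specific coefficient used in the study may differ):

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

# placeholder RAPD data: rows = 9 isolates, columns = 40 bands (True = present)
rng = np.random.default_rng(4)
bands = rng.integers(0, 2, size=(9, 40)).astype(bool)
labels = [f"isolate_{i}" for i in range(9)]    # hypothetical names

# Dice distance = 1 - Dice similarity coefficient
dist = pdist(bands, metric="dice")
print(np.round(1.0 - squareform(dist), 3))     # pairwise similarity matrix

# UPGMA (average-linkage) clustering, commonly used for RAPD dendrograms
tree = linkage(dist, method="average")
info = dendrogram(tree, labels=labels, no_plot=True)
print(info["ivl"])                             # leaf order of the dendrogram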
Active motion assisted by correlated stochastic torques.
Weber, Christian; Radtke, Paul K; Schimansky-Geier, Lutz; Hänggi, Peter
2011-07-01
The stochastic dynamics of an active particle moving at constant speed and driven by a fluctuating overall torque is investigated. The random torques enter a stochastic differential equation for the angular dynamics of the particle, which determines its direction of motion. In addition to a constant torque, the particle experiences random torques modeled as an Ornstein-Uhlenbeck process with a given correlation time τ_c. These nonvanishing correlations cause a persistence of the particle's trajectory and change the effective spatial diffusion coefficient. We discuss the mean square displacement as a function of the correlation time and the noise intensity and find a nonmonotonic dependence of the effective diffusion coefficient on both the correlation time and the noise strength. Diffusion is maximal when the correlated angular noise compensates the constant torque, straightening the otherwise curved trajectories, which remain interrupted only by small pirouettes.
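A minimal Euler-Maruyama sketch of such a particle in Python; the parameter values and the particular OU parameterization (tau_c * d_eta = -eta dt + sqrt(2 D) dW) are illustrative assumptions, and D_eff is read off crudely from the final mean square displacement:

import numpy as np

rng = np.random.default_rng(5)

# illustrative parameters (not taken from the paper)
v, omega0 = 1.0, 0.5          # constant speed and constant torque
tau_c, D_ang = 2.0, 0.5       # OU correlation time and angular noise intensity
dt, n_steps, n_part = 0.01, 20_000, 500

phi = np.zeros(n_part)        # heading angles
eta = np.zeros(n_part)        # OU torque noise
pos = np.zeros((n_part, 2))   # positions

for _ in range(n_steps):
    # OU update: tau_c * d_eta = -eta dt + sqrt(2 D_ang) dW
    eta += (-eta * dt + np.sqrt(2.0 * D_ang * dt) * rng.normal(size=n_part)) / tau_c
    phi += (omega0 + eta) * dt                       # angular dynamics
    pos += v * dt * np.column_stack([np.cos(phi), np.sin(phi)])

# effective diffusion coefficient from the long-time MSD: <r^2> ~ 4 D_eff t
msd = (pos ** 2).sum(axis=1).mean()
print("D_eff ~", msd / (4.0 * n_steps * dt))

Sweeping tau_c and D_ang in this loop is a direct way to reproduce the nonmonotonic dependence described above.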
Magnetic orientation of nontronite clay in aqueous dispersions and its effect on water diffusion.
Abrahamsson, Christoffer; Nordstierna, Lars; Nordin, Matias; Dvinskikh, Sergey V; Nydén, Magnus
2015-01-01
The diffusion rate of water in dilute clay dispersions depends on particle concentration, size, shape, aggregation, and water-particle interactions. Because nontronite clay particles align parallel to a magnetic field, directional self-diffusion anisotropy can be created within such dispersions. Here we study water diffusion in exfoliated nontronite clay dispersions by diffusion NMR and time-dependent 1H NMR imaging profiles. The clay concentration of the dispersions was varied between 0.3 and 0.7 vol%. After magnetic alignment of the clay particles, a maximum difference of 20% was measured between the parallel and perpendicular self-diffusion coefficients in the dispersion with 0.7 vol% clay. A method was developed to measure water diffusion within the dispersion in the absence of a magnetic field (random clay orientation), as this is not possible with standard diffusion NMR. However, no significant difference in self-diffusion coefficient between random and aligned dispersions could be observed.
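For context, self-diffusion coefficients in such pulsed-field-gradient experiments are typically obtained from a Stejskal-Tanner fit; a toy Python version with a made-up b-value table and synthetic signal follows (fitting the parallel and perpendicular gradient directions separately would yield the reported anisotropy):

import numpy as np

rng = np.random.default_rng(6)

# Stejskal-Tanner attenuation: ln(S/S0) = -b * D
b = np.linspace(0.0, 3.0e9, 10)   # s/m^2, hypothetical gradient table
D_true = 2.0e-9                   # m^2/s, roughly free water at room temperature
S = np.exp(-b * D_true) * (1.0 + 0.01 * rng.normal(size=b.size))

# log-linear least-squares fit of the self-diffusion coefficient
D_fit = -np.polyfit(b, np.log(S), 1)[0]
print("D =", D_fit, "m^2/s")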
Saif-Ur-Rahman, K. M.; Parvin, Tahmina; Bhuyian, Sazzadul Islam; Zohura, Fatema; Begum, Farzana; Rashid, Mahamud-Ur; Biswas, Shwapon Kumar; Sack, David; Sack, R. Bradley; Monira, Shirajum; Alam, Munirul; Shaly, Nusrat Jahan; George, Christine Marie
2016-01-01
Previous studies have demonstrated that household contacts of cholera patients are highly susceptible to cholera infections for a 7-day period after the presentation of the index patient in the hospital. However, there is no standard of care to prevent cholera transmission in this high-risk population. Furthermore, there is limited information available on awareness of cholera transmission and prevention among cholera patients and their household contacts. To initiate a standard of care for this high-risk population, we developed the Cholera-Hospital-Based-Intervention-for-7-Days (CHoBI7), which delivers a handwashing with soap and water treatment intervention to household contacts during the time they spend with the admitted cholera patient in the hospital and reinforces these messages through home visits. To test CHoBI7, we conducted a randomized controlled trial among 302 intervention cholera patient household members and 302 control cholera patient household members in Dhaka, Bangladesh. In this study, we evaluated the effectiveness of the CHoBI7 intervention in increasing awareness of cholera transmission and prevention and of the key times for handwashing with soap. We observed a significant increase in cholera knowledge score in the intervention arm compared with the control arm at both the 1-week follow-up (score coefficient = 2.34 [95% confidence interval (CI) = 1.96, 2.71]) and the 6- to 12-month follow-up (score coefficient = 1.59 [95% CI = 1.05, 2.13]). This 1-week hospital- and home-based intervention led to a significant increase in knowledge of cholera transmission and prevention that was sustained 6 to 12 months post-intervention. These findings suggest that the CHoBI7 intervention is a promising approach to increasing cholera awareness in this high-risk population. PMID:27799644
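A hedged sketch of how such a "score coefficient" can be estimated in Python: an ordinary least-squares regression of knowledge score on treatment arm with household-clustered standard errors, on simulated data whose effect size merely mimics the 1-week result (the trial's actual model and covariates are not specified here):

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)

# simulated data: 604 contacts, two per household, arm assigned by household
n = 604
df = pd.DataFrame({
    "arm": np.repeat([1, 0], n // 2),   # 1 = CHoBI7 intervention, 0 = control
    "household": np.arange(n) // 2,     # two contacts per household
})
df["score"] = 5.0 + 2.3 * df["arm"] + rng.normal(0.0, 2.0, n)

# "score coefficient": mean difference in knowledge score between arms,
# with standard errors clustered on household
fit = sm.OLS(df["score"], sm.add_constant(df["arm"])).fit(
    cov_type="cluster", cov_kwds={"groups": df["household"]})
print(fit.params["arm"], fit.conf_int().loc["arm"].values)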