Kv7 channels regulate pairwise spiking covariability in health and disease.
Ocker, Gabriel Koch; Doiron, Brent
2014-07-15
Low-threshold M currents are mediated by the Kv7 family of potassium channels. Kv7 channels are important regulators of spiking activity, having a direct influence on the firing rate, spike time variability, and filter properties of neurons. How Kv7 channels affect the joint spiking activity of populations of neurons is an important and open area of study. Using a combination of computational simulations and analytic calculations, we show that the activation of Kv7 conductances reduces the covariability between spike trains of pairs of neurons driven by common inputs. This reduction is beyond that explained by the lowering of firing rates and involves an active cancellation of common fluctuations in the membrane potentials of the cell pair. Our theory shows that the excess covariance reduction is due to a Kv7-induced shift from low-pass to band-pass filtering of the single neuron spike train response. Dysfunction of Kv7 conductances is related to a number of neurological diseases characterized by both elevated firing rates and increased network-wide correlations. We show how changes in the activation or strength of Kv7 conductances give rise to excess correlations that cannot be compensated for by synaptic scaling or homeostatic modulation of passive membrane properties. In contrast, modulation of Kv7 activation parameters consistent with pharmacological treatments for certain hyperactivity disorders can restore normal firing rates and spiking correlations. Our results provide key insights into how regulation of a ubiquitous potassium channel class can control the coordination of population spiking activity.
Phillips, Christopher; García-Magariños, Manuel; Salas, Antonio; Carracedo, Angel; Lareu, Maria Victoria
2012-06-01
BACKGROUND: Genetic tests for kinship routinely reach likelihoods that provide virtual proof of the claimed relationship by typing microsatellites, commonly a panel of 12-15 standard forensic short tandem repeats (STRs). Single nucleotide polymorphisms (SNPs) have also been applied to kinship testing, but these binary markers are required in greater numbers than multiple-allele STRs. However, SNPs offer certain advantageous characteristics not found in STRs, including much higher mutational stability, good performance when typing highly degraded DNA, and the ability to be readily scaled up to very high marker numbers, reaching over a million loci. This article outlines kinship testing applications where SNPs markedly improve the genetic data obtained. In particular, we explore the minimum number of SNPs required to confirm pairwise relationship claims in deficient pedigrees that typify missing persons' identification or war grave investigations, where commonly few surviving relatives are available for comparison and the DNA is highly degraded. METHODS: We describe the application of SNPs alongside STRs when incomplete profiles or allelic instability in STRs create ambiguous results, we review the use of high-density SNP arrays when the relationship claim is very distant, and we outline simulations of kinship analyses with STRs supplemented by SNPs in order to estimate the practical limit of pairwise relationships that can be differentiated from random unrelated pairs from the same population. RESULTS: The minimum number of SNPs for robust statistical inference of parent-offspring relationships through to those of second cousins (S-3-3) is estimated for both simple, single-multiplex SNP sets and for subsets of million-SNP arrays. CONCLUSIONS: There is considerable scope for resolving ambiguous STR results and for improving the statistical power of kinship analysis by adding small-scale SNP sets, but where the pedigree is deficient the pairwise
Online Pairwise Learning Algorithms.
Ying, Yiming; Zhou, Ding-Xuan
2016-04-01
Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples; the most notable instances are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS), which we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013; Wang, Khardon, Pechyony, & Jones, 2012), which require that the iterates be restricted to a bounded domain or that the loss function be strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees the almost sure convergence of the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs through their associated integral operators and on probability inequalities for random variables with values in a Hilbert space.
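The last-iterate scheme described above can be sketched for the linear-kernel case. This is a simplified illustration, not the authors' algorithm: OPERA pairs each new example with all previous ones, whereas the sketch below samples a single random earlier example per step; the polynomially decaying step size and the pairwise least-squares loss follow the abstract.

```python
import numpy as np

def opera_linear(X, y, gamma0=0.05, theta=0.75, rng=None):
    """Online pairwise least-squares learning, linear-kernel sketch.

    At step t the current example is paired with one randomly chosen
    earlier example (a cheap surrogate for pairing with all of them),
    and the iterate is updated by gradient descent on the pairwise loss
    (w.(x_t - x_j) - (y_t - y_j))^2 with step size gamma0 * t**(-theta)."""
    rng = np.random.default_rng(rng)
    w = np.zeros(X.shape[1])
    for t in range(1, len(X)):
        j = rng.integers(t)                 # index of a previous example
        dx = X[t] - X[j]
        dy = y[t] - y[j]
        step = gamma0 * t ** (-theta)       # polynomially decaying step size
        grad = 2.0 * (w @ dx - dy) * dx     # gradient of the pairwise loss
        w -= step * grad
    return w
```

Because the loss depends only on differences, any constant offset in the targets cancels, which is characteristic of pairwise (ranking-style) objectives.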
NASA Technical Reports Server (NTRS)
Verber, Carl M. (Inventor); Kenan, Richard P. (Inventor)
1982-01-01
Apparatus for comparing first and second sets of voltages having one-to-one correspondence, and providing an indication responsive to the magnitudes of the pairwise differences of the voltages; typically comprising a plurality of channel waveguides (11), each having a first electrode (12) on one side of the channel (11) and a second electrode (13) on the opposite side of the channel (11); contacts (14) and conductors (16) for connecting each voltage of the first set to the first electrode (12) of one waveguide (11); contacts (15) and conductors (17) for connecting each voltage of the second set to the second electrode (13) of the waveguide (11) to the first electrode (12) of which the corresponding voltage of the first set is connected; a coupling prism (18), a beam splitter (19), and a waveguide portion (20) for directing to the input end (21,22) of each waveguide (11) a substantially plane wave of coherent light (as indicated at 23,24,25) having predetermined relative intensity and phase; and a detector (51) and associated circuitry (52,58) responsive to the light emerging (as indicated at 26) from the output end (28,29) of the waveguides (11) (via a beam splitter 44 and a coupling prism 45) for providing an indication that is a function of the pairwise relative magnitudes of the first set of voltages and the second set of voltages.
Earth Observing System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Hejduk, Matthew D.
2016-01-01
The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
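The goodness-of-fit step described above can be illustrated with a small sketch. This is a generic stand-in, not the operational tool: the Kolmogorov-Smirnov statistic stands in for whichever ECDF GOF test was actually used, and only a 3x3 position partition of the covariance is considered.

```python
import numpy as np
from scipy import stats

def covariance_realism_gof(errors, covariances, alpha=0.05):
    """Test whether propagated covariances are realistically sized.

    errors:      (n, 3) array of definitive-minus-propagated position errors
    covariances: (n, 3, 3) array of propagated position covariances
    Squared Mahalanobis distances of correctly sized errors follow a 3-DoF
    chi-squared distribution; a KS test compares their ECDF against it.
    Returns the KS p-value and a pass flag at significance level alpha."""
    d2 = np.array([e @ np.linalg.solve(P, e) for e, P in zip(errors, covariances)])
    ks = stats.kstest(d2, stats.chi2(df=3).cdf)
    return ks.pvalue, ks.pvalue > alpha
```

An undersized covariance inflates the Mahalanobis distances, so the test rejects; adding process noise until the test passes is the tuning loop the abstract describes.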
Hybrid pairwise likelihood analysis of animal behavior experiments.
Cattelan, Manuela; Varin, Cristiano
2013-12-01
The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons.
A pairwise interaction model for multivariate functional and longitudinal data.
Chiou, Jeng-Min; Müller, Hans-Georg
2016-06-01
Functional data vectors consisting of samples of multivariate data where each component is a random function are encountered increasingly often but have not yet been comprehensively investigated. We introduce a simple pairwise interaction model that leads to an interpretable and straightforward decomposition of multivariate functional data and of their variation into component-specific processes and pairwise interaction processes. The latter quantify the degree of pairwise interactions between the components of the functional data vectors, while the component-specific processes reflect the functional variation of a particular functional vector component that cannot be explained by the other components. Thus the proposed model provides an extension of the usual notion of a covariance or correlation matrix for multivariate vector data to functional data vectors and generates an interpretable functional interaction map. The decomposition provided by the model can also serve as a basis for subsequent analysis, such as study of the network structure of functional data vectors. The decomposition of the total variance into componentwise and interaction contributions can be quantified by an R²-like decomposition. We provide consistency results for the proposed methods and illustrate the model by applying it to sparsely sampled longitudinal data from the Baltimore Longitudinal Study of Aging, examining the relationships between body mass index and blood fats.
Optimal Arrangement of Components Via Pairwise Rearrangements.
1987-10-01
reliability function under component pairwise rearrangement. They use this property to find the optimal component arrangement. Worked examples illustrate the methods proposed. Keywords: Optimization; Permutations; Nodes.
The intraclass covariance matrix.
Carey, Gregory
2005-09-01
Introduced by C.R. Rao in 1945, the intraclass covariance matrix has seen little use in behavioral genetic research, despite the fact that it was developed to deal with family data. Here, I reintroduce this matrix, and outline its estimation and basic properties for data sets on pairs of relatives. The intraclass covariance matrix is appropriate whenever the research design or mathematical model treats the ordering of the members of a pair as random. Because the matrix has only one estimate of a population variance and covariance, both the observed matrix and the residual matrix from a fitted model are easy to inspect visually; there is no need to mentally average homologous statistics. Fitting a model to the intraclass matrix also gives the same log likelihood, likelihood-ratio (LR) chi2, and parameter estimates as fitting that model to the raw data. A major advantage of the intraclass matrix is that only two factors influence the LR chi2: the sampling error in estimating population parameters and the discrepancy between the model and the observed statistics. The more frequently used interclass covariance matrix adds a third factor to the chi2: sampling error of homologous statistics. Because of this, the degrees of freedom for fitting models to an intraclass matrix differ from fitting that model to an interclass matrix. Future research is needed to establish differences in power, if any, between the interclass and the intraclass matrix.
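The basic construction can be sketched via double entry of the pairs. This is an editorial illustration, not Carey's code: entering each pair in both orders makes the ordering random by construction and pools the data into a single variance and a single covariance estimate, the property the abstract highlights.

```python
import numpy as np

def intraclass_covariance(pairs):
    """Intraclass covariance matrix for pair data with random member ordering.

    pairs: (n, 2, p) array; pairs[i, 0] and pairs[i, 1] hold the p-variate
    scores of the two members of family i.  Double-entering each pair
    (once as (a, b), once as (b, a)) yields a 2p x 2p matrix with one
    pooled variance and one pooled covariance estimate per variable."""
    a, b = pairs[:, 0, :], pairs[:, 1, :]
    double = np.vstack([np.hstack([a, b]), np.hstack([b, a])])  # 2n x 2p rows
    return np.cov(double, rowvar=False)
```

By construction the member-1 and member-2 blocks of the result are identical, which is what makes the matrix easy to inspect without mentally averaging homologous statistics.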
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic", and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Combining Multiple Pairwise Structure-based Alignments
2014-11-12
CombAlign is a new Python code that generates a gapped, one-to-many, multiple structure-based sequence alignment (MSSA) given a set of pairwise structure-based alignments. In order to better define regions of similarity among related protein structures, it is useful to detect the residue-residue correspondences among a set of pairwise structure alignments. Few codes exist for constructing a one-to-many multiple sequence alignment derived from a set of structure alignments, and we perceived a need for a new tool for combining pairwise structure alignments that allows insertion of gaps in the reference structure.
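The gap bookkeeping such a tool needs can be sketched at the sequence level. This is an illustration of the idea, not CombAlign itself: real structure-based alignments carry residue coordinates, but the merge logic is the same. For every reference position, take the maximum number of residues any pairwise alignment inserts before it, open that many gap columns in the reference row, and re-pad each aligned sequence onto the shared columns.

```python
def combine_pairwise(ref, alignments):
    """Merge pairwise alignments into a one-to-many gapped MSSA.

    ref:        ungapped reference sequence
    alignments: list of (ref_gapped, other_gapped) equal-length string
                pairs, with '-' marking a gap
    Returns [merged_ref, merged_other_1, ...] on one shared column layout."""
    # maximum number of residues inserted before each reference position
    ins = [0] * (len(ref) + 1)
    for ref_gapped, _ in alignments:
        pos, run = 0, 0
        for r in ref_gapped:
            if r == '-':
                run += 1
            else:
                ins[pos] = max(ins[pos], run)
                pos, run = pos + 1, 0
        ins[pos] = max(ins[pos], run)
    # gapped reference row: open ins[i] gap columns before residue i
    merged_ref = ''.join('-' * ins[i] + ch for i, ch in enumerate(ref))
    merged_ref += '-' * ins[len(ref)]
    rows = [merged_ref]
    # re-pad each aligned sequence onto the shared columns
    for ref_gapped, other_gapped in alignments:
        row, pos, held = '', 0, ''
        for r, o in zip(ref_gapped, other_gapped):
            if r == '-':
                held += o              # residue inserted relative to the reference
            else:
                row += held.ljust(ins[pos], '-') + o
                pos, held = pos + 1, ''
        row += held.ljust(ins[pos], '-')
        rows.append(row)
    return rows
```

For example, `combine_pairwise("ACDE", [("AC-DE", "ACXDE"), ("ACDE", "AG-E")])` returns `["AC-DE", "ACXDE", "AG--E"]`: the insertion in the first alignment forces a gap column in the reference row, and the second sequence is padded accordingly.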
Doctoral Program Selection Using Pairwise Comparisons.
ERIC Educational Resources Information Center
Tadisina, Suresh K.; Bhasin, Vijay
1989-01-01
The application of a pairwise comparison methodology (Saaty's Analytic Hierarchy Process) to the doctoral program selection process is illustrated. A hierarchy for structuring and facilitating the doctoral program selection decision is described. (Author/MLW)
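The core computation of Saaty's Analytic Hierarchy Process is small enough to sketch. This is a generic AHP step, not the article's specific hierarchy: priority weights are the principal eigenvector of the pairwise comparison matrix, and the consistency index flags incoherent judgments.

```python
import numpy as np

def ahp_priorities(A):
    """Priority weights from a pairwise comparison matrix (Saaty's AHP).

    A[i, j] expresses how strongly alternative i is preferred to j, with
    the reciprocal property A[j, i] = 1 / A[i, j].  The priority vector is
    the principal eigenvector normalized to sum to 1; the consistency
    index CI = (lambda_max - n) / (n - 1) is near 0 for coherent judgments
    (Saaty's rule of thumb: CI / RI < 0.1 for the tabulated random index RI)."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)
    return w, ci
```

For a perfectly consistent matrix built from true weights, the eigenvector recovers those weights exactly and CI is zero.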
Market structure explained by pairwise interactions
NASA Astrophysics Data System (ADS)
Bury, Thomas
2013-03-01
Financial markets are a typical example of complex systems where interactions between constituents lead to many remarkable features. Here we give empirical evidence, by making as few assumptions as possible, that the market microstructure capturing almost all of the available information in the data of stock markets does not involve higher order than pairwise interactions. We give an economic interpretation of this pairwise model. We show that it accurately recovers the empirical correlation coefficients; thus the collective behaviors are quantitatively described by models that capture the observed pairwise correlations but no higher-order interactions. Furthermore, we show that an order-disorder transition occurs, as predicted by the pairwise model. Last, we make the link with the graph-theoretic description of stock markets recovering the non-random and scale-free topology, shrinking length during crashes and meaningful clustering features, as expected.
Statistical Physics of Pairwise Probability Models
Roudi, Yasser; Aurell, Erik; Hertz, John A.
2009-01-01
Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the mean values and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models. PMID:19949460
Pairwise constrained concept factorization for data representation.
He, Yangcheng; Lu, Hongtao; Huang, Lei; Xie, Saining
2014-04-01
Concept factorization (CF) is a variant of non-negative matrix factorization (NMF). In CF, each concept is represented by a linear combination of data points, and each data point is represented by a linear combination of concepts. More specifically, each concept is represented by more than one data point with different weights, and each data point carries various weights called membership to represent their degrees belonging to that concept. However, CF is actually an unsupervised method without making use of prior information of the data. In this paper, we propose a novel semi-supervised concept factorization method, called Pairwise Constrained Concept Factorization (PCCF), which incorporates pairwise constraints into the CF framework. We expect that data points which have pairwise must-link constraints should have the same class label as much as possible, while data points with pairwise cannot-link constraints will have different class labels as much as possible. Due to the incorporation of the pairwise constraints, the learning quality of the CF has been significantly enhanced. Experimental results show the effectiveness of our proposed novel method in comparison to the state-of-the-art algorithms on several real world applications.
PAIRWISE BLENDING OF HIGH LEVEL WASTE (HLW)
CERTA, P.J.
2006-02-22
The primary objective of this study is to demonstrate a mission scenario that uses pairwise and incidental blending of high level waste (HLW) to reduce the total mass of HLW glass. Secondary objectives include understanding how recent refinements to the tank waste inventory and solubility assumptions affect the mass of HLW glass and how logistical constraints may affect the efficacy of HLW blending.
Pairwise Document Classification for Relevance Feedback
2009-11-01
Pairwise Document Classification for Relevance Feedback. Jonathan L. Elsas, Pinar Donmez, Jamie Callan, Jaime G. Carbonell. Language Technologies...
Tukey-Like Pairwise Comparisons among Proportions.
ERIC Educational Resources Information Center
Williams, Richard H.
1992-01-01
A QuickBASIC microcomputer program for conducting Tukey-like pairwise comparisons on "k" independent sample proportions is described. The program can accommodate applications involving equal or unequal sample sizes. Studentized range values are computed and displayed on a computer monitor, each of which represents a simple comparison…
Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z
2015-11-01
Functional connectivity refers to shared signals among brain regions and is typically assessed in a task free state. Functional connectivity commonly is quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although RSNs are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring-embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes open vs. eyes closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting state BOLD time series reflects functional processes in addition to structural connectivity.
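The regularized-inversion step can be sketched as follows. This is an illustration, not the study's pipeline: it shrinks the sample covariance toward a scaled identity with a fixed weight, whereas the Ledoit-Wolf estimator chooses that weight from the data.

```python
import numpy as np

def partial_correlation(ts, shrinkage=0.1):
    """Partial correlation between regions via a shrunken covariance inverse.

    ts: (time, regions) array of BOLD-like time series.
    The sample covariance is shrunk toward a scaled identity so that it is
    invertible even when rank deficient; partial correlations then follow
    from the precision matrix with the standard sign convention."""
    S = np.cov(ts, rowvar=False)
    target = np.trace(S) / S.shape[0] * np.eye(S.shape[0])
    S_shrunk = (1 - shrinkage) * S + shrinkage * target
    P = np.linalg.inv(S_shrunk)                # precision matrix
    d = np.sqrt(np.diag(P))
    pcorr = -P / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr
```

When two regions correlate only because both follow a third, the marginal correlation is large but the partial correlation is near zero, which is the distinction the abstract exploits.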
ADD: How Does It Add Up in the Classroom?
ERIC Educational Resources Information Center
Sumpter, R. David; Kidd, Libby
1998-01-01
Gives a short history of attention-deficit disorder (ADD), describes characteristics of ADD, discusses a four-step plan for identification of ADD, and presents four types of management techniques: medical, environmental, classroom activity, and behavioral management. Stresses importance of cooperation and communication among teacher, parent, and…
Pairwise network information and nonlinear correlations
NASA Astrophysics Data System (ADS)
Martin, Elliot A.; Hlinka, Jaroslav; Davidsen, Jörn
2016-10-01
Reconstructing the structural connectivity between interacting units from observed activity is a challenge across many different disciplines. The fundamental first step is to establish whether or to what extent the interactions between the units can be considered pairwise and, thus, can be modeled as an interaction network with simple links corresponding to pairwise interactions. In principle, this can be determined by comparing the maximum entropy given the bivariate probability distributions to the true joint entropy. In many practical cases, this is not an option since the bivariate distributions needed may not be reliably estimated or the optimization is too computationally expensive. Here we present an approach that allows one to use mutual informations as a proxy for the bivariate probability distributions. This has the advantage of being less computationally expensive and easier to estimate. We achieve this by introducing a novel entropy maximization scheme that is based on conditioning on entropies and mutual informations. This renders our approach typically superior to other methods based on linear approximations. The advantages of the proposed method are documented using oscillator networks and a resting-state human brain network as generic relevant examples.
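The mutual informations the method conditions on are cheap to estimate from data. A minimal sketch of that estimation step (my illustration; the paper's contribution, the entropy maximization scheme that consumes these quantities, is not reproduced here):

```python
import numpy as np

def pairwise_mutual_information(states):
    """Pairwise mutual information (in bits) between binary units.

    states: (samples, units) array of 0/1 observations.  MI is computed
    from the empirical 2x2 joint distribution of each unit pair; these
    MIs, together with entropies, serve as constraints in place of the
    full bivariate distributions."""
    n, m = states.shape
    mi = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            joint = np.zeros((2, 2))
            for a in (0, 1):
                for b in (0, 1):
                    joint[a, b] = np.mean((states[:, i] == a) & (states[:, j] == b))
            pi, pj = joint.sum(1), joint.sum(0)
            nz = joint > 0                     # skip empty cells in the sum
            mi[i, j] = mi[j, i] = np.sum(
                joint[nz] * np.log2(joint[nz] / np.outer(pi, pj)[nz]))
    return mi
```

A perfectly dependent pair of fair binary units shares about one bit, while independent units share essentially none.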
ERIC Educational Resources Information Center
Armstrong, Thomas
1996-01-01
Questions the existence of attention deficit disorder (ADD), a commonly diagnosed "disease" based on behavioral characteristics. There may be no medical or physiological basis for ADD. The National Association of School Psychologists deplores labeling children and creating categories of exclusion. Instead, educators should respond to individual…
Galaxy Pairwise Velocity Distributions on Nonlinear Scales
NASA Astrophysics Data System (ADS)
Diaferio, Antonaldo; Geller, Margaret J.
1996-08-01
The redshift-space correlation function ξ_s for projected galaxy separations ≲ 1 h^-1 Mpc can be expressed as the convolution of the real-space correlation function with the galaxy pairwise velocity distribution function (PVDF). An exponential PVDF yields the best fit to the ξ_s measured from galaxy samples of different redshift surveys. We show that this exponential PVDF is not merely a fitting function but arises from well-defined gravitational processes. Two ingredients conspire to yield a PVDF with a nearly exponential shape: (1) the number density n(σ) of systems with velocity dispersion σ and (2) the unrelaxed dynamical state of most galaxy systems. The former ingredient determines the exponential tail, and the latter determines the central peak of the PVDF. We examine a third issue: the transfer of orbital kinetic energy to galaxy internal degrees of freedom. Although this effect is of secondary importance for the PVDF exponential shape, it is detectable in galaxy groups, which indicates that galaxy merging is an ongoing process in the present universe. We compare the ξ_s measured on nonlinear scales from galaxy samples of the Center for Astrophysics redshift surveys with different models of the PVDF convolved with the measured real-space correlation function. This preliminary comparison indicates that the agreement between model and observations depends strongly on both the underlying cosmological model and the internal dynamics of galaxy systems. Neither parameter dominates. Moreover, the agreement depends sensitively on the accuracy of the galaxy position and velocity measurements. We expect that ξ_s will pose further constraints on the model of the universe and will improve the knowledge of the dynamics of galaxy systems on very small scales if we improve (1) the galaxy coordinate determination and (2) the measurement of relative velocities of galaxies with small projected separation. In fact, the redshift-space correlation function
Disequilibrium mapping: Composite likelihood for pairwise disequilibrium
Devlin, B.; Roeder, K.; Risch, N.
1996-08-15
The pattern of linkage disequilibrium between a disease locus and a set of marker loci has been shown to be a useful tool for geneticists searching for disease genes. Several methods have been advanced to utilize the pairwise disequilibrium between the disease locus and each of a set of marker loci. However, none of the methods take into account the information from all pairs simultaneously while also modeling the variability in the disequilibrium values due to the evolutionary dynamics of the population. We propose a composite likelihood (CL) model that has these features when the physical distances between the marker loci are known or can be approximated. In this instance, and assuming that there is a single disease mutation, the CL model depends on only three parameters: the recombination fraction θ between the disease locus and an arbitrary marker locus, the age of the mutation, and a variance parameter. When the CL is maximized over a grid of θ, it provides a graph that can direct the search for the disease locus. We also show how the CL model can be generalized to account for multiple disease mutations. Evolutionary simulations demonstrate the power of the analyses, as well as their potential weaknesses. Finally, we analyze the data from two mapped diseases, cystic fibrosis and diastrophic dysplasia, finding that the CL method performs well in both cases. 28 refs., 6 figs., 4 tabs.
Statistical pairwise interaction model of stock market
NASA Astrophysics Data System (ADS)
Bury, Thomas
2013-03-01
Financial markets are a classical example of complex systems, as they are composed of many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results, but at the price of restrictive assumptions on the market dynamics, or they are agent-based models with rules designed to recover some empirical behaviors. Here we show that the pairwise model is actually a statistically consistent model with the observed first and second moments of the stocks' orientation, without making such restrictive assumptions. This is done with an approach based only on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought of as an Ising model on a complex network with interaction strengths scaling as the inverse of the system size. This has potentially important implications since many properties of such a model are already known and some techniques of the spin glass theory can be straightforwardly applied. Typical behaviors, such as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, and order-disorder, could find an explanation in this picture.
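The inference behind such a pairwise (Ising) description is compact enough to sketch in its naive mean-field form. This is one standard approximation to the pairwise maximum entropy fit, not necessarily the method of the paper:

```python
import numpy as np

def mean_field_ising(spins):
    """Naive mean-field inversion of a pairwise (Ising) model.

    spins: (samples, stocks) array of +/-1 binarized daily returns.
    The couplings are the off-diagonal entries of J = -C^{-1}, with C the
    connected correlation matrix, and the fields follow from the mean-field
    self-consistency m_i = tanh(h_i + sum_j J_ij m_j).  This is a standard
    weak-coupling approximation, not the exact maximum likelihood fit."""
    m = spins.mean(axis=0)
    C = np.cov(spins, rowvar=False)
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)
    h = np.arctanh(m) - J @ m
    return h, J
```

On data sampled from a small Ising model, the coupled pairs come out with couplings of roughly the right size while uncoupled pairs stay near zero, which is the sense in which the pairwise model is statistically consistent with the first and second moments.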
Galilean covariant harmonic oscillator
NASA Technical Reports Server (NTRS)
Horzela, Andrzej; Kapuscik, Edward
1993-01-01
A Galilean covariant approach to the classical mechanics of a single particle is described. Within the proposed formalism, all non-covariant force laws are rejected; acting forces are instead defined covariantly by differential equations. Such an approach leads beyond standard classical mechanics and gives an example of non-Newtonian mechanics. It is shown that the exactly solvable linear system of differential equations defining forces contains the Galilean covariant description of the harmonic oscillator as a particular case. Additionally, it is demonstrated that in Galilean covariant classical mechanics the validity of Newton's second law of dynamics implies Hooke's law and vice versa. It is shown that the kinetic and total energies transform differently with respect to Galilean transformations.
Lin, Nan Xuan; Henley, William Edward
2016-12-10
Observational studies provide a rich source of information for assessing effectiveness of treatment interventions in many situations where it is not ethical or practical to perform randomized controlled trials. However, such studies are prone to bias from hidden (unmeasured) confounding. A promising approach to identifying and reducing the impact of unmeasured confounding is prior event rate ratio (PERR) adjustment, a quasi-experimental analytic method proposed in the context of electronic medical record database studies. In this paper, we present a statistical framework for using a pairwise approach to PERR adjustment that removes bias inherent in the original PERR method. A flexible pairwise Cox likelihood function is derived and used to demonstrate the consistency of the simple and convenient alternative PERR (PERR-ALT) estimator. We show how to estimate standard errors and confidence intervals for treatment effect estimates based on the observed information and provide R code to illustrate how to implement the method. Assumptions required for the pairwise approach (as well as PERR) are clarified, and the consequences of model misspecification are explored. Our results confirm the need for researchers to consider carefully the suitability of the method in the context of each problem. Extensions of the pairwise likelihood to more complex designs involving time-varying covariates or more than two periods are considered. We illustrate the application of the method using data from a longitudinal cohort study of enzyme replacement therapy for lysosomal storage disorders. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Covariant mutually unbiased bases
NASA Astrophysics Data System (ADS)
Carmeli, Claudio; Schultz, Jussi; Toigo, Alessandro
2016-06-01
The connection between maximal sets of mutually unbiased bases (MUBs) in a prime-power dimensional Hilbert space and finite phase-space geometries is well known. In this article, we classify MUBs according to their degree of covariance with respect to the natural symmetries of a finite phase-space, which form the group of its affine symplectic transformations. We prove that there exist maximal sets of MUBs that are covariant with respect to the full group only in odd prime-power dimensional spaces, and in this case, their equivalence class is actually unique. Despite this limitation, we show that in dimension 2^r covariance can still be achieved by restricting to proper subgroups of the symplectic group, which constitute the finite analogues of the oscillator group. For these subgroups, we explicitly construct the unitary operators yielding the covariance.
Covariant Noncommutative Field Theory
Estrada-Jimenez, S.; Garcia-Compean, H.; Obregon, O.; Ramirez, C.
2008-07-02
The covariant approach to noncommutative field and gauge theories is revisited. In the process the formalism is applied to field theories invariant under diffeomorphisms. Local differentiable forms are defined in this context. The lagrangian and hamiltonian formalism is consistently introduced.
NASA Technical Reports Server (NTRS)
Ricks, W. R.
1994-01-01
PWC is used for pair-wise comparisons in both psychometric scaling techniques and cognitive research. The cognitive tasks and processes of a human operator of automated systems are now prominent considerations when defining system requirements. Recent developments in cognitive research have emphasized the potential utility of psychometric scaling techniques, such as multidimensional scaling, for representing human knowledge and cognitive processing structures. Such techniques involve collecting measurements of stimulus-relatedness from human observers. When data are analyzed using this scaling approach, an n-dimensional representation of the stimuli is produced. This resulting representation is said to describe the subject's cognitive or perceptual view of the stimuli. PWC applies one of the many techniques commonly used to acquire the data necessary for these types of analyses: pair-wise comparisons. PWC administers the task, collects the data from the test subject, and formats the data for analysis. It therefore addresses many of the limitations of the traditional "pen-and-paper" methods. By automating the data collection process, subjects are prevented from going back to check previous responses, the possibility of erroneous data transfer is eliminated, and the burden of administering and taking the test is eased. By using randomization, PWC ensures that stimulus pairs are presented in random order, and that each subject sees the pairs in a different random order. PWC is written in Turbo Pascal v6.0 for IBM PC compatible computers running MS-DOS. The program has also been successfully compiled with Turbo Pascal v7.0. A sample executable is provided. PWC requires 30K of RAM for execution. The standard distribution medium for this program is a 5.25 inch 360K MS-DOS format diskette. Two electronic versions of the documentation are included on the diskette: one in ASCII format and one in MS Word for Windows format. PWC was developed in 1993.
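The randomized pair-generation step that PWC automates is easy to sketch. The following is a hypothetical Python reimplementation for illustration (the actual program is written in Turbo Pascal; this is not its source):

```python
import random
from itertools import combinations

def make_trial_list(stimuli, seed):
    """Build a pairwise-comparison schedule for one subject: every
    unordered pair appears exactly once, in a subject-specific random
    order, with left/right presentation also randomized."""
    rng = random.Random(seed)
    pairs = list(combinations(stimuli, 2))
    rng.shuffle(pairs)                      # random pair order per subject
    return [p if rng.random() < 0.5 else p[::-1] for p in pairs]

# Hypothetical stimulus labels; each subject gets a different seed.
stimuli = ["icon", "menu", "dial", "gauge"]
trials = make_trial_list(stimuli, seed=1)   # 6 pairs = C(4, 2)
```

Seeding per subject makes each schedule reproducible while still differing between subjects, which mirrors the randomization properties the program guarantees.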
Santolini, Marc; Mora, Thierry; Hakim, Vincent
2014-01-01
The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIPseq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and dominantly located between consecutive base pairs in the flanking region of TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting TFBSs beyond
Reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization
Shi, Xin; Zhao, Xiangmo; Hui, Fei; Ma, Junyan; Yang, Lan
2014-10-06
Clock synchronization in wireless sensor networks (WSNs) has been studied extensively in recent years, and many protocols based on statistical signal processing have been put forward, as this is an effective way to optimize accuracy. However, the accuracy derived from statistical data can be improved mainly through extensive packet exchange, which greatly consumes the nodes' limited power. In this paper, a reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization is proposed to optimize sync accuracy without expending additional sync packets. As a contribution, a linear weighted fusion scheme for multiple clock deviations is constructed with the collaborative sensing of clock timestamps, and the fusion weight is defined by the covariance of sync errors for the different clock deviations. Extensive simulation results show that the proposed approach achieves better performance in terms of sync overhead and sync accuracy.
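Under the common simplifying assumption of independent sync errors, covariance-defined fusion weights reduce to inverse-variance weights. A minimal sketch of such a linear weighted fusion (illustrative numbers, not the paper's algorithm):

```python
def fuse(estimates, variances):
    """Linear weighted fusion of clock-offset estimates: each weight is
    proportional to the inverse of that estimate's error variance, so more
    reliable estimates count more. Returns the fused offset and its variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    return fused, 1.0 / total

# Three clock-offset estimates (ms) with differing error variances.
offset, var = fuse([2.1, 1.9, 2.4], [0.04, 0.01, 0.16])
```

The fused variance 1/Σ(1/σ_i²) is never larger than the smallest input variance, which is the sense in which fusion improves accuracy without extra packet exchange.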
Covariant Bardeen perturbation formalism
NASA Astrophysics Data System (ADS)
Vitenti, S. D. P.; Falciano, F. T.; Pinto-Neto, N.
2014-05-01
In a previous work we obtained a set of necessary conditions for the linear approximation in cosmology. Here we discuss the relations of this approach with the so-called covariant perturbations. It is often argued in the literature that one of the main advantages of the covariant approach to describe cosmological perturbations is that the Bardeen formalism is coordinate dependent. In this paper we will reformulate the Bardeen approach in a completely covariant manner. For that, we introduce the notion of pure and mixed tensors, which yields an adequate language to treat both perturbative approaches in a common framework. We then stress that in the referred covariant approach, one necessarily introduces an additional hypersurface choice to the problem. Using our mixed and pure tensors approach, we are able to construct a one-to-one map relating the usual gauge dependence of the Bardeen formalism with the hypersurface dependence inherent to the covariant approach. Finally, through the use of this map, we define full nonlinear tensors that at first order correspond to the three known gauge invariant variables Φ, Ψ and Ξ, which are simultaneously foliation and gauge invariant. We then stress that the use of the proposed mixed tensors allows one to construct simultaneously gauge and hypersurface invariant variables at any order.
NASA Astrophysics Data System (ADS)
Frasinski, Leszek J.
2016-08-01
Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
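At its simplest, a covariance map is the shot-to-shot covariance between every pair of channels in two spectra. A toy sketch of the basic construction (illustrative data, not from any experiment):

```python
def covariance_map(X, Y):
    """Simple covariance map C[i][j] = cov(X_i, Y_j), estimated over
    repeated shots. X and Y are lists of per-shot spectra (one inner
    list per laser shot)."""
    n = len(X)
    mx = [sum(col) / n for col in zip(*X)]
    my = [sum(col) / n for col in zip(*Y)]
    return [[sum(X[s][i] * Y[s][j] for s in range(n)) / n - mx[i] * my[j]
             for j in range(len(my))] for i in range(len(mx))]

# Two toy shots in which channel 0 of X fires exactly when channel 0 of Y
# does: the map is positive on the diagonal and negative off it.
X = [[1.0, 0.0], [0.0, 1.0]]
Y = [[1.0, 0.0], [0.0, 1.0]]
C = covariance_map(X, Y)
```

Partial covariance, as used at free electron lasers, additionally subtracts the component of this map correlated with a fluctuating parameter such as pulse energy.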
Covariance Applications with Kiwi
NASA Astrophysics Data System (ADS)
Mattoon, C. M.; Brown, D.; Elliott, J. B.
2012-05-01
The Computational Nuclear Physics group at Lawrence Livermore National Laboratory (LLNL) is developing a new tool, named `Kiwi', that is intended as an interface between the covariance data increasingly available in major nuclear reaction libraries (including ENDF and ENDL) and large-scale Uncertainty Quantification (UQ) studies. Kiwi is designed to integrate smoothly into large UQ studies, using the covariance matrix to generate multiple variations of nuclear data. The code has been tested using critical assemblies as a test case, and is being integrated into LLNL's quality assurance and benchmarking for nuclear data.
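A generic way to "generate multiple variations of nuclear data" from a covariance matrix (not Kiwi's actual implementation) is to draw correlated Gaussian perturbations through a Cholesky factor:

```python
import math
import random

def cholesky(A):
    """Lower-triangular L with A = L L^T, for symmetric positive definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

def sample_variation(mean, cov, rng):
    """One correlated variation of the data: mean + L z with z ~ N(0, I)."""
    L = cholesky(cov)
    z = [rng.gauss(0.0, 1.0) for _ in mean]
    return [m + sum(L[i][k] * z[k] for k in range(len(z)))
            for i, m in enumerate(mean)]

# Hypothetical two-parameter example: means and a 2x2 covariance.
rng = random.Random(0)
variation = sample_variation([1.0, 2.0], [[0.04, 0.01], [0.01, 0.09]], rng)
```

Running a UQ study then means propagating many such variations through the downstream simulation and examining the spread of the outputs.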
Improving Predictions in Imbalanced Data Using Pairwise Expanded Logistic Regression
Jiang, Xiaoqian; El-Kareh, Robert; Ohno-Machado, Lucila
2011-01-01
Building classifiers for medical problems often involves dealing with rare, but important events. Imbalanced datasets pose challenges to ordinary classification algorithms such as Logistic Regression (LR) and Support Vector Machines (SVM). The lack of effective strategies for dealing with imbalanced training data often results in models that exhibit poor discrimination. We propose a novel approach to estimate class memberships based on the evaluation of pairwise relationships in the training data. The method we propose, Pairwise Expanded Logistic Regression, improved discrimination and had higher accuracy when compared to existing methods in two imbalanced datasets, thus showing promise as a potential remedy for this problem. PMID:22195118
NASA Astrophysics Data System (ADS)
De Bernardis, F.; Aiola, S.; Vavagiakis, E. M.; Battaglia, N.; Niemack, M. D.; Beall, J.; Becker, D. T.; Bond, J. R.; Calabrese, E.; Cho, H.; Coughlin, K.; Datta, R.; Devlin, M.; Dunkley, J.; Dunner, R.; Ferraro, S.; Fox, A.; Gallardo, P. A.; Halpern, M.; Hand, N.; Hasselfield, M.; Henderson, S. W.; Hill, J. C.; Hilton, G. C.; Hilton, M.; Hincks, A. D.; Hlozek, R.; Hubmayr, J.; Huffenberger, K.; Hughes, J. P.; Irwin, K. D.; Koopman, B. J.; Kosowsky, A.; Li, D.; Louis, T.; Lungu, M.; Madhavacheril, M. S.; Maurin, L.; McMahon, J.; Moodley, K.; Naess, S.; Nati, F.; Newburgh, L.; Nibarger, J. P.; Page, L. A.; Partridge, B.; Schaan, E.; Schmitt, B. L.; Sehgal, N.; Sievers, J.; Simon, S. M.; Spergel, D. N.; Staggs, S. T.; Stevens, J. R.; Thornton, R. J.; van Engelen, A.; Van Lanen, J.; Wollack, E. J.
2017-03-01
We present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fits an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.
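Of the three error estimates mentioned, the jackknife is the easiest to sketch: delete one sky region at a time, recompute the binned statistic, and scale the scatter of the leave-one-out means. A minimal illustration with made-up numbers:

```python
def jackknife_cov(samples):
    """Leave-one-out jackknife covariance of a vector statistic, where
    `samples` holds the statistic measured on each of n subsamples
    (e.g. the binned pairwise momentum from each sky region)."""
    n, d = len(samples), len(samples[0])
    total = [sum(s[k] for s in samples) for k in range(d)]
    loo = [[(total[k] - s[k]) / (n - 1) for k in range(d)] for s in samples]
    mean = [sum(m[k] for m in loo) / n for k in range(d)]
    return [[(n - 1) / n * sum((m[a] - mean[a]) * (m[b] - mean[b]) for m in loo)
             for b in range(d)] for a in range(d)]

# One separation bin, three regions: the jackknife variance of the mean of
# [1, 2, 3] recovers the familiar sample variance over n, i.e. 1/3.
cov = jackknife_cov([[1.0], [2.0], [3.0]])
```

With several separation bins per subsample, the same function returns the full bin-by-bin covariance matrix used in the signal-to-noise estimate.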
Dynamics of pairwise motions in the Cosmic Web
NASA Astrophysics Data System (ADS)
Hellwing, Wojciech A.
2016-10-01
We present results of an analysis of dark matter (DM) pairwise velocity statistics in different Cosmic Web environments. We use the DM velocity and density field from the Millennium 2 simulation together with the NEXUS+ algorithm to segment the simulation volume into voxels, uniquely identifying each as one of four possible environments: nodes, filaments, walls or cosmic voids. We show that the PDFs of the mean infall velocities v12, as well as their spatial dependence together with the perpendicular and parallel velocity dispersions, bear a significant signal of the large-scale structure environment in which DM particle pairs are embedded. The pairwise flows are notably colder and of smaller mean magnitude in walls and voids than in the much denser environments of filaments and nodes. We discuss our results, indicating that they are consistent with simple theoretical predictions for pairwise motions induced by the gravitational instability mechanism. Our results indicate that the Cosmic Web elements are coherent dynamical entities rather than just temporary geometrical associations. In addition, it should be possible to observationally test various Cosmic Web finding algorithms by segmenting available peculiar velocity data and studying the resulting pairwise velocity statistics.
Discovering Pair-wise Synergies in Microarray Data
Chen, Yuan; Cao, Dan; Gao, Jun; Yuan, Zheming
2016-01-01
Informative gene selection can have important implications for the improvement of cancer diagnosis and the identification of new drug targets. Individual-gene-ranking methods ignore interactions between genes, and popular pair-wise gene evaluation methods, e.g. TSP and TSG, cannot discover pair-wise interactions. Several efforts to discover pair-wise synergy have been made based on information-theoretic approaches, such as EMBP and FeatKNN. However, the mutual information estimators employed, e.g. binarization, histogram-based and KNN estimators, depend on known data or domain characteristics. Recently, Reshef et al. proposed a novel maximal information coefficient (MIC) measure to capture a wide range of associations between two variables that has the property of generality. An extension from MIC(X; Y) to MIC(X1, X2; Y) is therefore desired. We developed an approximation algorithm for estimating MIC(X1, X2; Y), where Y is a discrete variable. MIC(X1, X2; Y) is employed to detect pair-wise synergy in simulation and cancer microarray data. The results indicate that MIC(X1, X2; Y) also has the property of generality: it can discover synergic genes that are undetectable by reference feature selection methods such as MIC(X; Y) and TSG, and these synergic genes can distinguish different phenotypes. Finally, the biological relevance of these synergic genes is validated with GO annotation and the OUgene database. PMID:27470995
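The quantity being targeted is the synergy I(X1, X2; Y) − I(X1; Y) − I(X2; Y). A plug-in (histogram-count) illustration with discrete variables — a far cruder estimator than MIC, but it shows why XOR-like interactions are invisible to single-variable measures:

```python
from collections import Counter
from math import log2

def mi(xs, ys):
    """Plug-in mutual information (bits) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# XOR: each input alone carries zero information about Y; together they
# determine Y exactly, so all the information is synergistic.
x1 = [0, 0, 1, 1]
x2 = [0, 1, 0, 1]
y = [a ^ b for a, b in zip(x1, x2)]
synergy = mi(list(zip(x1, x2)), y) - mi(x1, y) - mi(x2, y)
```

Here the synergy equals one full bit: neither gene individually distinguishes the phenotypes, while the pair does so perfectly.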
Five key factors determining pairwise correlations in visual cortex
Sahani, Maneesh; Carandini, Matteo
2015-01-01
The responses of cortical neurons to repeated presentation of a stimulus are highly variable, yet correlated. These “noise correlations” reflect a low-dimensional structure of population dynamics. Here, we examine noise correlations in 22,705 pairs of neurons in primary visual cortex (V1) of anesthetized cats, during ongoing activity and in response to artificial and natural visual stimuli. We measured how noise correlations depend on 11 factors. Because these factors are themselves not independent, we distinguished their influences using a nonlinear additive model. The model revealed that five key factors play a predominant role in determining pairwise correlations. Two of these are distance in cortex and difference in sensory tuning: these are known to decrease correlation. A third factor is firing rate: confirming most earlier observations, it markedly increased pairwise correlations. A fourth factor is spike width: cells with a broad spike were more strongly correlated amongst each other. A fifth factor is spike isolation: neurons with worse isolation were more correlated, even if they were recorded on different electrodes. For pairs of neurons with poor isolation, this last factor was the main determinant of correlations. These results were generally independent of stimulus type and timescale of analysis, but there were exceptions. For instance, pairwise correlations depended on difference in orientation tuning more during responses to gratings than to natural stimuli. These results consolidate disjoint observations in a vast literature on pairwise correlations and point towards regularities of population coding in sensory cortex. PMID:26019310
Analysis of Pairwise Preference Data Using Integrated B-SPLINES.
ERIC Educational Resources Information Center
Winsberg, Suzanne; Ramsay, James O.
1981-01-01
A general method of scaling pairwise preference data is presented that may be used without prior knowledge about the nature of the relationship between an observation and the process giving rise to it. The method involves a monotone transformation and is similar to the B-SPLINE approach. (Author/JKS)
Pairwise Identity Verification via Linear Concentrative Metric Learning.
Zheng, Lilei; Duffner, Stefan; Idrissi, Khalid; Garcia, Christophe; Baskurt, Atilla
2016-12-16
This paper presents a study of metric learning systems for pairwise identity verification, including pairwise face verification and pairwise speaker verification. These problems are challenging because the individuals in training and testing are mutually exclusive, and also because training data may be limited. For such pairwise verification problems, we present a general framework of metric learning systems and employ the stochastic gradient descent algorithm as the optimization solution. We have studied both similarity metric learning and distance metric learning systems, with either a linear or a shallow nonlinear model, under both restricted and unrestricted training settings. Extensive experiments demonstrate that with limited training pairs, learning a linear system on similar pairs only is preferable due to its simplicity and superiority: it generally achieves competitive performance on both the Labeled Faces in the Wild face dataset and the NIST speaker dataset. It is also found that a pretrained deep nonlinear model helps to improve the face verification results significantly.
Using Analysis of Covariance (ANCOVA) with Fallible Covariates
ERIC Educational Resources Information Center
Culpepper, Steven Andrew; Aguinis, Herman
2011-01-01
Analysis of covariance (ANCOVA) is used widely in psychological research implementing nonexperimental designs. However, when covariates are fallible (i.e., measured with error), which is the norm, researchers must choose from among 3 inadequate courses of action: (a) know that the assumption that covariates are perfectly reliable is violated but…
NASA Astrophysics Data System (ADS)
Reinisch, Elena C.; Cardiff, Michael; Feigl, Kurt L.
2017-01-01
Graph theory is useful for analyzing time-dependent model parameters estimated from interferometric synthetic aperture radar (InSAR) data in the temporal domain. Plotting acquisition dates (epochs) as vertices and pair-wise interferometric combinations as edges defines an incidence graph. The edge-vertex incidence matrix and the normalized edge Laplacian matrix are factors in the covariance matrix for the pair-wise data. Using empirical measures of residual scatter in the pair-wise observations, we estimate the relative variance at each epoch by inverting the covariance of the pair-wise data. We evaluate the rank deficiency of the corresponding least-squares problem via the edge-vertex incidence matrix. We implement our method in a MATLAB software package called GraphTreeTA available on GitHub (https://github.com/feigl/gipht). We apply temporal adjustment to the data set described in Lu et al. (Geophys Res Solid Earth 110, 2005) at Okmok volcano, Alaska, which erupted most recently in 1997 and 2008. The data set contains 44 differential volumetric changes and uncertainties estimated from interferograms between 1997 and 2004. Estimates show that approximately half of the magma volume lost during the 1997 eruption was recovered by the summer of 2003. Between June 2002 and September 2003, the estimated rate of volumetric increase is (6.2 ± 0.6) × 10^6 m^3/year . Our preferred model provides a reasonable fit that is compatible with viscoelastic relaxation in the five years following the 1997 eruption. Although we demonstrate the approach using volumetric rates of change, our formulation in terms of incidence graphs applies to any quantity derived from pair-wise differences, such as range change, range gradient, or atmospheric delay.
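The incidence structure described above can be sketched directly: one row per interferometric pair, one column per epoch. The epoch labels below are hypothetical, not the actual Okmok acquisition dates:

```python
def incidence_matrix(epochs, pairs):
    """Edge-vertex incidence matrix of an interferogram graph: one row per
    pair-wise combination (edge), one column per acquisition epoch (vertex),
    with -1 at the earlier epoch and +1 at the later one."""
    col = {e: k for k, e in enumerate(epochs)}
    A = []
    for early, late in pairs:
        row = [0] * len(epochs)
        row[col[early]], row[col[late]] = -1, 1
        A.append(row)
    return A

epochs = ["1997-10", "2000-07", "2002-06", "2003-09"]   # hypothetical dates
pairs = [("1997-10", "2000-07"), ("2000-07", "2002-06"),
         ("2002-06", "2003-09"), ("1997-10", "2002-06")]
A = incidence_matrix(epochs, pairs)
```

Each row sums to zero, which exposes the rank deficiency of the least-squares problem: epoch values are determined only up to an additive constant, so one epoch must be fixed as the reference.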
Covariant deformed oscillator algebras
NASA Technical Reports Server (NTRS)
Quesne, Christiane
1995-01-01
The general form and associativity conditions of deformed oscillator algebras are reviewed. It is shown how the latter can be fulfilled in terms of a solution of the Yang-Baxter equation when this solution has three distinct eigenvalues and satisfies a Birman-Wenzl-Murakami condition. As an example, an SU(sub q)(n) x SU(sub q)(m)-covariant q-bosonic algebra is discussed in some detail.
The Bayesian Covariance Lasso.
Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G
2013-04-01
Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly as frequentist methods for non-full rank data.
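The simplest estimator illustrating why shrinkage helps in the n < d regime — a linear pull toward a scaled identity target, not BCLASSO itself — can be written in a few lines:

```python
def shrink_cov(S, delta):
    """Linear shrinkage of a sample covariance S toward the scaled identity
    target (tr(S)/d) * I: returns (1 - delta) * S + delta * (tr(S)/d) * I.
    For any delta > 0 the result is positive definite even when S is
    rank deficient (the n < d case)."""
    d = len(S)
    mu = sum(S[i][i] for i in range(d)) / d   # average eigenvalue, tr(S)/d
    return [[(1 - delta) * S[i][j] + (delta * mu if i == j else 0.0)
             for j in range(d)] for i in range(d)]

# A rank-1 (singular) sample covariance from a single observation:
S = [[1.0, 1.0], [1.0, 1.0]]
T = shrink_cov(S, 0.5)   # now positive definite
```

Shrinkage also reduces the condition number of the estimate, which is the stability benefit mentioned above for the n slightly larger than d case.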
Attention Deficit Disorder (ADD). Digest #445.
ERIC Educational Resources Information Center
Scott, Mary E.
The term "attention deficit disorder" (ADD) is defined, criteria used by the American Psychiatric Association in diagnosing ADD are listed, and possible causes noted. Remediation needs of children with ADD include attention skills, self-esteem, and social skills. Early diagnosis is important, and teachers and parents need to identify…
Temporal pairwise spike correlations fully capture single-neuron information
Dettner, Amadeus; Münzberg, Sabrina; Tchumatchenko, Tatjana
2016-01-01
To crack the neural code and read out the information neural spikes convey, it is essential to understand how the information is coded and how much of it is available for decoding. To this end, it is indispensable to derive from first principles a minimal set of spike features containing the complete information content of a neuron. Here we present such a complete set of coding features. We show that temporal pairwise spike correlations fully determine the information conveyed by a single spiking neuron with finite temporal memory and stationary spike statistics. We reveal that interspike interval temporal correlations, which are often neglected, can significantly change the total information. Our findings provide a conceptual link between numerous disparate observations and recommend shifting the focus of future studies from addressing firing rates to addressing pairwise spike correlation functions as the primary determinants of neural information. PMID:27976717
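For binned spike trains, the pairwise correlation functions in question are straightforward to estimate. A minimal sketch with toy trains (illustrative data only):

```python
def spike_corr(a, b, max_lag):
    """Raw pairwise spike correlation function C(tau) = <a(t) b(t + tau)>,
    averaged over valid time bins, for binned (0/1) spike trains."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        vals = [a[t] * b[t + lag] for t in range(len(a))
                if 0 <= t + lag < len(b)]
        out[lag] = sum(vals) / len(vals)
    return out

# Toy trains in which b tends to fire one bin after a: the correlation
# function peaks at lag +1.
a = [1, 0, 0, 1, 0, 1, 0, 0]
b = [0, 1, 0, 0, 1, 0, 1, 0]
corr = spike_corr(a, b, 1)
```

Setting b = a (at nonzero lags) gives the interspike-interval-sensitive autocorrelation whose contribution to the total information the abstract highlights.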
A Pairwise Preferential Interaction Model for Understanding Peptide Aggregation
Kang, Myungshim
2010-01-01
A pairwise preferential interaction model (PPIM), based on Kirkwood–Buff integrals, is developed to quantify and characterize the interactions between some of the functional groups commonly observed in peptides. The existing experimental data are analyzed to determine the preferential interaction (PI) parameters for different amino acid and small peptide systems in aqueous solutions. The PIs between the different functional groups present in the peptides are then isolated and quantified by assuming simple pairwise additivity. The PPIM approach provides consistent estimates for the pair interactions between the same functional groups obtained from different solute molecules. Furthermore, these interactions appear to be chemically intuitive. It is argued that this type of approach can provide valuable information concerning specific functional group correlations which could give rise to peptide aggregation. PMID:20694045
Covariant magnetic connection hypersurfaces
NASA Astrophysics Data System (ADS)
Pegoraro, F.
2016-04-01
In the single fluid, non-relativistic, ideal magnetohydrodynamic (MHD) plasma description, magnetic field lines play a fundamental role by defining dynamically preserved `magnetic connections' between plasma elements. Here we show how the concept of magnetic connection needs to be generalized in the case of a relativistic MHD description where we require covariance under arbitrary Lorentz transformations. This is performed by defining 2-D magnetic connection hypersurfaces in the 4-D Minkowski space. This generalization accounts for the loss of simultaneity between spatially separated events in different frames and is expected to provide a powerful insight into the 4-D geometry of electromagnetic fields.
Simulations of the Pairwise Kinematic Sunyaev-Zel’dovich Signal
NASA Astrophysics Data System (ADS)
Flender, Samuel; Bleem, Lindsey; Finkel, Hal; Habib, Salman; Heitmann, Katrin; Holder, Gilbert
2016-06-01
The pairwise kinematic Sunyaev-Zel'dovich (kSZ) signal from galaxy clusters is a probe of their line of sight momenta, and thus a potentially valuable source of cosmological information. In addition to the momenta, the amplitude of the measured signal depends on the properties of the intracluster gas and observational limitations such as errors in determining cluster centers and redshifts. In this work, we simulate the pairwise kSZ signal of clusters at z < 1, using the output from a cosmological N-body simulation and including the properties of the intracluster gas via a model that can be varied in post-processing. We find that modifications to the gas profile due to star formation and feedback reduce the pairwise kSZ amplitude of clusters by ~50%, relative to the naive "gas traces mass" assumption. We demonstrate that miscentering can reduce the overall amplitude of the pairwise kSZ signal by up to 10%, while redshift errors can lead to an almost complete suppression of the signal at small separations. We confirm that a high-significance detection is expected from the combination of data from current generation, high-resolution cosmic microwave background experiments, such as the South Pole Telescope, and cluster samples from optical photometric surveys, such as the Dark Energy Survey. Furthermore, we forecast that future experiments such as Advanced ACTPol in conjunction with data from the Dark Energy Spectroscopic Instrument will yield detection significances of at least 20σ, and up to 57σ in an optimistic scenario. Our simulated maps are publicly available at http://www.hep.anl.gov/cosmology/ksz.html.
Theiler, James P; Cao, Guangzhi; Bouman, Charles A
2009-01-01
Many detection algorithms in hyperspectral image analysis, from well-characterized gaseous and solid targets to deliberately uncharacterized anomalies and anomalous changes, depend on accurately estimating the covariance matrix of the background. In practice, the background covariance is estimated from samples in the image, and imprecision in this estimate can lead to a loss of detection power. In this paper, we describe the sparse matrix transform (SMT) and investigate its utility for estimating the covariance matrix from a limited number of samples. The SMT is formed by a product of pairwise coordinate (Givens) rotations, which can be efficiently estimated using greedy optimization. Experiments on hyperspectral data show that the estimate accurately reproduces even small eigenvalues and eigenvectors. In particular, we find that using the SMT to estimate the covariance matrix used in the adaptive matched filter leads to consistently higher signal-to-noise ratios.
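As a rough sketch of the idea above (not the authors' implementation; the greedy pair selection here is the classical Jacobi choice of the largest off-diagonal entry), a covariance can be decomposed by a short product of Givens rotations:

```python
import numpy as np

def smt_covariance(S, n_rot):
    """Greedy SMT-style decomposition sketch: apply n_rot Givens rotations,
    each chosen to zero the currently largest off-diagonal entry of the
    working matrix (Jacobi-style). Returns (E, d) with S ~ E @ diag(d) @ E.T,
    where E is the accumulated product of pairwise coordinate rotations."""
    A = S.copy().astype(float)
    p = A.shape[0]
    E = np.eye(p)
    for _ in range(n_rot):
        off = A - np.diag(np.diag(A))
        i, j = divmod(int(np.argmax(off ** 2)), p)
        if abs(off[i, j]) < 1e-15:
            break                          # already (numerically) diagonal
        theta = 0.5 * np.arctan2(2.0 * A[i, j], A[i, i] - A[j, j])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(p)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = -s, s           # pairwise coordinate (Givens) rotation
        A = G.T @ A @ G                    # zeroes A[i, j]
        E = E @ G
    return E, np.diag(A).copy()
```

With enough rotations this reduces to a full eigendecomposition; the SMT's point is that truncating to far fewer rotations regularizes the estimate when samples are scarce.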
Support vector machine with hypergraph-based pairwise constraints.
Hou, Qiuling; Lv, Meng; Zhen, Ling; Jing, Ling
2016-01-01
Although the support vector machine (SVM) has become a powerful tool for pattern classification and regression, a major disadvantage is that it fails to exploit the underlying correlation between pairs of data points as much as possible. Inspired by the modified pairwise constraints trick, in this paper we propose a novel classifier, termed support vector machine with hypergraph-based pairwise constraints, which improves the performance of the classical SVM by introducing a new regularization term with hypergraph-based pairwise constraints (HPC). The new classifier is expected not only to learn the structural information of each point itself, but also to acquire the prior distribution knowledge about each constrained pair by combining the discrimination metric and hypergraph learning. Three major contributions of this paper can be summarized as follows: (1) acquiring the high-order relationships between different samples by hypergraph learning; (2) presenting a more reasonable discriminative regularization term by combining the discrimination metric and hypergraph learning; (3) improving the performance of the existing SVM classifier by introducing the HPC regularization term. Comprehensive experimental results on twenty-five datasets demonstrate the validity and advantage of our approach.
A fast pairwise evaluation of molecular surface area.
Vasilyev, Vladislav; Purisima, Enrico O
2002-05-01
A fast and general analytical approach was developed for the calculation of the approximate van der Waals and solvent-accessible surface areas. The method is based on three basic ideas: the use of the Lorentz transformation formula, a rigid-geometry approximation, and a single fitting parameter that can be refitted on the fly during a simulation. The Lorentz transformation equation is used for the summation of the areas of an atom buried by its neighboring contacting atoms, and ensures that a sum of the buried pairwise areas cannot be larger than the surface area of the isolated spherical atom itself. In the rigid-geometry approximation we numerically calculate and keep constant the surface of each atom buried by the atoms involved in 1-2 and 1-3 interactions. Only the contributions from the nonbonded atoms (1-4 and higher interactions) are considered in terms of the pairwise approximation. The accuracy and speed of the method are competitive with other pairwise algorithms. A major strength of the method is the ease of parametrization.
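The Lorentz-style saturating sum can be illustrated with a toy sketch (illustrative only; the actual method evaluates the pairwise buried areas geometrically and includes the refittable parameter mentioned above):

```python
def lorentz_sum(buried_areas, a_free):
    """Accumulate pairwise buried areas with the Lorentz velocity-addition
    rule, so the running total can never exceed a_free, the surface area
    of the isolated spherical atom. a_free plays the role of the speed of
    light c in v3 = (v1 + v2) / (1 + v1*v2/c**2)."""
    total = 0.0
    for a in buried_areas:
        total = (total + a) / (1.0 + total * a / a_free ** 2)
    return total

def exposed_area(buried_areas, a_free):
    """Approximate exposed area: free-atom area minus the saturated sum."""
    return a_free - lorentz_sum(buried_areas, a_free)
```

For example, two contacts each burying 60 of a 100-unit sphere combine to about 88 units, not 120, so the exposed area stays positive.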
Pairwise KLT-Based Compression for Multispectral Images
NASA Astrophysics Data System (ADS)
Nian, Yongjian; Liu, Yu; Ye, Zhen
2016-12-01
This paper presents a pairwise KLT-based compression algorithm for multispectral images. Although the KLT has been widely employed for spectral decorrelation, its complexity is high if it is performed on the global multispectral image. To solve this problem, this paper presents a pairwise KLT for spectral decorrelation, where the KLT is performed on only two bands at a time. First, a KLT is performed on the first two adjacent bands, yielding two principal components. Next, one remaining band and the principal component (PC) with the larger eigenvalue are selected, and a KLT is performed on this new couple. This procedure is repeated until the last band is reached. Finally, the optimal truncation technique of post-compression rate-distortion optimization is employed for the rate allocation of all the PCs, followed by embedded block coding with optimized truncation to generate the final bit-stream. Experimental results show that the proposed algorithm outperforms the algorithm based on the global KLT. Moreover, the pairwise KLT structure significantly reduces complexity compared with a global KLT.
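The chained decorrelation step described above can be sketched in a few lines (a minimal illustration with assumed array shapes; the rate-allocation and entropy-coding stages are omitted):

```python
import numpy as np

def pairwise_klt(bands):
    """Pairwise-KLT spectral decorrelation sketch.
    bands: array of shape (n_bands, n_pixels), one row per spectral band.
    Returns a list of n_bands decorrelated principal components."""
    carry = bands[0] - bands[0].mean()       # current large-eigenvalue PC
    outputs = []
    for k in range(1, bands.shape[0]):
        nxt = bands[k] - bands[k].mean()
        pair = np.vstack([carry, nxt])       # KLT on just this 2-band couple
        w, v = np.linalg.eigh(np.cov(pair))  # eigenvalues in ascending order
        pcs = v.T @ pair
        outputs.append(pcs[0])               # smaller-eigenvalue PC: finished
        carry = pcs[1]                       # larger-eigenvalue PC: carries on
    outputs.append(carry)
    return outputs
```

Each iteration diagonalizes only a 2x2 covariance, which is where the complexity saving over a global KLT comes from.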
Deriving covariant holographic entanglement
NASA Astrophysics Data System (ADS)
Dong, Xi; Lewkowycz, Aitor; Rangamani, Mukund
2016-11-01
We provide a gravitational argument in favour of the covariant holographic entanglement entropy proposal. In general time-dependent states, the proposal asserts that the entanglement entropy of a region in the boundary field theory is given by a quarter of the area of a bulk extremal surface in Planck units. The main element of our discussion is an implementation of an appropriate Schwinger-Keldysh contour to obtain the reduced density matrix (and its powers) of a given region, as is relevant for the replica construction. We map this contour into the bulk gravitational theory, and argue that the saddle point solutions of these replica geometries lead to a consistent prescription for computing the field theory Rényi entropies. In the limiting case where the replica index is taken to unity, a local analysis suffices to show that these saddles lead to the extremal surfaces of interest. We also comment on various properties of holographic entanglement that follow from this construction.
Stardust Navigation Covariance Analysis
NASA Technical Reports Server (NTRS)
Menon, Premkumar R.
2000-01-01
The Stardust spacecraft was launched on February 7, 1999 aboard a Boeing Delta-II rocket. Mission participants include the National Aeronautics and Space Administration (NASA), the Jet Propulsion Laboratory (JPL), Lockheed Martin Astronautics (LMA) and the University of Washington. The primary objective of the mission is to collect in-situ samples of the coma of comet Wild-2 and return those samples to the Earth for analysis. Mission design and operational navigation for Stardust is performed by the Jet Propulsion Laboratory (JPL). This paper will describe the extensive JPL effort in support of the Stardust pre-launch analysis of the orbit determination component of the mission covariance study. A description of the mission and its trajectory will be provided first, followed by a discussion of the covariance procedure and models. Predicted accuracies will be examined as they relate to navigation delivery requirements for specific critical events during the mission. Stardust was launched into a heliocentric trajectory in early 1999. It will perform an Earth Gravity Assist (EGA) on January 15, 2001 to acquire an orbit for the eventual rendezvous with comet Wild-2. The spacecraft will fly through the coma (atmosphere) on the dayside of Wild-2 on January 2, 2004. At that time samples will be obtained using an aerogel collector. After the comet encounter Stardust will return to Earth, when the Sample Return Capsule (SRC) will separate and land at the Utah Test Site (UTTR) on January 15, 2006. The spacecraft will however be deflected off into a heliocentric orbit. The mission is divided into three phases for the covariance analysis: 1) Launch to EGA, 2) EGA to Wild-2 encounter and 3) Wild-2 encounter to Earth reentry. Orbit determination assumptions for each phase are provided. These include estimated and consider parameters and their associated a-priori uncertainties. Major perturbations to the trajectory include 19 deterministic and statistical maneuvers
Covariance Manipulation for Conjunction Assessment
NASA Technical Reports Server (NTRS)
Hejduk, M. D.
2016-01-01
Use of the probability of collision (Pc) has brought sophistication to conjunction assessment (CA). It was made possible by the JSpOC precision catalogue, which provides covariances, and it has essentially replaced miss distance as the basic CA parameter. The embrace of Pc has elevated methods that 'manipulate' the covariance to enable or improve CA calculations. Two such methods are examined here: compensation for absent or unreliable covariances through 'maximum Pc' calculation constructs, and projection (not propagation) of epoch covariances forward in time to try to enable better risk assessments. Two questions are answered about each: the situations to which such approaches are properly applicable, and the amount of utility that such methods offer.
Johnson, Brent A; Long, Qi
2011-06-01
Lung cancer is among the most common cancers in the United States, in terms of incidence and mortality. In 2009, it is estimated that more than 150,000 deaths will result from lung cancer alone. Genetic information is an extremely valuable data source in characterizing the personal nature of cancer. Over the past several years, investigators have conducted numerous association studies where intensive genetic data is collected on relatively few patients compared to the numbers of gene predictors, with one scientific goal being to identify genetic features associated with cancer recurrence or survival. In this note, we propose high-dimensional survival analysis through a new application of boosting, a powerful tool in machine learning. Our approach is based on an accelerated lifetime model and minimizing the sum of pairwise differences in residuals. We apply our method to a recent microarray study of lung adenocarcinoma and find that our ensemble is composed of 19 genes while a proportional hazards (PH) ensemble is composed of nine genes, a proper subset of the 19-gene panel. In one of our simulation scenarios, we demonstrate that PH boosting in a misspecified model tends to underfit and ignore moderately-sized covariate effects, on average. Diagnostic analyses suggest that the PH assumption is not satisfied in the microarray data and may explain, in part, the discrepancy in the sets of active coefficients. Our simulation studies and comparative data analyses demonstrate how statistical learning by PH models alone is insufficient.
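The pairwise-difference criterion referred to above is, in rank-based accelerated-life estimation, typically a Gehan-type loss; a naive O(n²) sketch of that loss (an illustration, not the paper's boosting code):

```python
def gehan_loss(residuals, observed):
    """Gehan-type loss: sum, over ordered pairs, of how far an observed
    (uncensored) residual falls below another subject's residual.
    residuals: model residuals, e.g. e_i = log(t_i) - x_i' beta.
    observed: event indicators; censored subjects (observed[i] is False)
    contribute only as comparators, never as the reference of a pair."""
    total = 0.0
    for e_i, d_i in zip(residuals, observed):
        if not d_i:
            continue
        for e_j in residuals:
            total += max(0.0, e_j - e_i)
    return total
```

Boosting then amounts to greedily adding base learners that reduce this pairwise sum, rather than a partial-likelihood criterion as in proportional hazards boosting.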
Revision of Begomovirus taxonomy based on pairwise sequence comparisons.
Brown, Judith K; Zerbini, F Murilo; Navas-Castillo, Jesús; Moriones, Enrique; Ramos-Sobrinho, Roberto; Silva, José C F; Fiallo-Olivé, Elvira; Briddon, Rob W; Hernández-Zepeda, Cecilia; Idris, Ali; Malathi, V G; Martin, Darren P; Rivera-Bustamante, Rafael; Ueda, Shigenori; Varsani, Arvind
2015-06-01
Viruses of the genus Begomovirus (family Geminiviridae) are emergent pathogens of crops throughout the tropical and subtropical regions of the world. By virtue of having a small DNA genome that is easily cloned, and due to the recent innovations in cloning and low-cost sequencing, there has been a dramatic increase in the number of available begomovirus genome sequences. Even so, most of the available sequences have been obtained from cultivated plants and are likely a small and phylogenetically unrepresentative sample of begomovirus diversity, a factor constraining taxonomic decisions such as the establishment of operationally useful species demarcation criteria. In addition, problems in assigning new viruses to established species have highlighted shortcomings in the previously recommended mechanism of species demarcation. Based on the analysis of 3,123 full-length begomovirus genome (or DNA-A component) sequences available in public databases as of December 2012, a set of revised guidelines for the classification and nomenclature of begomoviruses are proposed. The guidelines primarily consider a) genus-level biological characteristics and b) results obtained using a standardized classification tool, Sequence Demarcation Tool, which performs pairwise sequence alignments and identity calculations. These guidelines are consistent with the recently published recommendations for the genera Mastrevirus and Curtovirus of the family Geminiviridae. Genome-wide pairwise identities of 91 % and 94 % are proposed as the demarcation threshold for begomoviruses belonging to different species and strains, respectively. Procedures and guidelines are outlined for resolving conflicts that may arise when assigning species and strains to categories wherever the pairwise identity falls on or very near the demarcation threshold value.
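Under the proposed demarcation scheme, a single pairwise comparison reduces to a threshold test. A simplified sketch follows; the identity here just counts matched positions in a pre-computed alignment, whereas the guidelines rely on Sequence Demarcation Tool output, and values on or near a threshold receive the extra conflict-resolution procedures described above:

```python
def pairwise_identity(seq_a, seq_b):
    """Fraction of identical positions between two aligned, equal-length
    sequences (gaps counted as mismatches; a simplification of SDT output)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(x == y for x, y in zip(seq_a, seq_b))
    return matches / len(seq_a)

def classify_pair(identity):
    """Apply the proposed genome-wide thresholds: below 91% identity,
    different species; 91-94%, same species but different strains;
    94% or above, same strain."""
    if identity < 0.91:
        return "different species"
    if identity < 0.94:
        return "same species, different strains"
    return "same strain"
```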
CSA: comprehensive comparison of pairwise protein structure alignments
Wohlers, Inken; Malod-Dognin, Noël; Andonov, Rumen; Klau, Gunnar W.
2012-01-01
CSA is a web server for the computation, evaluation and comprehensive comparison of pairwise protein structure alignments. Its exact alignment engine computes either optimal, top-scoring alignments or heuristic alignments with quality guarantee for the inter-residue distance-based scorings of contact map overlap, PAUL, DALI and MATRAS. These and additional, uploaded alignments are compared using a number of quality measures and intuitive visualizations. CSA brings new insight into the structural relationship of the protein pairs under investigation and is a valuable tool for studying structural similarities. It is available at http://csa.project.cwi.nl. PMID:22553365
Inviting Calm Within: ADD, Neurology, and Mindfulness
ERIC Educational Resources Information Center
Riner, Phillip S.; Tanase, Madalina
2014-01-01
The fourth edition of the "Diagnostic and Statistical Manual of Mental Disorders" ("DSM IV") describes ADD as behaviorally observed impairments in attention, impulsivity, and hyperactivity. Although the condition is officially known as AD/HD, we use ADD here because we are dealing primarily with attention, organizational, and impulsivity issues. A more…
ADD: Acronym for Any Dysfunction or Difficulty.
ERIC Educational Resources Information Center
Goodman, Gay; Poillion, Mary Jo
1992-01-01
Review of 48 articles and books on attention deficit disorder (ADD) found a total of 69 characteristics and 38 causes cited, evidencing no clearcut pattern for identifying the condition and little agreement for what causes ADD. The label appears to have limited value for communication, planning, decision making, or research efforts. (Author/DB)
Covariant harmonic oscillators: 1973 revisited
NASA Technical Reports Server (NTRS)
Noz, M. E.
1993-01-01
Using the relativistic harmonic oscillator, a physical basis is given to the phenomenological wave function of Yukawa which is covariant and normalizable. It is shown that this wave function can be interpreted in terms of the unitary irreducible representations of the Poincare group. The transformation properties of these covariant wave functions are also demonstrated.
Covariance hypotheses for LANDSAT data
NASA Technical Reports Server (NTRS)
Decell, H. P.; Peters, C.
1983-01-01
Two covariance hypotheses are considered for LANDSAT data acquired by sampling fields, one an autoregressive covariance structure and the other the hypothesis of exchangeability. A minimum entropy approximation of the first structure by the second is derived and shown to have desirable properties for incorporation into a mixture density estimation procedure. Results of a rough test of the exchangeability hypothesis are presented.
Donegan, Sarah; Williamson, Paula; D'Alessandro, Umberto; Tudur Smith, Catrin
2012-12-20
Mixed treatment comparison (MTC) meta-analysis allows several treatments to be compared in a single analysis while utilising direct and indirect evidence. Treatment by covariate interactions can be included in MTC models to explore how the covariate modifies the treatment effects. If interactions exist, the assumptions underlying MTCs may be invalidated. For conventional pair-wise meta-analysis, important benefits regarding the investigation of such interactions, gained from using individual patient data (IPD) rather than aggregate data (AD), have been described. We aim to compare IPD MTC models including patient-level covariates with AD MTC models including study-level covariates. IPD and AD random-effects MTC models for dichotomous outcomes are specified. Three assumptions are made regarding the interactions (i.e. independent, exchangeable and common interactions). The models are applied to a dataset to compare four drugs for treating malaria (i.e. amodiaquine-artesunate, dihydroartemisinin-piperaquine (DHAPQ), artemether-lumefantrine and chlorproguanil-dapsone plus artesunate) using the outcome unadjusted treatment success at day 28. The treatment effects and regression coefficients for interactions from the IPD models were more precise than those from AD models. Using IPD, assuming independent or exchangeable interactions, the regression coefficient for chlorproguanil-dapsone plus artesunate versus DHAPQ was statistically significant and assuming common interactions, the common coefficient was significant; whereas using AD, no coefficients were significant. Using IPD, DHAPQ was the best drug; whereas using AD, the best drug varied. Using AD models, there was no evidence that the consistency assumption was invalid; whereas, the assumption was questionable based on the IPD models. The AD analyses were misleading.
Pairwise velocities in the "Running FLRW" cosmological model
NASA Astrophysics Data System (ADS)
Bibiano, Antonio; Croton, Darren J.
2017-01-01
We present an analysis of the pairwise velocity statistics from a suite of cosmological N-body simulations describing the "Running Friedmann-Lemaître-Robertson-Walker" (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends ΛCDM with a time-evolving vacuum energy density, ρ _Λ. To enforce local conservation of matter a time-evolving gravitational coupling is also included. Our results constitute the first study of velocities in the R-FLRW cosmology, and we also compare with other dark energy simulations suites, repeating the same analysis. We find a strong degeneracy between the pairwise velocity and σ8 at z = 0 for almost all scenarios considered, which remains even when we look back to epochs as early as z = 2. We also investigate various Coupled Dark Energy models, some of which show minimal degeneracy, and reveal interesting deviations from ΛCDM which could be readily exploited by future cosmological observations to test and further constrain our understanding of dark energy.
Congruence of Behavioral Symptomatology in Children with ADD/H, ADD/WO, and Learning Disabilities.
ERIC Educational Resources Information Center
Stanford, Lisa D.; Hynd, George W.
1994-01-01
This study compared parent and teacher behavioral ratings for 77 children (ages 5-16) diagnosed as having attention deficit disorder with hyperactivity (ADD/H), attention deficit disorder without hyperactivity (ADD/WO), or learning disabilities (LD). ADD/WO and LD children were rated similarly on symptoms of withdrawal and impulsivity but differed…
Covariance Manipulation for Conjunction Assessment
NASA Technical Reports Server (NTRS)
Hejduk, M. D.
2016-01-01
The manipulation of space object covariances to try to provide additional or improved information to conjunction risk assessment is not an uncommon practice. Types of manipulation include fabricating a covariance when it is missing or unreliable to force the probability of collision (Pc) to a maximum value ('PcMax'), scaling a covariance to try to improve its realism or see the effect of covariance volatility on the calculated Pc, and constructing the equivalent of an epoch covariance at a convenient future point in the event ('covariance forecasting'). In bringing these methods to bear for Conjunction Assessment (CA) operations, however, some do not remain fully consistent with best practices for conducting risk management, some seem to be of relatively low utility, and some require additional information before they can contribute fully to risk analysis. This study describes some basic principles of modern risk management (following the Kaplan construct) and then examines the PcMax and covariance forecasting paradigms for alignment with these principles; it then further examines the expected utility of these methods in the modern CA framework. Both paradigms are found to be not without utility, but only in situations that are somewhat carefully circumscribed.
ERIC Educational Resources Information Center
Savalei, Victoria; Bentler, Peter M.
2005-01-01
This article proposes a new approach to the statistical analysis of pairwise-present covariance structure data. The estimator is based on maximizing the complete data likelihood function, and the associated test statistic and standard errors are corrected for misspecification using Satorra-Bentler corrections. A Monte Carlo study was conducted to…
Covariance Models for Hydrological Applications
NASA Astrophysics Data System (ADS)
Hristopulos, Dionissios
2014-05-01
This methodological contribution aims to present some new covariance models with applications in the stochastic analysis of hydrological processes. More specifically, we present explicit expressions for radially symmetric, non-differentiable, Spartan covariance functions in one, two, and three dimensions. The Spartan covariance parameters include a characteristic length, an amplitude coefficient, and a rigidity coefficient which determines the shape of the covariance function. Different expressions are obtained depending on the value of the rigidity coefficient and the dimensionality. If the value of the rigidity coefficient is much larger than one, the Spartan covariance function exhibits multiscaling. Spartan covariance models are more flexible than the classical geostatistical models (e.g., spherical, exponential). Their non-differentiability makes them suitable for modelling the properties of geological media. We also present a family of radially symmetric, infinitely differentiable Bessel-Lommel covariance functions which are valid in any dimension. These models involve combinations of Bessel and Lommel functions. They provide a generalization of the J-Bessel covariance function, and they can be used to model smooth processes with an oscillatory decay of correlations. We discuss the dependence of the integral range of the Spartan and Bessel-Lommel covariance functions on the parameters. We point out that the dependence is not uniquely specified by the characteristic length, unlike the classical geostatistical models. Finally, we define and discuss the use of the generalized spectrum for characterizing different correlation length scales; the spectrum is defined in terms of an exponent α. We show that the spectrum values obtained for exponent values less than one can be used to discriminate between mean-square continuous but non-differentiable random fields.
76 FR 49508 - ``Add Us In'' Initiative
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-10
From the Federal Register Online via the Government Publishing Office. DEPARTMENT OF LABOR, Office of Disability Employment Program, ``Add Us In'' Initiative. AGENCY: Office of Disability Employment Policy, Department of Labor. ACTION: Correction to the Funding Opportunity Number and Closing...
Calibration of Smartphone-Based Weather Measurements Using Pairwise Gossip.
Zamora, Jane Louie Fresco; Kashihara, Shigeru; Yamaguchi, Suguru
2015-01-01
Accurate and reliable daily global weather reports are necessary for weather forecasting and climate analysis. However, the availability of these reports continues to decline due to the lack of economic support and policies in maintaining ground weather measurement systems from where these reports are obtained. Thus, to mitigate data scarcity, it is required to utilize weather information from existing sensors and built-in smartphone sensors. However, as smartphone usage often varies according to human activity, it is difficult to obtain accurate measurement data. In this paper, we present a heuristic-based pairwise gossip algorithm that will calibrate smartphone-based pressure sensors with respect to fixed weather stations as our referential ground truth. Based on actual measurements, we have verified that smartphone-based readings are unstable when observed during movement. Using our calibration algorithm on actual smartphone-based pressure readings, the updated values were significantly closer to the ground truth values. PMID:26421312
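The flavour of a pairwise gossip update can be shown with a toy sketch (the paper's heuristic is more elaborate; the node names, round count, and fixed-reference rule here are illustrative assumptions):

```python
import random

def gossip_calibrate(values, reference_ids, rounds=2000, seed=0):
    """Toy pairwise-gossip calibration. values maps sensor id -> current
    pressure estimate; ids in reference_ids are fixed weather stations
    treated as ground truth and never updated. Each round, one randomly
    chosen pair exchanges readings and each non-reference member of the
    pair moves to the pair average."""
    rng = random.Random(seed)
    ids = list(values)
    for _ in range(rounds):
        a, b = rng.sample(ids, 2)
        avg = (values[a] + values[b]) / 2.0
        if a not in reference_ids:
            values[a] = avg
        if b not in reference_ids:
            values[b] = avg
    return values
```

Because the reference stations never move, repeated pairwise averaging pulls the smartphone estimates toward the ground-truth readings rather than toward the network's own (possibly biased) mean.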
Prediction of Spatiotemporal Patterns of Neural Activity from Pairwise Correlations
Marre, O.; El Boustani, S.; Fregnac, Y.; Destexhe, A.
2009-04-03
We designed a model-based analysis to predict the occurrence of population patterns in distributed spiking activity. Using a maximum entropy principle with a Markovian assumption, we obtain a model that accounts for both spatial and temporal pairwise correlations among neurons. This model is tested on data generated with a Glauber spin-glass system and is shown to correctly predict the occurrence probabilities of spatiotemporal patterns significantly better than Ising models only based on spatial correlations. This increase of predictability was also observed on experimental data recorded in parietal cortex during slow-wave sleep. This approach can also be used to generate surrogates that reproduce the spatial and temporal correlations of a given data set.
Hash subgraph pairwise kernel for protein-protein interaction extraction.
Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian; Li, Yanpeng
2012-01-01
Extracting protein-protein interaction (PPI) from biomedical literature is an important task in biomedical text mining (BioTM). In this paper, we propose a hash subgraph pairwise (HSP) kernel-based approach for this task. The key to the novel kernel is the use of hierarchical hash labels to express the structural information of subgraphs in linear time. We apply the graph kernel to dependency graphs representing sentence structure for the protein-protein interaction extraction task; this efficiently makes use of the full graph structural information and, in particular, captures contiguous topological and label information that previous approaches ignored. We evaluate the proposed approach on five publicly available PPI corpora. The experimental results show that our approach significantly outperforms the all-path kernel approach on all five corpora and achieves state-of-the-art performance.
Calibration of Smartphone-Based Weather Measurements Using Pairwise Gossip
Zamora, Jane Louie Fresco; Kashihara, Shigeru; Yamaguchi, Suguru
2015-01-01
Accurate and reliable daily global weather reports are necessary for weather forecasting and climate analysis. However, the availability of these reports continues to decline due to the lack of economic support and policies for maintaining the ground weather measurement systems from which these reports are obtained. Thus, to mitigate data scarcity, it is necessary to utilize weather information from existing sensors and built-in smartphone sensors. However, as smartphone usage often varies with human activity, it is difficult to obtain accurate measurement data. In this paper, we present a heuristic-based pairwise gossip algorithm that calibrates smartphone-based pressure sensors with respect to fixed weather stations, which serve as our referential ground truth. Based on actual measurements, we have verified that smartphone-based readings are unstable when observed during movement. Using our calibration algorithm on actual smartphone-based pressure readings, the updated values were significantly closer to the ground truth values. PMID:26421312
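The pairwise gossip update at the core of such a calibration can be sketched in a few lines. The node values, weights, and the scheme of pinning the reference station at offset zero are illustrative assumptions, not the paper's exact heuristic:

```python
def gossip_step(x_i, x_j, w=0.5):
    """Standard pairwise gossip: both nodes move to a weighted average."""
    avg = w * x_i + (1 - w) * x_j
    return avg, avg

# Phones hold pressure-offset estimates (reading minus station truth); the fixed
# weather station participates as a node pinned at offset 0, pulling the
# network toward the ground truth.
offsets = [2.1, -0.7, 1.3]
for _ in range(50):
    # Each phone gossips with the station; with w=0.5 every offset halves per round.
    offsets = [gossip_step(x, 0.0)[0] for x in offsets]
```

After enough rounds the phone offsets contract toward the station's value, which is the sense in which gossip "calibrates" the mobile sensors.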
On the early epidemic dynamics for pairwise models.
Llensa, Carlos; Juher, David; Saldaña, Joan
2014-07-07
The relationship between the basic reproduction number R0 and the exponential growth rate, specific to pair approximation models, is derived for the SIS, SIR and SEIR deterministic models without demography. These models are extended by including a random rewiring of susceptible individuals from infectious (and exposed) neighbours. The derived relationship between the exponential growth rate and R0 appears as formally consistent with those derived from homogeneous mixing models, enabling us to measure the transmission potential using the early growth rate of cases. On the other hand, the algebraic expression of R0 for the SEIR pairwise model shows that its value is affected by the average duration of the latent period, in contrast to what happens for the homogeneous mixing SEIR model. Numerical simulations on complex contact networks are performed to check the analytical assumptions and predictions.
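For reference, the corresponding relation for the homogeneous-mixing SIR model without demography, against which the pairwise results are compared, is the standard one:

```latex
% Early phase of homogeneous-mixing SIR: dI/dt \approx (\beta - \gamma) I,
% so the exponential growth rate r and the basic reproduction number satisfy
r = \beta - \gamma, \qquad R_0 = \frac{\beta}{\gamma}
\quad\Longrightarrow\quad R_0 = 1 + \frac{r}{\gamma}.
```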
Pairwise cobalt doping of boron carbides with cobaltocene
Ignatov, A. Yu.; Losovyj, Ya. B.; Carlson, L.; LaGraffe, D.; Brand, J. I.; Dowben, P. A.
2007-10-15
We have performed Co K-edge x-ray absorption fine structure and x-ray absorption near edge structure measurements of Co-doped plasma enhanced chemical vapor phase deposition (PECVD) grown 'C₂B₁₀Hₓ' semiconducting boron carbides, using cobaltocene. Cobalt does not dope PECVD grown boron carbides as a random fragment of the cobaltocene source gas. The Co atoms are fivefold boron coordinated (R = 2.10 ± 0.02 Å) and are chemically bonded to the icosahedral cages of B₁₀CHₓ or B₉C₂Hᵧ. Pairwise Co doping occurs, with the cobalt atoms favoring sites some 5.28 ± 0.02 Å apart.
Time-Frequency Analysis Reveals Pairwise Interactions in Insect Swarms
NASA Astrophysics Data System (ADS)
Puckett, James G.; Ni, Rui; Ouellette, Nicholas T.
2015-06-01
The macroscopic emergent behavior of social animal groups is a classic example of dynamical self-organization, and is thought to arise from the local interactions between individuals. Determining these interactions from empirical data sets of real animal groups, however, is challenging. Using multicamera imaging and tracking, we studied the motion of individual flying midges in laboratory mating swarms. By performing a time-frequency analysis of the midge trajectories, we show that the midge behavior can be segmented into two distinct modes: one that is independent and composed of low-frequency maneuvers, and one that consists of higher-frequency nearly harmonic oscillations conducted in synchrony with another midge. We characterize these pairwise interactions, and make a hypothesis as to their biological function.
Hawking radiation and covariant anomalies
Banerjee, Rabin; Kulkarni, Shailesh
2008-01-15
Generalizing the method of Wilczek and collaborators we provide a derivation of Hawking radiation from charged black holes using only covariant gauge and gravitational anomalies. The reliability and universality of the anomaly cancellation approach to Hawking radiation is also discussed.
The sparse matrix transform for covariance estimation and analysis of high dimensional signals.
Cao, Guangzhi; Bachega, Leonardo R; Bouman, Charles A
2011-03-01
Covariance estimation for high dimensional signals is a classically difficult problem in statistical signal analysis and machine learning. In this paper, we propose a maximum likelihood (ML) approach to covariance estimation, which employs a novel non-linear sparsity constraint. More specifically, the covariance is constrained to have an eigen decomposition which can be represented as a sparse matrix transform (SMT). The SMT is formed by a product of pairwise coordinate rotations known as Givens rotations. Using this framework, the covariance can be efficiently estimated using greedy optimization of the log-likelihood function, and the number of Givens rotations can be efficiently computed using a cross-validation procedure. The resulting estimator is generally positive definite and well-conditioned, even when the sample size is limited. Experiments on a combination of simulated data, standard hyperspectral data, and face image sets show that the SMT-based covariance estimates are consistently more accurate than both traditional shrinkage estimates and recently proposed graphical lasso estimates for a variety of different classes and sample sizes. An important property of the new covariance estimate is that it naturally yields a fast implementation of the estimated eigen-transformation using the SMT representation. In fact, the SMT can be viewed as a generalization of the classical fast Fourier transform (FFT) in that it uses "butterflies" to represent an orthonormal transform. However, unlike the FFT, the SMT can be used for fast eigen-signal analysis of general non-stationary signals.
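A single Givens rotation of the kind an SMT composes can be sketched as a Jacobi-style step that zeroes one off-diagonal covariance entry; the 2×2 example and the greedy choice of coordinates are illustrative, not the paper's full algorithm:

```python
import numpy as np

def givens_step(S, i, j):
    """One Givens (Jacobi) rotation chosen to zero the off-diagonal entry S[i, j]."""
    theta = 0.5 * np.arctan2(2 * S[i, j], S[i, i] - S[j, j])
    G = np.eye(S.shape[0])
    c, s = np.cos(theta), np.sin(theta)
    G[i, i], G[j, j], G[i, j], G[j, i] = c, c, -s, s
    return G, G.T @ S @ G          # rotated covariance; S[i, j] is driven to zero

S = np.array([[2.0, 0.8],
              [0.8, 1.0]])
G, S_rot = givens_step(S, 0, 1)    # S_rot is (numerically) diagonal here
```

An SMT stacks many such rotations, chosen greedily by the log-likelihood, so that their product approximates the eigenvector matrix while each factor stays trivially cheap to apply.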
Consistency of crisp and fuzzy pairwise comparison matrix using fuzzy preference programming
NASA Astrophysics Data System (ADS)
Aminuddin, Adam Shariff Adli; Nawawi, Mohd Kamal Mohd
2014-12-01
In this paper, the consistency of a crisp pairwise comparison matrix is compared with that of the fuzzy pairwise comparison matrix of the Analytic Network Process (ANP). The fuzzy input, in the form of a triangular membership function, is converted into a crisp value using the Fuzzy Preference Programming (FPP) method, implemented in MATLAB. The consistency ratio (CR) for both the crisp and fuzzy pairwise comparison matrices is calculated using SuperDecisions. The main finding is that incorporating fuzzy elements into the decision maker's judgment can reduce the inconsistency of the pairwise comparison matrix compared with crisp judgment.
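The consistency ratio computed by AHP/ANP tools such as SuperDecisions follows Saaty's standard recipe; a minimal sketch for a crisp 3×3 judgment matrix (the matrix entries are illustrative):

```python
import numpy as np

# Reciprocal pairwise comparison matrix (illustrative judgments).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
n = A.shape[0]

lam_max = np.max(np.real(np.linalg.eigvals(A)))  # principal eigenvalue
CI = (lam_max - n) / (n - 1)                     # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]              # Saaty's random index
CR = CI / RI                                     # acceptable if CR < 0.1
```

For a perfectly consistent matrix `lam_max` equals `n` and `CR` is zero; the fuzzy-input pipeline in the paper feeds FPP-derived crisp values into exactly this kind of check.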
Nonreciprocal photonic crystal add-drop filter
Tao, Keyu; Xiao, Jun-Jun; Yin, Xiaobo
2014-11-24
We present a versatile add-drop integrated photonic filter (ADF) consisting of nonreciprocal waveguides in which the propagation of light is restricted in one predetermined direction. With the bus and add/drop waveguides symmetrically coupled through a cavity, the four-port device allows each individual port to add and/or drop a signal of the same frequency. The scheme is general and we demonstrate the nonreciprocal ADF with magneto-optical photonic crystals. The filter is immune to waveguide defects, allowing straightforward implementation of multi-channel ADFs by cascading the four-port designs. The results should find applications in wavelength-division multiplexing and related integrated photonic techniques.
Covariate-free and Covariate-dependent Reliability.
Bentler, Peter M
2016-12-01
Classical test theory reliability coefficients are said to be population specific. Reliability generalization, a meta-analysis method, is the main procedure for evaluating the stability of reliability coefficients across populations. A new approach is developed to evaluate the degree of invariance of reliability coefficients to population characteristics. Factor or common variance of a reliability measure is partitioned into parts that are, and are not, influenced by control variables, resulting in a partition of reliability into a covariate-dependent and a covariate-free part. The approach can be implemented in a single sample and can be applied to a variety of reliability coefficients.
Levy Matrices and Financial Covariances
NASA Astrophysics Data System (ADS)
Burda, Zdzislaw; Jurkiewicz, Jerzy; Nowak, Maciej A.; Papp, Gabor; Zahed, Ismail
2003-10-01
In a given market, financial covariances capture the intra-stock correlations and can be used to address statistically the bulk nature of the market as a complex system. We provide a statistical analysis of three SP500 covariances with evidence for raw tail distributions. We study the stability of these tails against reshuffling for the SP500 data and show that the covariance with the strongest tails is robust, with a spectral density in remarkable agreement with random Lévy matrix theory. We study the inverse participation ratio for the three covariances. The strong localization observed at both ends of the spectral density is analogous to the localization exhibited in the random Lévy matrix ensemble. We discuss two competitive mechanisms responsible for the occurrence of an extensive and delocalized eigenvalue at the edge of the spectrum: (a) the Lévy character of the entries of the correlation matrix and (b) a sort of off-diagonal order induced by underlying inter-stock correlations. (b) can be destroyed by reshuffling, while (a) cannot. We show that the stocks with the largest scattering are the least susceptible to correlations, and likely candidates for the localized states. We introduce a simple model for price fluctuations which captures behavior of the SP500 covariances. It may be of importance for assets diversification.
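The inverse participation ratio used in this localization analysis is simple to state; a minimal sketch with two limiting cases:

```python
import numpy as np

def ipr(v):
    """Inverse participation ratio of an eigenvector: sum_i v_i^4 after normalization."""
    v = v / np.linalg.norm(v)
    return float(np.sum(v ** 4))

N = 4
delocalized = ipr(np.ones(N))    # fully extended vector: IPR = 1/N
localized = ipr(np.eye(N)[0])    # vector living on one component: IPR = 1
```

Large IPR values at the edges of the spectral density therefore flag eigenvectors concentrated on a few stocks, the localized states discussed above.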
Shift-and-add for astronomical imaging
NASA Technical Reports Server (NTRS)
Ribak, Erez; Hege, E. Keith; Strobel, Nicolas V.; Christou, Julian C.
1989-01-01
Diffraction-limited astronomical images have been obtained utilizing a variant of the shift-and-add method. It is shown that the matched filter approach for extending the weighted shift-and-add method reduces specklegrams from extended objects and from an object dominated by photon noise. The method is aberration-insensitive and yields very high dynamic range results. The iterative method for arriving at the matched filter does not automatically converge in the case of photon-noisy specklegrams for objects with more than one maximum.
A pairwise alignment algorithm which favors clusters of blocks.
Nédélec, Elodie; Moncion, Thomas; Gassiat, Elisabeth; Bossard, Bruno; Duchateau-Nguyen, Guillemette; Denise, Alain; Termier, Michel
2005-01-01
Pairwise sequence alignments aim to decide whether two sequences are related and, if so, to exhibit their related domains. Recent works have pointed out that a significant number of truly homologous sequences are missed when using classical comparison algorithms. This is the case when two homologous sequences share several small blocks of homology, each too small to yield a significant score. On the other hand, classical alignment algorithms, when detecting homologies, may fail to recognize all the significant biological signals. The aim of this paper is to give a solution to these two problems. We propose a new scoring method which tends to increase the score of an alignment when "blocks" are detected. This so-called Block-Scoring algorithm, which makes use of dynamic programming, is worth using as a complementary tool to classical exact alignment methods. We validate our approach by applying it to a large set of biological data. Finally, we give a limit theorem for the score statistics of the algorithm.
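The flavor of block-favoring scoring can be shown with a toy dynamic-programming alignment that pays a bonus for extending a contiguous run of matches. This is a simplified stand-in for the Block-Scoring idea; the scoring constants and the state design are illustrative, not the paper's:

```python
def block_align(a, b, match=1, mismatch=-1, gap=-2, block_bonus=1):
    """Global alignment score where consecutive matches earn an extra block bonus."""
    n, m = len(a), len(b)
    NEG = float("-inf")
    # S[i][j][k]: best score for prefixes a[:i], b[:j]; k=1 iff the cell ends in a match.
    S = [[[NEG, NEG] for _ in range(m + 1)] for _ in range(n + 1)]
    S[0][0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            for k in (0, 1):
                cur = S[i][j][k]
                if cur == NEG:
                    continue
                if i < n and j < m:
                    if a[i] == b[j]:        # extending a match run earns the bonus
                        sc = cur + match + (block_bonus if k else 0)
                        S[i + 1][j + 1][1] = max(S[i + 1][j + 1][1], sc)
                    else:
                        S[i + 1][j + 1][0] = max(S[i + 1][j + 1][0], cur + mismatch)
                if i < n:
                    S[i + 1][j][0] = max(S[i + 1][j][0], cur + gap)
                if j < m:
                    S[i][j + 1][0] = max(S[i][j + 1][0], cur + gap)
    return max(S[n][m])

score = block_align("ACGT", "ACGT")
```

Because each extra consecutive match adds `match + block_bonus` rather than `match` alone, clusters of small homologous blocks score super-linearly in their length, which is the effect the paper exploits.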
ERIC Educational Resources Information Center
Sari, Halil Ibrahim; Huggins, Anne Corinne
2015-01-01
This study compares two methods of defining groups for the detection of differential item functioning (DIF): (a) pairwise comparisons and (b) composite group comparisons. We aim to emphasize and empirically support the notion that the choice of pairwise versus composite group definitions in DIF is a reflection of how one defines fairness in DIF…
Equating a Large-Scale Writing Assessment Using Pairwise Comparisons of Performances
ERIC Educational Resources Information Center
Humphry, Stephen M.; McGrane, Joshua A.
2015-01-01
This paper presents a method for equating writing assessments using pairwise comparisons which does not depend upon conventional common-person or common-item equating designs. Pairwise comparisons have been successfully applied in the assessment of open-ended tasks in English and other areas such as visual art and philosophy. In this paper,…
A class of covariate-dependent spatiotemporal covariance functions.
Reich, Brian J; Eidsvik, Jo; Guindani, Michele; Nail, Amy J; Schmidt, Alexandra M
2011-12-01
In geostatistics, it is common to model spatially distributed phenomena through an underlying stationary and isotropic spatial process. However, these assumptions are often untenable in practice because of the influence of local effects in the correlation structure. Therefore, it has been of prolonged interest in the literature to provide flexible and effective ways to model non-stationarity in the spatial effects. Arguably, due to the local nature of the problem, we might envision that the correlation structure would be highly dependent on local characteristics of the domain of study, namely the latitude, longitude and altitude of the observation sites, as well as other locally defined covariate information. In this work, we provide a flexible and computationally feasible way for allowing the correlation structure of the underlying processes to depend on local covariate information. We discuss the properties of the induced covariance functions and discuss methods to assess its dependence on local covariate information by means of a simulation study and the analysis of data observed at ozone-monitoring stations in the Southeast United States.
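One concrete construction in this spirit is a Gibbs-type kernel whose length scale varies with a local covariate; a minimal sketch, with all parameter choices illustrative rather than the paper's model:

```python
import numpy as np

def cov_fn(s1, s2, z1, z2, ell0=1.0, alpha=0.5):
    """Gibbs non-stationary covariance: length scale depends on local covariates z."""
    l1 = ell0 * np.exp(alpha * z1)          # site-specific length scales
    l2 = ell0 * np.exp(alpha * z2)
    pref = np.sqrt(2 * l1 * l2 / (l1**2 + l2**2))
    d2 = np.sum((np.asarray(s1) - np.asarray(s2)) ** 2)
    return pref * np.exp(-d2 / (l1**2 + l2**2))

# Correlation between two sites with different local covariate values.
c = cov_fn((0.0, 0.0), (1.0, 0.0), z1=0.2, z2=-0.1)
```

The prefactor keeps the function a valid covariance even when the two sites carry different length scales, so local covariates (altitude, say) can stretch or shrink the correlation range without breaking positive definiteness.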
REFINING GENETICALLY INFERRED RELATIONSHIPS USING TREELET COVARIANCE SMOOTHING
Crossett, Andrew; Lee, Ann B.; Klei, Lambertus; Devlin, Bernie; Roeder, Kathryn
2013-01-01
Recent technological advances coupled with large sample sets have uncovered many factors underlying the genetic basis of traits and the predisposition to complex disease, but much is left to discover. A common thread to most genetic investigations is familial relationships. Close relatives can be identified from family records, and more distant relatives can be inferred from large panels of genetic markers. Unfortunately these empirical estimates can be noisy, especially regarding distant relatives. We propose a new method for denoising genetically-inferred relationship matrices by exploiting the underlying structure due to hierarchical groupings of correlated individuals. The approach, which we call Treelet Covariance Smoothing, employs a multiscale decomposition of covariance matrices to improve estimates of pairwise relationships. On both simulated and real data, we show that smoothing leads to better estimates of the relatedness amongst distantly related individuals. We illustrate our method with a large genome-wide association study and estimate the "heritability" of body mass index quite accurately. Traditionally heritability, defined as the fraction of the total trait variance attributable to additive genetic effects, is estimated from samples of closely related individuals using random effects models. We show that by using smoothed relationship matrices we can estimate heritability using population-based samples. Finally, while our methods have been developed for refining genetic relationship matrices and improving estimates of heritability, they have much broader potential application in statistics. Most notably, for error-in-variables random effects models and settings that require regularization of matrices with block or hierarchical structure. PMID:24587841
76 FR 47240 - ``Add Us In'' Initiative
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-04
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF LABOR Office of Disability Employment Policy ``Add Us In'' Initiative AGENCY: Office of Disability Employment Policy, Department of Labor. Announcement Type: New Notice of Availability of Funds and Solicitation for Grant Applications (SGA) for...
75 FR 45164 - ``Add Us In'' Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-02
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF LABOR Office of the Assistant Secretary for Office of Disability Employment Policy ``Add Us In'' Program AGENCY: Office of Disability Employment Policy, Department of Labor. Announcement Type: New Notice of Availability of Funds and Solicitation for...
Shift Would Add Burden on Principals
ERIC Educational Resources Information Center
Killion, Joellen
2004-01-01
In this article, the author describes how the transition from district-centered to school-based staff development can add to the burden on principals. This column first presents a brief review of the efforts of Braxton Hinsdale, a staff development director who advocated for moving professional development resources from the district level to the…
Selected Perspectives on ADD and ADHD.
ERIC Educational Resources Information Center
Porter, Louise
1997-01-01
Offers an overview of ADD and ADHD, their causes and long-term prognoses, including the complexities of the conditions, the incomplete knowledge about them, and the difficulties of diagnosis during early childhood. Summarizes assessment and treatment options and concludes that the conditions have so many secondary effects that designing an…
Educational Interventions for Students with ADD.
ERIC Educational Resources Information Center
Salend, Spencer J.; Elhoweris, Hala; van Garderen, Delinda
2003-01-01
Principles of educational interventions for students with attention deficit disorder (ADD) include: (1) giving complete and thorough directions; (2) individualizing in-class and homework assignments; (3) motivating students; (4) promoting active responding and monitoring understanding; (5) employing content enhancements; (6) offering learning…
Probabilistic pairwise Markov models: application to prostate cancer detection
NASA Astrophysics Data System (ADS)
Monaco, James; Tomaszewski, John E.; Feldman, Michael D.; Moradi, Mehdi; Mousavi, Parvin; Boag, Alexander; Davidson, Chris; Abolmaesumi, Purang; Madabhushi, Anant
2009-02-01
Markov Random Fields (MRFs) provide a tractable means for incorporating contextual information into a Bayesian framework. This contextual information is modeled using multiple local conditional probability density functions (LCPDFs), which the MRF framework implicitly combines into a single joint probability density function (JPDF) that describes the entire system. However, only LCPDFs of certain functional forms are consistent, meaning they reconstitute a valid JPDF. These forms are specified by the Gibbs-Markov equivalence theorem, which indicates that the JPDF, and hence the LCPDFs, should be representable as a product of potential functions (i.e. Gibbs distributions). Unfortunately, potential functions are mathematical abstractions that lack intuition; consequently, constructing LCPDFs through their selection becomes an ad hoc procedure, usually resulting in generic and/or heuristic models. In this paper we demonstrate that under certain conditions the LCPDFs can be formulated in terms of quantities that are both meaningful and descriptive: probability distributions. Using probability distributions instead of potential functions enables us to construct consistent LCPDFs whose modeling capabilities are both more intuitive and expansive than typical MRF models. As an example, we compare the efficacy of our so-called probabilistic pairwise Markov models (PPMMs) to the prevalent Potts model by incorporating both into a novel computer aided diagnosis (CAD) system for detecting prostate cancer in whole-mount histological sections. Using the Potts model, the CAD system is able to detect cancerous glands with a specificity of 0.82 and a sensitivity of 0.71; its area under the receiver operating characteristic (ROC) curve is 0.83. If instead the PPMM is employed, the sensitivity (with specificity held fixed) and AUC increase to 0.77 and 0.87.
Covariation Neglect among Novice Investors
ERIC Educational Resources Information Center
Hedesstrom, Ted Martin; Svedsater, Henrik; Garling, Tommy
2006-01-01
In 4 experiments, undergraduates made hypothetical investment choices. In Experiment 1, participants paid more attention to the volatility of individual assets than to the volatility of aggregated portfolios. The results of Experiment 2 show that most participants diversified even when this increased risk because of covariation between the returns…
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
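The core idea of bounding the condition number can be sketched by truncating the sample eigenvalues into an interval [u, κu]; the choice of the floor u below is illustrative, not the paper's maximum-likelihood solution:

```python
import numpy as np

def cond_reg_cov(S, kappa=10.0):
    """Clip sample eigenvalues into [u, kappa*u] so cond(estimate) <= kappa."""
    vals, vecs = np.linalg.eigh(S)
    u = max(vals.max() / kappa, float(np.median(vals)))  # illustrative floor choice
    clipped = np.clip(vals, u, kappa * u)
    return (vecs * clipped) @ vecs.T                     # V diag(clipped) V^T

# "Large p, small n": 20 samples of a 50-dimensional signal give a
# rank-deficient (hence ill-conditioned) sample covariance.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))
S = np.cov(X, rowvar=False)
Sigma = cond_reg_cov(S)
```

Because every eigenvalue of `Sigma` lies in [u, κu], the estimator is invertible and well-conditioned by construction, even though `S` itself is singular here.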
Cross-Section Covariance Data Processing with the AMPX Module PUFF-IV
Wiarda, Dorothea; Leal, Luiz C; Dunn, Michael E
2011-01-01
The ENDF community is endeavoring to release an updated version of the ENDF/B-VII library (ENDF/B-VII.1). In the new release several new evaluations containing covariance information have been added, as the community strives to add covariance information for use in programs like the TSUNAMI (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation) sequence of SCALE (Ref 1). The ENDF/B formatted files are processed into libraries to be used in transport calculations using the AMPX code system (Ref 2) or the NJOY code system (Ref 3). Both codes contain modules to process covariance matrices: PUFF-IV for AMPX and ERRORR in the case of NJOY. While the cross section processing capability between the two code systems has been widely compared, the same is not true for the covariance processing. This paper compares the results for the two codes using the pre-release version of ENDF/B-VII.1.
Understanding covariate shift in model performance
McGaughey, Georgia; Walters, W. Patrick; Goldman, Brian
2016-01-01
Three (3) different methods (logistic regression, covariate shift and k-NN) were applied to five (5) internal datasets and one (1) external, publicly available dataset where covariate shift existed. In all cases, k-NN's performance was inferior to either logistic regression or covariate shift. Surprisingly, there was no obvious advantage to using covariate shift to reweight the training data in the examined datasets. PMID:27803797
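A common way to implement covariate-shift reweighting, sketched here under the assumption that a probabilistic classifier has been trained to separate training from test inputs, is to form density-ratio weights w(x) = p_test(x)/p_train(x) ∝ P(test|x)/P(train|x):

```python
import numpy as np

def shift_weights(p_test_given_x):
    """Density-ratio importance weights from a train-vs-test classifier's outputs."""
    return p_test_given_x / (1.0 - p_test_given_x)

# Hypothetical classifier outputs P(test | x) for three training points:
probs = np.array([0.5, 0.8, 0.2])
w = shift_weights(probs)   # training points that look like test data get upweighted
```

The reweighted training loss then emphasizes the region of input space the test set actually occupies, which is the adjustment whose benefit the study found to be smaller than expected.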
Are Maxwell's equations Lorentz-covariant?
NASA Astrophysics Data System (ADS)
Redžić, D. V.
2017-01-01
It is stated in many textbooks that Maxwell's equations are manifestly covariant when written down in tensorial form. We recall that the tensorial form of Maxwell's equations does not by itself secure their tensorial content; they become covariant by postulating certain transformation properties of the field functions. That fact should be stressed when teaching about the covariance of Maxwell's equations.
Lorentz-covariant dissipative Lagrangian systems
NASA Technical Reports Server (NTRS)
Kaufman, A. N.
1985-01-01
The concept of dissipative Hamiltonian system is converted to Lorentz-covariant form, with evolution generated jointly by two scalar functionals, the Lagrangian action and the global entropy. A bracket formulation yields the local covariant laws of energy-momentum conservation and of entropy production. The formalism is illustrated by a derivation of the covariant Landau kinetic equation.
Generalization of Pairwise Models to non-Markovian Epidemics on Networks
NASA Astrophysics Data System (ADS)
Kiss, Istvan Z.; Röst, Gergely; Vizi, Zsolt
2015-08-01
In this Letter, a generalization of pairwise models to non-Markovian epidemics on networks is presented. For the case of infectious periods of fixed length, the resulting pairwise model is a system of delay differential equations, which shows excellent agreement with results based on stochastic simulations. Furthermore, we analytically compute a new R0-like threshold quantity and an analytical relation between this and the final epidemic size. Additionally, we show that the pairwise model and the analytic results can be generalized to an arbitrary distribution of the infectious times, using integro-differential equations, and this leads to a general expression for the final epidemic size. By showing the rigorous link between non-Markovian dynamics and pairwise delay differential equations, we provide the framework for a more systematic understanding of non-Markovian dynamics.
NASA Astrophysics Data System (ADS)
Bianchi, Davide; Chiesa, Matteo; Guzzo, Luigi
2016-10-01
As a step towards a more accurate modelling of redshift-space distortions (RSD) in galaxy surveys, we develop a general description of the probability distribution function of galaxy pairwise velocities within the framework of the so-called streaming model. For a given galaxy separation, this function can be described as a superposition of virtually infinite local distributions. We characterize these in terms of their moments and then consider the specific case in which they are Gaussian functions, each with its own mean μ and variance σ². Based on physical considerations, we make the further crucial assumption that these two parameters are in turn distributed according to a bivariate Gaussian, with its own mean and covariance matrix. Tests using numerical simulations explicitly show that with this compact description one can correctly model redshift-space distortions on all scales, fully capturing the overall linear and nonlinear dynamics of the galaxy flow at different separations. In particular, we naturally obtain Gaussian/exponential, skewed/unskewed distribution functions, depending on separation, as observed in simulations and data. Also, the recently proposed single-Gaussian description of redshift-space distortions is included in this model as a limiting case, when the bivariate Gaussian is collapsed to a two-dimensional Dirac delta function. More work is needed, but these results indicate a very promising path to make definitive progress in our program to improve RSD estimators.
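The bivariate-Gaussian superposition is straightforward to sample: draw local means and dispersions (μ, σ) from a bivariate Gaussian, then draw a velocity from each local Gaussian. All numbers below are illustrative, not fitted values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hyperparameters at one separation: means of (mu, sigma) and
# their 2x2 covariance matrix (the bivariate Gaussian of the model).
mean_mu, mean_sig = -1.0, 3.0
cov = np.array([[0.5, 0.1],
                [0.1, 0.2]])

mu, sig = rng.multivariate_normal([mean_mu, mean_sig], cov, size=100_000).T
sig = np.abs(sig)              # keep local dispersions positive
v = rng.normal(mu, sig)        # draws from the superposed pairwise-velocity PDF
```

Depending on the spread of μ relative to σ, the resulting distribution of `v` ranges from nearly Gaussian to skewed with exponential-like tails; collapsing `cov` toward zero recovers the single-Gaussian limiting case mentioned above.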
Roelens, Baptiste; Schvarzstein, Mara; Villeneuve, Anne M.
2015-01-01
Meiotic chromosome segregation requires pairwise association between homologs, stabilized by the synaptonemal complex (SC). Here, we investigate factors contributing to pairwise synapsis by examining meiosis in polyploid worms. We devised a strategy, based on transient inhibition of cohesin function, to generate polyploid derivatives of virtually any Caenorhabditis elegans strain. We exploited this strategy to investigate the contribution of recombination to pairwise synapsis in tetraploid and triploid worms. In otherwise wild-type polyploids, chromosomes first sort into homolog groups, then multipartner interactions mature into exclusive pairwise associations. Pairwise synapsis associations still form in recombination-deficient tetraploids, confirming a propensity for synapsis to occur in a strictly pairwise manner. However, the transition from multipartner to pairwise association was perturbed in recombination-deficient triploids, implying a role for recombination in promoting this transition when three partners compete for synapsis. To evaluate the basis of synapsis partner preference, we generated polyploid worms heterozygous for normal sequence and rearranged chromosomes sharing the same pairing center (PC). Tetraploid worms had no detectable preference for identical partners, indicating that PC-adjacent homology drives partner choice in this context. In contrast, triploid worms exhibited a clear preference for identical partners, indicating that homology outside the PC region can influence partner choice. Together, our findings suggest a two-phase model for C. elegans synapsis: an early phase, in which initial synapsis interactions are driven primarily by recombination-independent assessment of homology near PCs and by a propensity for pairwise SC assembly, and a later phase in which mature synaptic interactions are promoted by recombination. PMID:26500263
ERIC Educational Resources Information Center
Stark, Stephen; Chernyshenko, Oleksandr S.; Drasgow, Fritz
2005-01-01
This article proposes an item response theory (IRT) approach to constructing and scoring multidimensional pairwise preference items. Individual statements are administered and calibrated using a unidimensional single-stimulus model. Tests are created by combining multidimensional items with a small number of unidimensional pairings needed to…
Covariance Evaluation Methodology for Neutron Cross Sections
Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.
2008-09-01
We present the NNDC-BNL methodology for estimating neutron cross section covariances in thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are the Atlas of Neutron Resonances, the nuclear reaction code EMPIRE, and a Bayesian code implementing the Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.
Phase-covariant quantum benchmarks
Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.
2009-05-15
We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences from the standard state-estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.
User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 1: General ADD code description
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Hankins, G. B., Jr.; Edwards, D. E.
1982-01-01
This User's Manual contains a complete description of the computer codes known as the AXISYMMETRIC DIFFUSER DUCT code or ADD code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculation with experimental flows. The input/output and general use of the code is described in the first volume. The second volume contains a detailed description of the code including the global structure of the code, list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code which generates coordinate systems for arbitrary axisymmetric ducts.
User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 3: ADD code coordinate generator
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Hankins, G. B., Jr.; Edwards, D. E.
1982-01-01
This User's Manual contains a complete description of the computer codes known as the Axisymmetric Diffuser Duct (ADD) code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculation with experimental flows. The input/output and general use of the code is described in the first volume. The second volume contains a detailed description of the code including the global structure of the code, list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code which generates coordinate systems for arbitrary axisymmetric ducts.
Using Joint Interviews to Add Analytic Value.
Polak, Louisa; Green, Judith
2016-10-01
Joint interviewing has been frequently used in health research, and is the subject of a growing methodological literature. We review this literature, and build on it by drawing on a case study of how people make decisions about taking statins. This highlights two ways in which a dyadic approach to joint interviewing can add analytic value compared with individual interviewing. First, the analysis of interaction within joint interviews can help to explicate tacit knowledge and to illuminate the range of often hard-to-access resources that are drawn upon in making decisions. Second, joint interviews mitigate some of the weaknesses of interviewing as a method for studying practices; we offer a cautious defense of the often-tacit assumption that the "naturalness" of joint interviews strengthens their credibility as the basis for analytic inferences. We suggest that joint interviews are a particularly appropriate method for studying complex shared practices such as making health decisions.
Independents add gas reserves, forego romance
Gill, D.
1981-08-01
Incentive pricing for low-permeability reservoirs and tax advantages for drilling them are 2 big reasons why more independents may start making a special effort to add gas reserves to their inventories. If so, it will be a change from past practices, which saw independents build up big gas positions by circumstance rather than by intention. There are always major refiners ready and willing to buy whole crude oil reservoirs from small producers, but purchasers willing to take gas fields in a single investment are few and far between. Lower-than-normal return on equity during the first 20 years, plus the heavy front-end cost of a frac necessary to produce the tight gas might dissuade independents from drilling tight gas sands, but those liabilities are offset by the higher price tight gas gets and the peculiar tax advantages of exploring for it that make a nice fit with the small operator's way of doing business.
Relativistic covariance of Ohm's law
NASA Astrophysics Data System (ADS)
Starke, R.; Schober, G. A. H.
2016-04-01
The derivation of Lorentz-covariant generalizations of Ohm's law has been a long-term issue in theoretical physics with deep implications for the study of relativistic effects in optical and atomic physics. In this article, we propose an alternative route to this problem, which is motivated by the tremendous progress in first-principles materials physics in general and ab initio electronic structure theory in particular. We start from the most general, Lorentz-covariant first-order response law, which is written in terms of the fundamental response tensor χ^μ_ν relating induced four-currents to external four-potentials. By showing the equivalence of this description to Ohm's law, we prove the validity of Ohm's law in every inertial frame. We further use the universal relation between χ^μ_ν and the microscopic conductivity tensor σ_kℓ to derive a fully relativistic transformation law for the latter, which includes all effects of anisotropy and relativistic retardation. In the special case of a constant, scalar conductivity, this transformation law can be used to rederive a standard textbook generalization of Ohm's law.
Children with Attention Deficit Disorders. ADD Fact Sheet.
ERIC Educational Resources Information Center
Parker, Harvey C.
This fact sheet summarizes basic information on Attention Deficit Disorders (ADD), including prevalence and characteristics, causes, identification, treatment, outcomes, and suggestions. Children with ADD comprise approximately 3-5 percent of the school age population, with boys significantly outnumbering girls. Of 14 characteristics of ADD, the…
Pairwise diversity and tMRCA as potential markers for HIV infection recency
Moyo, Sikhulile; Wilkinson, Eduan; Vandormael, Alain; Wang, Rui; Weng, Jia; Kotokwe, Kenanao P.; Gaseitsiwe, Simani; Musonda, Rosemary; Makhema, Joseph; Essex, Max; Engelbrecht, Susan; de Oliveira, Tulio; Novitsky, Vladimir
2017-01-01
Abstract Intrahost human immunodeficiency virus (HIV)-1 diversity increases linearly over time. We assessed the extent to which mean pairwise distances and the time to the most recent common ancestor (tMRCA) inferred from intrahost HIV-1C env sequences were associated with the estimated time of HIV infection. Data from a primary HIV-1C infection study in Botswana were used for this analysis (N = 42). A total of 2540 HIV-1C env gp120 variable loop region 1 to conserved region 5 (V1C5) viral sequences were generated by single genome amplification and sequencing, with an average of 61 viral sequences per participant and 11 sequences per time point per participant. Raw pairwise distances were calculated for each time point and participant using the ape package in R software. The tMRCA was estimated using phylogenetic inference implemented in Bayesian Evolutionary Analysis by Sampling Trees v1.8.2. Pairwise distances and tMRCA were significantly associated with the estimated time since HIV infection (both P < 0.001). Taking into account multiplicity of HIV infection strengthened these associations. HIV-1C env-based pairwise distances and tMRCA can be used as potential markers for HIV recency. However, the tMRCA estimates demonstrated no advantage over the pairwise distance estimates. PMID:28178146
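Raw pairwise distances of the kind computed here (cf. ape's dist.dna with the raw model) can be sketched in Python. This is a minimal illustration with a simple gap/ambiguity convention, not the study's actual pipeline:

```python
from itertools import combinations

def p_distance(a, b):
    """Raw pairwise p-distance: fraction of differing sites,
    ignoring positions where either sequence has a gap or an
    ambiguous base (only A/C/G/T columns are compared)."""
    cols = [(x, y) for x, y in zip(a, b) if x in "ACGT" and y in "ACGT"]
    if not cols:
        return 0.0
    return sum(x != y for x, y in cols) / len(cols)

def mean_pairwise_distance(seqs):
    """Mean raw distance over all unordered sequence pairs,
    e.g. all sequences from one participant at one time point."""
    dists = [p_distance(a, b) for a, b in combinations(seqs, 2)]
    return sum(dists) / len(dists)

# Toy alignment of three short sequences (illustrative only).
seqs = ["ACGTACGT", "ACGTACGA", "ACGAACGA"]
print(round(mean_pairwise_distance(seqs), 4))
```

Under the linear-diversification assumption of the abstract, this statistic would be regressed against estimated time since infection.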
Non-pairwise additivity of the leading-order dispersion energy.
Hollett, Joshua W
2015-02-28
The leading-order (i.e., dipole-dipole) dispersion energy is calculated for one-dimensional (1D) and two-dimensional (2D) infinite lattices, and an infinite 1D array of infinitely long lines, of doubly occupied locally harmonic wells. The dispersion energy is decomposed into pairwise and non-pairwise additive components. By varying the force constant and separation of the wells, the non-pairwise additive contribution to the dispersion energy is shown to depend on the overlap of density between neighboring wells. As well separation is increased, the non-pairwise additivity of the dispersion energy decays. The different rates of decay for 1D and 2D lattices of wells are explained in terms of a Jacobian effect that influences the number of nearest neighbors. For an array of infinitely long lines of wells spaced 5 bohrs apart, and an inter-well spacing of 3 bohrs within a line, the non-pairwise additive component of the leading-order dispersion energy is -0.11 kJ mol(-1) well(-1), which is 7% of the total. The polarizability of the wells and the density overlap between them are small in comparison to that of the atomic densities that arise from the molecular density partitioning used in post-density-functional theory (DFT) damped dispersion corrections, or DFT-D methods. Therefore, the nonadditivity of the leading-order dispersion observed here is a conservative estimate of that in molecular clusters.
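The pairwise-additive part of such a lattice sum is straightforward to reproduce. The sketch below assumes a generic -C6/r^6 pair interaction on an infinite 1D lattice, with illustrative constants rather than the paper's harmonic-well model:

```python
import math

def pairwise_dispersion_per_well(c6, d, n_neighbors=10_000):
    """Pairwise-additive dispersion energy per well for an infinite 1D
    lattice with spacing d and a -C6/r^6 pair interaction. Each well
    interacts with neighbors at +/- j*d; assigning half of each pair
    energy to each well gives the one-sided sum over j >= 1
    (truncated here at n_neighbors)."""
    return -c6 / d**6 * sum(1.0 / j**6 for j in range(1, n_neighbors + 1))

# The lattice sum converges to the Riemann zeta value
# zeta(6) = pi^6 / 945, so the per-well energy is -C6 * zeta(6) / d^6.
c6, d = 1.0, 3.0  # illustrative values, not fitted to the paper's wells
e = pairwise_dispersion_per_well(c6, d)
exact = -c6 * math.pi**6 / 945 / d**6
print(abs(e - exact) < 1e-15)
```

The non-pairwise additive component discussed in the abstract is precisely what such a pair sum misses.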
COVARIANCE ASSISTED SCREENING AND ESTIMATION.
Ke, By Tracy; Jin, Jiashun; Fan, Jianqing
2014-11-01
Consider a linear model Y = X β + z, where X = Xn,p and z ~ N(0, In ). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X'X is non-sparse but sparsifiable by a finite order linear filter. We focus on the regime where signals are both rare and weak so that successful variable selection is very challenging but is still possible. We approach this problem by a new procedure called the Covariance Assisted Screening and Estimation (CASE). CASE first uses a linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we know where they are!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, which is a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives. For any procedure β̂ for variable selection, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.
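The premise that a Gram matrix can be "non-sparse but sparsifiable by a finite order linear filter" can be illustrated with a toy example (not the paper's construction): a dense AR(1)-type Gram matrix has an exactly tridiagonal inverse, so a first-order whitening filter renders the associated dependence structure sparse.

```python
import numpy as np

# Toy illustration of "sparsifiable": the Gram matrix G_ij = rho^|i-j|
# is dense, yet its inverse (the precision matrix after a first-order
# whitening filter) is exactly tridiagonal.
p, rho = 8, 0.7
idx = np.arange(p)
G = rho ** np.abs(idx[:, None] - idx[None, :])
P = np.linalg.inv(G)

# Entries beyond the first off-diagonal vanish up to round-off.
off = np.triu(P, k=2)
print(np.max(np.abs(off)) < 1e-10)
```

In CASE the analogous filtered covariance induces the sparse graph that guides the screening step.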
Covariant density functional theory: The role of the pion
Lalazissis, G. A.; Karatzikos, S.; Serra, M.; Otsuka, T.; Ring, P.
2009-10-15
We investigate the role of the pion in covariant density functional theory. Starting from conventional relativistic mean field (RMF) theory with a nonlinear coupling of the σ meson and without exchange terms, we add pions with a pseudovector coupling to the nucleons in relativistic Hartree-Fock approximation. In order to take into account the change of the pion field in the nuclear medium, the effective coupling constant of the pion is treated as a free parameter. It is found that the inclusion of the pion in this sort of density functional does not destroy the overall description of the bulk properties by RMF. On the other hand, the noncentral contribution of the pion (tensor coupling) does have effects on single particle energies and on binding energies of certain nuclei.
Computation of transform domain covariance matrices
NASA Technical Reports Server (NTRS)
Fino, B. J.; Algazi, V. R.
1975-01-01
It is often of interest in applications to compute the covariance matrix of a random process transformed by a fast unitary transform. Here, the recursive definition of fast unitary transforms is used to derive recursive relations for the covariance matrices of the transformed process. These relations lead to fast methods of computation of covariance matrices and to substantial reductions of the number of arithmetic operations required.
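The identity these recursions exploit is that for y = Tx with T unitary, the transformed covariance is C_y = T C_x T^H. A brief numpy sketch, with the normalized DFT standing in for a generic fast unitary transform:

```python
import numpy as np

# Covariance of a transformed process: if y = T x with T unitary,
# then C_y = T C_x T^H.
n = 8
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
Cx = A @ A.T                                   # symmetric PSD covariance

T = np.fft.fft(np.eye(n)) / np.sqrt(n)         # unitary DFT matrix
Cy = T @ Cx @ T.conj().T                       # transform-domain covariance

# A unitary transform preserves total variance: the trace is invariant.
print(np.isclose(np.trace(Cy).real, np.trace(Cx)))
```

The fast methods in the paper avoid forming T explicitly by using the recursive structure of the transform; the dense product above is only the reference computation.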
Shrinkage approach for EEG covariance matrix estimation.
Beltrachini, Leandro; von Ellenrieder, Nicolas; Muravchik, Carlos H
2010-01-01
We present a shrinkage estimator for the EEG spatial covariance matrix of the background activity. We show that such an estimator has some advantages over the maximum likelihood and sample covariance estimators when the number of available data to carry out the estimation is low. We find sufficient conditions for the consistency of the shrinkage estimators and results concerning their numerical stability. We compare several shrinkage schemes and show how to improve the estimator by incorporating known structure of the covariance matrix.
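A minimal shrinkage-toward-identity sketch follows. It uses a fixed, hand-picked shrinkage weight for illustration, whereas a practical estimator (as in the abstract) would choose the weight from the data:

```python
import numpy as np

def shrink_covariance(X, lam):
    """Shrink the sample covariance toward a scaled identity:
    (1 - lam) * S + lam * (tr(S)/p) * I. The fixed weight lam is
    illustrative; data-driven choices are what make shrinkage
    estimators practical when samples are scarce."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    mu = np.trace(S) / p
    return (1.0 - lam) * S + lam * mu * np.eye(p)

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 30))     # fewer samples than channels
S = np.cov(X, rowvar=False)           # rank-deficient sample covariance
C = shrink_covariance(X, lam=0.2)

# Shrinkage restores positive definiteness and numerical stability.
print(np.linalg.eigvalsh(S).min() < 1e-10, np.linalg.eigvalsh(C).min() > 0)
```

This is the regime the abstract targets: few data relative to the number of EEG channels, where the sample covariance is singular.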
Frailty models with missing covariates.
Herring, Amy H; Ibrahim, Joseph G; Lipsitz, Stuart R
2002-03-01
We present a method for estimating the parameters in random effects models for survival data when covariates are subject to missingness. Our method is more general than the usual frailty model as it accommodates a wide range of distributions for the random effects, which are included as an offset in the linear predictor in a manner analogous to that used in generalized linear mixed models. We propose using a Monte Carlo EM algorithm along with the Gibbs sampler to obtain parameter estimates. This method is useful in reducing the bias that may be incurred using complete-case methods in this setting. The methodology is applied to data from Eastern Cooperative Oncology Group melanoma clinical trials in which observations were believed to be clustered and several tumor characteristics were not always observed.
Lorentz covariant κ-Minkowski spacetime
Dąbrowski, Ludwik; Godliński, Michał; Piacitelli, Gherardo
2010-06-15
In recent years, different views on the interpretation of Lorentz covariance of noncommuting coordinates have been discussed. By a general procedure, we construct the minimal canonical central covariantization of the κ-Minkowski spacetime. Here, undeformed Lorentz covariance is implemented by unitary operators, in the presence of two dimensionful parameters. We then show that, though the usual κ-Minkowski spacetime is covariant under deformed (or twisted) Lorentz action, the resulting framework is equivalent to taking a noncovariant restriction of the covariantized model. We conclude with some general comments on the approach of deformed covariance.
Balancing continuous covariates based on Kernel densities.
Ma, Zhenjun; Hu, Feifang
2013-03-01
The balance of important baseline covariates is essential for convincing treatment comparisons. Stratified permuted block design and minimization are the two most commonly used balancing strategies, both of which require the covariates to be discrete. Continuous covariates are typically discretized in order to be included in the randomization scheme. But breakdown of continuous covariates into subcategories often changes the nature of the covariates and makes distributional balance unattainable. In this article, we propose to balance continuous covariates based on Kernel density estimations, which keeps the continuity of the covariates. Simulation studies show that the proposed Kernel-Minimization can achieve distributional balance of both continuous and categorical covariates, while also keeping the group size well balanced. It is also shown that the Kernel-Minimization is less predictable than stratified permuted block design and minimization. Finally, we apply the proposed method to redesign the NINDS trial, which has been a source of controversy due to imbalance of continuous baseline covariates. Simulation shows that imbalances such as those observed in the NINDS trial can be generally avoided through the implementation of the new method.
Tartakovsky, Alexandre M.; Panchenko, Alexander
2016-01-01
We present a novel formulation of the Pairwise Force Smoothed Particle Hydrodynamics Model (PF-SPH) and use it to simulate two- and three-phase flows in bounded domains. In the PF-SPH model, the Navier-Stokes equations are discretized with the Smoothed Particle Hydrodynamics (SPH) method and the Young-Laplace boundary condition at the fluid-fluid interface and the Young boundary condition at the fluid-fluid-solid interface are replaced with pairwise forces added into the Navier-Stokes equations. We derive a relationship between the parameters in the pairwise forces and the surface tension and static contact angle. Next, we demonstrate the accuracy of the model under static and dynamic conditions. Finally, to demonstrate the capabilities and robustness of the model we use it to simulate flow of three fluids in a porous material.
Pairwise FCM based feature weighting for improved classification of vertebral column disorders.
Unal, Yavuz; Polat, Kemal; Erdinc Kocer, H
2014-03-01
In this paper, an innovative data pre-processing method to improve the classification performance and to determine automatically the vertebral column disorders including disk hernia (DH), spondylolisthesis (SL) and normal (NO) groups has been proposed. In the classification of vertebral column disorders' dataset with three classes, a pairwise fuzzy C-means (FCM) based feature weighting method has been proposed. In this method, first of all, the vertebral column dataset has been grouped as pairwise (DH-SL, DH-NO, and SL-NO) and then these pairwise groups have been weighted using a FCM based feature set. These weighted groups have been classified using classifier algorithms including multilayer perceptron (MLP), k-nearest neighbor (k-NN), Naive Bayes, and support vector machine (SVM). The general classification performance has been obtained by averaging of classification accuracies obtained from pairwise classifier algorithms. To evaluate the performance of the proposed method, the classification accuracy, sensitivity, specificity, ROC curves, and f-measure have been used. Without the proposed feature weighting, the obtained f-measure values were 0.7738 for MLP classifier, 0.7021 for k-NN, 0.7263 for Naive Bayes, and 0.7298 for SVM classifier algorithms in the classification of vertebral column disorders' dataset with three classes. With the pairwise fuzzy C-means based feature weighting method, the obtained f-measure values were 0.9509 for MLP, 0.9313 for k-NN, 0.9603 for Naive Bayes, and 0.9468 for SVM classifier algorithms. The experimental results demonstrated that the proposed pairwise fuzzy C-means based feature weighting method is robust and effective in the classification of vertebral column disorders' dataset. In the future, this method could be used confidently for medical datasets with more classes.
SDT: a virus classification tool based on pairwise sequence alignment and identity calculation.
Muhire, Brejnev Muhizi; Varsani, Arvind; Martin, Darren Patrick
2014-01-01
The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV). There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed based on multiple sequence alignments rather than on multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap-characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present Sequence Demarcation Tool (SDT), a free user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions of the program are available for a variety of different operating systems (including a parallel version for cluster computing platforms).
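The gap-handling issue raised here is easy to see in a toy pairwise-identity function (a generic illustration, not SDT's algorithm):

```python
def pairwise_identity(a, b, count_gaps=True):
    """Percent identity of two aligned, equal-length sequences.
    count_gaps=True scores gap columns as mismatches; excluding them
    instead (count_gaps=False) is another common convention. As the
    abstract notes, this methodological choice alone can change how
    borderline sequence pairs are classified."""
    assert len(a) == len(b)
    if count_gaps:
        cols = list(zip(a, b))
    else:
        cols = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    matches = sum(x == y and x != "-" for x, y in cols)
    return 100.0 * matches / len(cols)

print(pairwise_identity("ACGT-ACGT", "ACGTTACGA"))
print(pairwise_identity("ACGT-ACGT", "ACGTTACGA", count_gaps=False))
```

The same pair of sequences scores differently under the two conventions, which is the consistency problem a standardized tool aims to remove.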
Effects of pairwise versus many-body forces on high-stress plastic deformation
NASA Astrophysics Data System (ADS)
Holian, B. L.; Voter, A. F.; Wagner, N. J.; Ravelo, R. J.; Chen, S. P.; Hoover, W. G.; Hoover, C. G.; Hammerberg, J. E.; Dontje, T. D.
1991-03-01
We propose a model embedded-atom (many-body) potential and test it against an effective, density-independent, pairwise-additive potential in a variety of nonequilibrium molecular-dynamics simulations of plastic deformation under high stress. Even though both kinds of interactions have nearly the same equilibrium equation of state, the defect energies (i.e., vacancy formation and surface energies) are quite different. As a result, we observe significant qualitative differences in flow behavior between systems characterized by purely pairwise interactions versus higher-order many-body forces.
A Covariance Generation Methodology for Fission Product Yields
NASA Astrophysics Data System (ADS)
Terranova, N.; Serot, O.; Archier, P.; Vallet, V.; De Saint Jean, C.; Sumini, M.
2016-03-01
Recent safety and economical concerns for modern nuclear reactor applications have fed an outstanding interest in basic nuclear data evaluation improvement and completion. It has been immediately clear that the accuracy of our predictive simulation models was strongly affected by our knowledge of input data. Therefore strong efforts have been made to improve nuclear data and to generate complete and reliable uncertainty information able to yield proper uncertainty propagation on integral reactor parameters. Since in modern nuclear data banks (such as JEFF-3.1.1 and ENDF/B-VII.1) no correlations for fission yields are given, in the present work we propose a covariance generation methodology for fission product yields. The main goal is to reproduce the existing European library and to add covariance information to allow proper uncertainty propagation in depletion and decay heat calculations. To do so, we adopted the Generalized Least Square Method (GLSM) implemented in CONRAD (COde for Nuclear Reaction Analysis and Data assimilation), developed at CEA-Cadarache. Theoretical values employed in the Bayesian parameter adjustment are delivered thanks to a convolution of different models, representing several quantities in fission yield calculations: the Brosa fission modes for pre-neutron mass distribution, a simplified Gaussian model for prompt neutron emission probability, the Wahl systematics for charge distribution and the Madland-England model for the isomeric ratio. Some results will be presented for the thermal fission of U-235, Pu-239 and Pu-241.
NASA Astrophysics Data System (ADS)
Teo, Steven L. H.; Botsford, Louis W.; Hastings, Alan
2009-12-01
One of the motivations of the GLOBEC Northeast Pacific program is to understand the apparent inverse relationship between the increase in salmon catches in the Gulf of Alaska and concurrent declines in the California Current System (CCS). We therefore used coded wire tag (CWT) data to examine the spatial and temporal patterns of covariability in the survival of hatchery coho salmon along the coast from California to southeast Alaska between release years 1980 and 2004. There is substantial covariability in coho salmon survival between neighboring regions along the coast, and there is clear evidence for increased covariability within two main groups - a northern and southern group. The dividing line between the groups lies approximately at the north end of Vancouver Island. However, CWT survivals do not support inverse covariability in hatchery coho salmon survival between southeast Alaska and the CCS over this 25 year time span. Instead, the hatchery coho survival in southeast Alaska is relatively uncorrelated with coho survival in the California Current System on inter-annual time scales. The 50% correlation and e-folding scales (the distances at which the magnitude of correlation decreases to 50% and to e^{-1} (36.8%), respectively) of pairwise correlations between individual hatcheries were 150 and 217 km, which are smaller than those reported for sockeye, pink, and chum salmon. The 50% correlation scale of coho salmon is also substantially smaller than those reported for upwelling indices and sea surface temperature. There are also periods of 5-10 years with high covariability between adjacent regions on the scale of hundreds of km, which may be of biological and physical significance.
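Recovering a 50%-correlation or e-folding scale from pairwise correlations versus separation can be sketched as follows; a synthetic, noiseless exponential decay is assumed purely for illustration:

```python
import numpy as np

# Sketch: estimate a decorrelation length scale from pairwise
# correlations as a function of separation, assuming an exponential
# decay r(d) = exp(-d / L). Values are illustrative, not the CWT data.
L_true = 217.0                        # km, e-folding scale
d = np.linspace(10, 600, 40)          # pairwise separations, km
r = np.exp(-d / L_true)               # correlations at those separations

# Log-linear least-squares fit: log r = -d / L, so L = -1 / slope.
slope = np.polyfit(d, np.log(r), 1)[0]
L_hat = -1.0 / slope
d50 = L_hat * np.log(2.0)             # 50%-correlation scale

print(round(L_hat), round(d50))
```

With noisy empirical correlations one would fit the same model by least squares on the scatter of hatchery-pair correlations against distance.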
Covariance Structure Analysis of Ordinal Ipsative Data.
ERIC Educational Resources Information Center
Chan, Wai; Bentler, Peter M.
1998-01-01
Proposes a two-stage estimation method for the analysis of covariance structure models with ordinal ipsative data (OID). A goodness-of-fit statistic is given for testing the hypothesized covariance structure matrix, and simulation results show that the method works well with a large sample. (SLD)
Quality Quantification of Evaluated Cross Section Covariances
Varet, S.; Dossantos-Uzarralde, P.
2015-01-15
Presently, several methods are used to estimate the covariance matrix of evaluated nuclear cross sections. Because the resulting covariance matrices can be different according to the method used and according to the assumptions of the method, we propose a general and objective approach to quantify the quality of the covariance estimation for evaluated cross sections. The first step consists in defining an objective criterion. The second step is computation of the criterion. In this paper the Kullback-Leibler distance is proposed for the quality quantification of a covariance matrix estimation and its inverse. It is based on the distance to the true covariance matrix. A method based on the bootstrap is presented for the estimation of this criterion, which can be applied with most methods for covariance matrix estimation and without the knowledge of the true covariance matrix. The full approach is illustrated on the ⁸⁵Rb nucleus evaluations and the results are then used for a discussion on scoring and Monte Carlo approaches for covariance matrix estimation of the cross section evaluations.
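For zero-mean Gaussians, the Kullback-Leibler distance between two covariance matrices has a closed form; the sketch below implements the generic formula, not the paper's bootstrap estimator:

```python
import numpy as np

def kl_gaussian(S1, S2):
    """Kullback-Leibler divergence KL(N(0, S1) || N(0, S2)) between
    zero-mean Gaussians: 0.5 * (tr(S2^-1 S1) - k + ln det S2 - ln det S1).
    One way to score how far a covariance estimate S1 sits from a
    reference covariance S2."""
    k = S1.shape[0]
    S2_inv = np.linalg.inv(S2)
    _, logdet2 = np.linalg.slogdet(S2)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(S2_inv @ S1) - k + logdet2 - logdet1)

# Two toy 2x2 covariance estimates (illustrative values only).
A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[2.5, 0.1], [0.1, 1.2]])
print(kl_gaussian(A, A) < 1e-12, kl_gaussian(A, B) > 0)
```

Since the true covariance is unknown in practice, the paper replaces it with bootstrap replicates when evaluating this criterion.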
Group Theory of Covariant Harmonic Oscillators
ERIC Educational Resources Information Center
Kim, Y. S.; Noz, Marilyn E.
1978-01-01
A simple and concrete example for illustrating the properties of noncompact groups is presented. The example is based on the covariant harmonic-oscillator formalism in which the relativistic wave functions carry a covariant-probability interpretation. This can be used in a group theory course for graduate students who have some background in…
Position Error Covariance Matrix Validation and Correction
NASA Technical Reports Server (NTRS)
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
NASA Astrophysics Data System (ADS)
Bianchi, Davide; Chiesa, Matteo; Guzzo, Luigi
2015-01-01
As a step towards a more accurate modelling of redshift-space distortions (RSD) in galaxy surveys, we develop a general description of the probability distribution function of galaxy pairwise velocities within the framework of the so-called streaming model. For a given galaxy separation r, such a function can be described as a superposition of virtually infinite local distributions. We characterize these in terms of their moments and then consider the specific case in which they are Gaussian functions, each with its own mean μ and dispersion σ. Based on physical considerations, we make the further crucial assumption that these two parameters are in turn distributed according to a bivariate Gaussian, with its own mean and covariance matrix. Tests using numerical simulations explicitly show that with this compact description one can correctly model redshift-space distortions on all scales, fully capturing the overall linear and non-linear dynamics of the galaxy flow at different separations. In particular, we naturally obtain Gaussian/exponential, skewed/unskewed distribution functions, depending on separation as observed in simulations and data. Also, the recently proposed single-Gaussian description of RSD is included in this model as a limiting case, when the bivariate Gaussian is collapsed to a two-dimensional Dirac delta function. We also show how this description naturally allows for the Taylor expansion of 1 + ξ^S(s) around 1 + ξ^R(r), which leads to the Kaiser linear formula when truncated to second order, explicating its connection with the moments of the velocity distribution functions. More work is needed, but these results indicate a very promising path to make definitive progress in our programme to improve RSD estimators.
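The compact description above can be sketched numerically: draw (μ, σ) pairs from a bivariate Gaussian and superpose the corresponding local Gaussians. This is only an illustration of the model's structure; the parameter values below are made up, not fitted to any survey:

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_velocity_pdf(v, mean, cov, n_draws=5000):
    """Monte Carlo superposition of local Gaussians whose (mu, sigma) are
    themselves drawn from a bivariate Gaussian with the given mean/covariance."""
    mu, sigma = rng.multivariate_normal(mean, cov, size=n_draws).T
    sigma = np.abs(sigma)                      # dispersion must be positive
    v = np.atleast_1d(v)[:, None]
    local = np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return local.mean(axis=1)                  # average over the local distributions
```

A nonzero μ-σ covariance yields skewed profiles of the kind seen in simulations, while collapsing the bivariate Gaussian toward a delta function recovers the single-Gaussian limiting case mentioned in the abstract.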
Adjoints and Low-rank Covariance Representation
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.
2000-01-01
Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.
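The spectrum-dependence described above can be illustrated directly: eigen-truncation gives the best rank-r approximation of a covariance matrix, and its spectral-norm error is the first discarded eigenvalue. A minimal sketch with a synthetic, deliberately ill-conditioned forcing-to-error transformation (not the paper's dynamical system):

```python
import numpy as np

def low_rank_approx(P, r):
    """Best rank-r approximation of a symmetric PSD matrix P:
    keep the r largest eigenpairs (Eckart-Young)."""
    w, V = np.linalg.eigh(P)                 # eigenvalues in ascending order
    keep = np.argsort(w)[::-1][:r]
    return (V[:, keep] * w[keep]) @ V[:, keep].T

rng = np.random.default_rng(1)
B = rng.standard_normal((20, 20)) * np.exp(-np.arange(20) / 3.0)  # decaying column scales
P = B @ np.eye(20) @ B.T                     # error covariance = B Q B^T, white forcing Q

def rel_err(r):
    """Relative spectral-norm error of the rank-r representation."""
    return np.linalg.norm(P - low_rank_approx(P, r), 2) / np.linalg.norm(P, 2)
```

Because the transformation's column scales decay, the spectrum of P decays and a low rank suffices; a well-conditioned B would need nearly full rank, which is the conditioning point made in the abstract.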
Genetic diversity and species diversity of stream fishes covary across a land-use gradient.
Blum, Michael J; Bagley, Mark J; Walters, David M; Jackson, Suzanne A; Daniel, F Bernard; Chaloud, Deborah J; Cade, Brian S
2012-01-01
Genetic diversity and species diversity are expected to covary according to area and isolation, but may not always covary with environmental heterogeneity. In this study, we examined how patterns of genetic and species diversity in stream fishes correspond to local and regional environmental conditions. To do so, we compared population size, genetic diversity and divergence in central stonerollers (Campostoma anomalum) to measures of species diversity and turnover in stream fish assemblages among similarly sized watersheds across an agriculture-forest land-use gradient in the Little Miami River basin (Ohio, USA). Significant correlations were found in many, but not all, pair-wise comparisons. Allelic richness and species richness were strongly correlated, for example, but diversity measures based on allele frequencies and assemblage structure were not. In-stream conditions related to agricultural land use were identified as significant predictors of genetic diversity and species diversity. Comparisons to population size indicate, however, that genetic diversity and species diversity are not necessarily independent and that variation also corresponds to watershed location and glaciation history in the drainage basin. Our findings demonstrate that genetic diversity and species diversity can covary in stream fish assemblages, and illustrate the potential importance of scaling observations to capture responses to hierarchical environmental variation. More comparisons according to life history variation could further improve understanding of conditions that give rise to parallel variation in genetic diversity and species diversity, which in turn could improve diagnosis of anthropogenic influences on aquatic ecosystems.
A Scaling Study by Pair-Wise Comparison Method: Friend Choosing in Adolescents
ERIC Educational Resources Information Center
Özmercan, Esra Eminoglu; Kumandas, Hatice
2016-01-01
This study aims to identify the perception levels of characteristics considered important to choose friends by adolescents from secondary education and to scale them with pair-wise comparison judgements. In this respect, this study was conducted with 100 10th grade students from a state vocational high school located in Marmara region in Turkey.…
Hyperbolic Cosine Latent Trait Models for Unfolding Direct Responses and Pairwise Preferences.
ERIC Educational Resources Information Center
Andrich, David
1995-01-01
The hyperbolic cosine unfolding model for direct responses (HCMDR) of persons to individual stimuli is elaborated in three ways. The specialization of the second parameter is shown to be a property of the data, and not arbitrary. The HCMDR is used to construct an elegant model for pairwise preferences. (SLD)
Godoy, Oscar; Stouffer, Daniel B; Kraft, Nathan J B; Levine, Jonathan M
2017-02-27
Intransitive competition is often projected to be a widespread mechanism of species coexistence in ecological communities. However, it is unknown how much of the coexistence we observe in nature results from this mechanism when species interactions are also stabilized by pairwise niche differences. We combined field-parameterized models of competition among 18 annual plant species with tools from network theory to quantify the prevalence of intransitive competitive relationships. We then analyzed the predicted outcome of competitive interactions with and without pairwise niche differences. Intransitive competition was found for just 15 to 19% of the 816 possible triplets, and this mechanism was never sufficient to stabilize the coexistence of the triplet when the pairwise niche differences between competitors were removed. Of the transitive and intransitive triplets, only four were predicted to coexist and these were more similar in multidimensional trait space defined by 11 functional traits than non-coexisting triplets. Our results argue that intransitive competition may be less frequent than recently posed, and that even when it does operate, pairwise niche differences may be key to possible coexistence. This article is protected by copyright. All rights reserved.
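Checking a triplet for intransitivity reduces to looking for a rock-paper-scissors cycle in the pairwise dominance relations. A toy sketch (the paper infers dominance from field-parameterized competition models; here it is just a hand-written table):

```python
from itertools import combinations

def intransitive_triplets(beats):
    """List rock-paper-scissors triplets in a pairwise dominance table.
    beats[i][j] is True when species i competitively excludes species j."""
    species = sorted(beats)
    intrans = []
    for i, j, k in combinations(species, 3):
        cycle_fwd = beats[i][j] and beats[j][k] and beats[k][i]   # i>j>k>i
        cycle_rev = beats[j][i] and beats[k][j] and beats[i][k]   # i>k>j>i
        if cycle_fwd or cycle_rev:
            intrans.append((i, j, k))
    return intrans
```

For three species where A beats B, B beats C, and C beats A, the function reports one intransitive triplet; a strict hierarchy (A beats B and C, B beats C) reports none.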
Oğul, Hasan; Mumcuoğlu, Erkan U
2006-08-01
A new method based on probabilistic suffix trees (PSTs) is defined for pairwise comparison of distantly related protein sequences. The new definition is adopted in a discriminative framework for protein classification using pairwise sequence similarity scores in feature encoding. The framework uses support vector machines (SVMs) to separate structurally similar and dissimilar examples. The new discriminative system, which we call as SVM-PST, has been tested for SCOP family classification task, and compared with existing discriminative methods SVM-BLAST and SVM-Pairwise, which use BLAST similarity scores and dynamic-programming-based alignment scores, respectively. Results have shown that SVM-PST is more accurate than SVM-BLAST and competitive with SVM-Pairwise. In terms of computational efficiency, PST-based comparison is much better than dynamic-programming-based alignment. We also compared our results with the original family-based PST approach from which we were inspired. The present method provides a significantly better solution for protein classification in comparison with the family-based PST model.
From Markovian to pairwise epidemic models and the performance of moment closure approximations.
Taylor, Michael; Simon, Péter L; Green, Darren M; House, Thomas; Kiss, Istvan Z
2012-05-01
Many, if not all, models of disease transmission on networks can be linked to the exact state-based Markovian formulation. However, the large number of equations for any system of realistic size limits their applicability to small populations. As a result, most modelling work relies on simulation and pairwise models. In this paper, for a simple SIS dynamics on an arbitrary network, we formalise the link between a well-known pairwise model and the exact Markovian formulation. This involves the rigorous derivation of the exact ODE model at the level of pairs in terms of the expected number of pairs and triples. The exact system is then closed using two different closures, one well established and one that has been recently proposed. A new interpretation of both closures is presented, which explains several of their previously observed properties. The closed dynamical systems are solved numerically and the results are compared to output from individual-based stochastic simulations. This is done for a range of networks with the same average degree and clustering coefficient but generated using different algorithms. It is shown that the ability of the pairwise system to accurately model an epidemic is fundamentally dependent on the underlying large-scale network structure. We show that the existing pairwise models are a good fit for certain types of network but have to be used with caution as higher-order network structures may compromise their effectiveness.
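The structure of a closed pairwise model is easy to sketch. Below is a minimal Euler integration of the standard pairwise SIS equations on a regular network of degree n, with triples closed as [XSY] ≈ ((n−1)/n)[XS][SY]/[S]; the parameter values are illustrative, and this is the generic textbook form of the model rather than the exact system derived in the paper:

```python
def pairwise_sis(tau=1.0, gamma=1.0, n=4, N=1000.0, i0=10.0, dt=0.001, t_end=20.0):
    """Closed pairwise SIS model on a regular network of degree n.
    Returns the final prevalence I/N. tau: per-link transmission rate,
    gamma: recovery rate."""
    S, I = N - i0, i0
    SI = n * S * I / N            # initially uncorrelated pair counts
    SS = n * S * S / N
    II = n * I * I / N
    zeta = (n - 1) / n            # closure prefactor
    for _ in range(int(t_end / dt)):
        SSI = zeta * SS * SI / S  # closed triples [SSI], [ISI]
        ISI = zeta * SI * SI / S
        dS = gamma * I - tau * SI
        dSI = gamma * (II - SI) + tau * (SSI - ISI - SI)
        dSS = 2 * gamma * SI - 2 * tau * SSI
        dII = -2 * gamma * II + 2 * tau * (ISI + SI)
        S += dt * dS
        I = N - S                 # individuals are conserved
        SI += dt * dSI
        SS += dt * dSS
        II += dt * dII
    return I / N
```

The pair equations conserve [SS] + 2[SI] + [II] = nN, mirroring the exact pair-level bookkeeping described in the abstract; only the closure step is approximate.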
Efficient pairwise RNA structure prediction and alignment using sequence alignment constraints
Dowell, Robin D; Eddy, Sean R
2006-01-01
Background We are interested in the problem of predicting secondary structure for small sets of homologous RNAs, by incorporating limited comparative sequence information into an RNA folding model. The Sankoff algorithm for simultaneous RNA folding and alignment is a basis for approaches to this problem. There are two open problems in applying a Sankoff algorithm: development of a good unified scoring system for alignment and folding and development of practical heuristics for dealing with the computational complexity of the algorithm. Results We use probabilistic models (pair stochastic context-free grammars, pairSCFGs) as a unifying framework for scoring pairwise alignment and folding. A constrained version of the pairSCFG structural alignment algorithm was developed which assumes knowledge of a few confidently aligned positions (pins). These pins are selected based on the posterior probabilities of a probabilistic pairwise sequence alignment. Conclusion Pairwise RNA structural alignment improves on structure prediction accuracy relative to single sequence folding. Constraining on alignment is a straightforward method of reducing the runtime and memory requirements of the algorithm. Five practical implementations of the pairwise Sankoff algorithm – this work (Consan), David Mathews' Dynalign, Ian Holmes' Stemloc, Ivo Hofacker's PMcomp, and Jan Gorodkin's FOLDALIGN – have comparable overall performance with different strengths and weaknesses. PMID:16952317
NASA Astrophysics Data System (ADS)
Zylberberg, Joel; Shea-Brown, Eric
2015-12-01
While recent recordings from neural populations show beyond-pairwise, or higher-order, correlations (HOC), we have little understanding of how HOC arise from network interactions and of how they impact encoded information. Here, we show that input nonlinearities imply HOC in spin-glass-type statistical models. We then discuss one such model with parametrized pairwise- and higher-order interactions, revealing conditions under which beyond-pairwise interactions increase the mutual information between a given stimulus type and the population responses. For jointly Gaussian stimuli, coding performance is improved by shaping output HOC only when neural firing rates are constrained to be low. For stimuli with skewed probability distributions (like natural image luminances), performance improves for all firing rates. Our work suggests surprising connections between nonlinear integration of neural inputs, stimulus statistics, and normative theories of population coding. Moreover, it suggests that the inclusion of beyond-pairwise interactions could improve the performance of Boltzmann machines for machine learning and signal processing applications.
Marion, Zachary H; Fordyce, James A; Fitzpatrick, Benjamin M
2017-01-30
Beta diversity is an important metric in ecology quantifying differentiation or disparity in composition among communities, ecosystems, or phenotypes. To compare systems with different sizes (N, number of units within a system), beta diversity is often converted to related indices such as turnover or local/regional differentiation. Here we use simulations to demonstrate that these naive measures of dissimilarity depend on sample size and design. We show that when N is the number of sampled units (e.g., quadrats) rather than the "true" number of communities in the system (if such exists), these differentiation measures are biased estimators. We propose using average pairwise dissimilarity as an intuitive solution. That is, instead of attempting to estimate an N-community measure, we advocate estimating the expected dissimilarity between any random pair of communities (or sampling units), especially when the "true" N is unknown or undefined. Fortunately, measures of pairwise dissimilarity or overlap have been used in ecology for decades, and their properties are well known. Using the same simulations, we show that average pairwise metrics give consistent and unbiased estimates regardless of the number of survey units sampled. We advocate pairwise dissimilarity as a general standardization to ensure commensurability of different study systems. This article is protected by copyright. All rights reserved.
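The proposed standardization is straightforward to compute. A minimal sketch using Bray-Curtis dissimilarity as the pairwise measure (the argument applies to any pairwise dissimilarity or overlap index):

```python
from itertools import combinations

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors:
    0 for identical communities, 1 for completely disjoint ones."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den if den else 0.0

def mean_pairwise_dissimilarity(communities):
    """Average dissimilarity over all pairs of sampled units; its expectation
    does not depend on how many units were sampled."""
    pairs = list(combinations(communities, 2))
    return sum(bray_curtis(x, y) for x, y in pairs) / len(pairs)
```

Because each pair contributes one expectation of the same quantity, adding more sampled units only tightens the estimate rather than shifting it, which is the unbiasedness point made in the abstract.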
Non-pairwise additivity of the leading-order dispersion energy
Hollett, Joshua W.
2015-02-28
The leading-order (i.e., dipole-dipole) dispersion energy is calculated for one-dimensional (1D) and two-dimensional (2D) infinite lattices, and an infinite 1D array of infinitely long lines, of doubly occupied locally harmonic wells. The dispersion energy is decomposed into pairwise and non-pairwise additive components. By varying the force constant and separation of the wells, the non-pairwise additive contribution to the dispersion energy is shown to depend on the overlap of density between neighboring wells. As well separation is increased, the non-pairwise additivity of the dispersion energy decays. The different rates of decay for 1D and 2D lattices of wells are explained in terms of a Jacobian effect that influences the number of nearest neighbors. For an array of infinitely long lines of wells spaced 5 bohrs apart, and an inter-well spacing of 3 bohrs within a line, the non-pairwise additive component of the leading-order dispersion energy is −0.11 kJ mol⁻¹ well⁻¹, which is 7% of the total. The polarizability of the wells and the density overlap between them are small in comparison to that of the atomic densities that arise from the molecular density partitioning used in post-density-functional theory (DFT) damped dispersion corrections, or DFT-D methods. Therefore, the nonadditivity of the leading-order dispersion observed here is a conservative estimate of that in molecular clusters.
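The pairwise-additive part of such a lattice sum is simple to reproduce. A sketch for the 1D case with the London form E(r) = −C6/r⁶ (the C6 value and lattice are illustrative; the non-pairwise component requires the coupled-oscillator treatment and is not captured here):

```python
def pairwise_dispersion_1d(n_wells, spacing, c6=1.0):
    """Pairwise-additive leading-order dispersion energy per well for a
    finite 1D lattice: sum of -c6/r**6 over all pairs, divided by n."""
    e = 0.0
    for i in range(n_wells):
        for j in range(i + 1, n_wells):
            r = (j - i) * spacing
            e += -c6 / r ** 6
    return e / n_wells
```

As the lattice grows, the per-well energy converges to −C6·ζ(6)/a⁶ ≈ −1.0173·C6/a⁶, i.e. only about 1.7% beyond the nearest-neighbor term, reflecting the rapid 1/r⁶ decay; increasing the spacing weakens the attraction accordingly.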
Sparse estimation of a covariance matrix.
Bien, Jacob; Tibshirani, Robert J
2011-12-01
We suggest a method for estimating a covariance matrix on the basis of a sample of vectors drawn from a multivariate normal distribution. In particular, we penalize the likelihood with a lasso penalty on the entries of the covariance matrix. This penalty plays two important roles: it reduces the effective number of parameters, which is important even when the dimension of the vectors is smaller than the sample size since the number of parameters grows quadratically in the number of variables, and it produces an estimate which is sparse. In contrast to sparse inverse covariance estimation, our method's close relative, the sparsity attained here is in the covariance matrix itself rather than in the inverse matrix. Zeros in the covariance matrix correspond to marginal independencies; thus, our method performs model selection while providing a positive definite estimate of the covariance. The proposed penalized maximum likelihood problem is not convex, so we use a majorize-minimize approach in which we iteratively solve convex approximations to the original nonconvex problem. We discuss tuning parameter selection and demonstrate on a flow-cytometry dataset how our method produces an interpretable graphical display of the relationship between variables. We perform simulations that suggest that simple elementwise thresholding of the empirical covariance matrix is competitive with our method for identifying the sparsity structure. Additionally, we show how our method can be used to solve a previously studied special case in which a desired sparsity pattern is prespecified.
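The elementwise-thresholding baseline the authors compare against takes only a few lines. A sketch of one common variant, soft-thresholding the off-diagonal entries of the empirical covariance:

```python
import numpy as np

def soft_threshold_cov(S, lam):
    """Soft-threshold the off-diagonal entries of an empirical covariance S:
    shrink each entry toward zero by lam, zeroing small ones.
    The diagonal (variances) is left untouched."""
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T
```

Zeros produced this way correspond to estimated marginal independencies, as in the penalized-likelihood approach; unlike that approach, however, simple thresholding does not guarantee a positive definite result.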
Concordance between criteria for covariate model building.
Hennig, Stefanie; Karlsson, Mats O
2014-04-01
When performing a population pharmacokinetic modelling analysis, covariates are often added to the model. Such additions are often justified by improved goodness of fit and/or a decrease in unexplained (random) parameter variability. Increased goodness of fit is most commonly measured by the decrease in the objective function value. Parameter variability can be defined as the sum of unexplained (random) and explained (predictable) variability. An increase in the magnitude of explained parameter variability could be another possible criterion for judging improvement in the model. The agreement between these three criteria in diagnosing covariate-parameter relationships of different strengths and nature was explored using stochastic simulations and estimations, as well as by assessing covariate-parameter relationships in four previously published real data examples. Total estimated parameter variability was found to vary with the number of covariates introduced on the parameter. In the simulated examples and two real examples, the parameter variability increased with increasing number of included covariates. For the other real examples parameter variability decreased or did not change systematically with the addition of covariates. The three criteria were highly correlated, with the decrease in unexplained variability being more closely associated with changes in objective function values than increases in explained parameter variability were. The often used assumption that inclusion of covariates in models only shifts unexplained parameter variability to explained parameter variability appears not to be true, which may have implications for modelling decisions.
Social Capital: Does It Add to the Health Inequalities Debate?
ERIC Educational Resources Information Center
Chappell, Neena L.; Funk, Laura M.
2010-01-01
This paper empirically examines the relationship between advantage, social capital and health status to assess (a) whether social capital adds explanatory power to what we already know about the relationship between advantage and health and (b) whether social capital adds anything beyond its component parts, namely social participation and trust.…
Discovering Focus: Helping Students with ADD (Attention Deficit Disorder)
ERIC Educational Resources Information Center
Valkenburg, Jim
2012-01-01
Attention Deficit Disorder (ADD) is a neurological disorder that affects learning and has a confusing set of diagnostic symptoms and an even more confusing set of remedies, ranging from medication to meditation to nothing at all. Current neurological research suggests, however, that there are strategies that the individual with ADD can use to…
Dyslexia and ADD: 20 Questions Parents Ask. Children with Disabilities.
ERIC Educational Resources Information Center
Pickering, Joyce S.
2002-01-01
This article uses a question-answer format to present information for parents on dyslexia and attention deficit disorders (ADD). Information includes typical behaviors and skills of children with dyslexia or ADD, how parents can help their children, and the use of medication to control hyperactivity. (KB)
2015-01-01
Background A wealth of protein interaction data has become available in recent years, creating an urgent need for powerful analysis techniques. In this context, the problem of finding biologically meaningful correspondences between different protein-protein interaction networks (PPIN) is of particular interest. The PPIN of a species can be compared with that of other species through the process of PPIN alignment. Such an alignment can provide insight into basic problems like species evolution and network component function determination, as well as translational problems such as target identification and elucidation of mechanisms of disease spread. Furthermore, multiple PPINs can be aligned simultaneously, expanding the analytical implications of the result. While there are several pairwise network alignment algorithms, few methods are capable of multiple network alignment (MNA). Results We propose SMAL, an MNA algorithm based on the philosophy of scaffold-based alignment. SMAL is capable of converting results from any global pairwise alignment algorithm into an MNA in linear time. Using this method, we have built multiple network alignments based on combining pairwise alignments from a number of publicly available (pairwise) network aligners. We tested SMAL using PPINs of eight species derived from the IntAct repository and employed a number of measures to evaluate performance. Additionally, as part of our experimental investigations, we compared the effectiveness of SMAL while aligning up to eight input PPINs, and examined the effect of scaffold network choice on the alignments. Conclusions A key advantage of SMAL lies in its ability to create MNAs through the use of pairwise network aligners for which native MNA implementations do not exist. Experiments indicate that the performance of SMAL was comparable to that of the native MNA implementation of established methods such as IsoRankN and SMETANA. However, in terms of computational time, SMAL was significantly faster
Covariance Spectroscopy for Fissile Material Detection
Rusty Trainham, Jim Tinsley, Paul Hurley, Ray Keegan
2009-06-02
Nuclear fission produces multiple prompt neutrons and gammas at each fission event. The resulting daughter nuclei continue to emit delayed radiation as neutrons boil off, beta decay occurs, etc. All of the radiations are causally connected, and therefore correlated. The correlations are generally positive, but when different decay channels compete, so that some radiations tend to exclude others, negative correlations could also be observed. A similar problem of reduced complexity is that of cascade radiation, whereby a simple radioactive decay produces two or more correlated gamma rays at each decay. Covariance is the usual means for measuring correlation, and techniques of covariance mapping may be useful to produce distinct signatures of special nuclear materials (SNM). A covariance measurement can also be used to filter data streams because uncorrelated signals are largely rejected. The technique is generally more effective than a coincidence measurement. In this poster, we concentrate on cascades and the covariance filtering problem.
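The filtering property described here, with correlated radiations surviving while uncorrelated background averages out, can be seen in a toy two-detector simulation. The counting rates below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000                              # time bins

# Correlated source: each event emits into both detectors (e.g., a gamma cascade).
source = rng.poisson(0.05, n)
det1 = source + rng.poisson(0.5, n)      # independent background in channel 1
det2 = source + rng.poisson(0.5, n)      # independent background in channel 2

# The cross-channel covariance recovers Var(source); the ten-times-larger
# uncorrelated background contributes nothing on average.
cov = np.cov(det1, det2)[0, 1]
```

Here the background dominates each channel's count rate, yet the covariance isolates the correlated component, which is the sense in which covariance filtering outperforms a plain coincidence count.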
Using Incidence Sampling to Estimate Covariances.
ERIC Educational Resources Information Center
Knapp, Thomas R.
1979-01-01
This paper presents the generalized symmetric means approach to the estimation of population covariances, complete with derivations and examples. Particular attention is paid to the problem of missing data, which is handled very naturally in the incidence sampling framework. (CTM)
Covariation bias in panic-prone individuals.
Pauli, P; Montoya, P; Martz, G E
1996-11-01
Covariation estimates between fear-relevant (FR; emergency situations) or fear-irrelevant (FI; mushrooms and nudes) stimuli and an aversive outcome (electrical shock) were examined in 10 high-fear (panic-prone) and 10 low-fear respondents. When the relation between slide category and outcome was random (illusory correlation), only high-fear participants markedly overestimated the contingency between FR slides and shocks. However, when there was a high contingency of shocks following FR stimuli (83%) and a low contingency of shocks following FI stimuli (17%), the group difference vanished. Reversal of contingencies back to random induced a covariation bias for FR slides in high- and low-fear respondents. Results indicate that panic-prone respondents show a covariation bias for FR stimuli and that the experience of a high contingency between FR slides and aversive outcomes may foster such a covariation bias even in low-fear respondents.
Conformally covariant parametrizations for relativistic initial data
NASA Astrophysics Data System (ADS)
Delay, Erwann
2017-01-01
We revisit the Lichnerowicz-York method, and an alternative method of York, in order to obtain some conformally covariant systems. This type of parametrization is certainly more natural for non-constant mean curvature initial data.
Combining biomarkers for classification with covariate adjustment.
Kim, Soyoung; Huang, Ying
2017-03-09
Combining multiple markers can improve classification accuracy compared with using a single marker. In practice, covariates associated with markers or disease outcome can affect the performance of a biomarker or biomarker combination in the population. The covariate-adjusted receiver operating characteristic (ROC) curve has been proposed as a tool to tease out the covariate effect in the evaluation of a single marker; this curve characterizes the classification accuracy solely because of the marker of interest. However, research on the effect of covariates on the performance of marker combinations and on how to adjust for the covariate effect when combining markers is still lacking. In this article, we examine the effect of covariates on classification performance of linear marker combinations and propose to adjust for covariates in combining markers by maximizing the nonparametric estimate of the area under the covariate-adjusted ROC curve. The proposed method provides a way to estimate the best linear biomarker combination that is robust to risk model assumptions underlying alternative regression-model-based methods. The proposed estimator is shown to be consistent and asymptotically normally distributed. We conduct simulations to evaluate the performance of our estimator in cohort and case/control designs and compare several different weighting strategies during estimation with respect to efficiency. Our estimator is also compared with alternative regression-model-based estimators or estimators that maximize the empirical area under the ROC curve, with respect to bias and efficiency. We apply the proposed method to a biomarker study from a human immunodeficiency virus vaccine trial. Copyright © 2017 John Wiley & Sons, Ltd.
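The core operation, choosing linear weights to maximize the empirical area under the ROC curve, can be sketched for two markers with a simple grid search over directions (the paper's covariate adjustment and asymptotic theory are not reproduced, and the data below are toy values):

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of (case, control) pairs ranked correctly,
    counting ties as 1/2."""
    sp = np.asarray(scores_pos, dtype=float)[:, None]
    sn = np.asarray(scores_neg, dtype=float)[None, :]
    return (sp > sn).mean() + 0.5 * (sp == sn).mean()

def best_linear_combination(x_pos, x_neg, n_angles=180):
    """Grid-search the direction w = (cos t, sin t) maximizing the empirical
    AUC of the combination x @ w for two markers. AUC depends only on the
    direction of w, so a 1-D angular grid suffices; the sign is free."""
    best_w, best_auc = None, -1.0
    for t in np.linspace(0, np.pi, n_angles, endpoint=False):
        w = np.array([np.cos(t), np.sin(t)])
        auc = empirical_auc(x_pos @ w, x_neg @ w)
        auc = max(auc, 1 - auc)          # flipping the sign of w flips the AUC
        if auc > best_auc:
            best_w, best_auc = w, auc
    return best_w, best_auc
```

Because the empirical AUC is a step function of the weights, grid or direct search is a natural fit here; gradient-based maximization requires smoothing, which is one of the practical issues the regression-model-based alternatives sidestep.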
Breeding curvature from extended gauge covariance
NASA Astrophysics Data System (ADS)
Aldrovandi, R.
1991-05-01
Independence between spacetime and “internal” space in gauge theories is related to the adjoint-covariant behaviour of the gauge potential. The usual gauge scheme is modified to allow a coupling between both spaces. Gauging spacetime translations produces field equations similar to Einstein equations. A curvature-like quantity of mixed differential-algebraic character emerges. Enlarged conservation laws are present, pointing to the presence of an extended covariance.
Covariate analysis of bivariate survival data
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
Noncommutative Gauge Theory with Covariant Star Product
Zet, G.
2010-08-04
We present a noncommutative gauge theory with covariant star product on a space-time with torsion. In order to obtain the covariant star product one imposes some restrictions on the connection of the space-time. Then, a noncommutative gauge theory is developed applying this product to the case of differential forms. Some comments on the advantages of using a space-time with torsion to describe the gravitational field are also given.
Covariant action for type IIB supergravity
NASA Astrophysics Data System (ADS)
Sen, Ashoke
2016-07-01
Taking clues from the recent construction of the covariant action for type II and heterotic string field theories, we construct a manifestly Lorentz covariant action for type IIB supergravity, and discuss its gauge fixing maintaining manifest Lorentz invariance. The action contains a (non-gravitating) free 4-form field besides the usual fields of type IIB supergravity. This free field, being completely decoupled from the interacting sector, has no physical consequence.
Phase-covariant quantum cloning of qudits
Fan Heng; Imai, Hiroshi; Matsumoto, Keiji; Wang, Xiang-Bin
2003-02-01
We study the phase-covariant quantum cloning machine for qudits, i.e., the input states in a d-level quantum system have complex coefficients with arbitrary phase but constant modulus. A cloning unitary transformation is proposed. After optimizing the fidelity between the input state and the single-qudit reduced density operator of the output state, we obtain the optimal fidelity for 1 to 2 phase-covariant quantum cloning of qudits and the corresponding cloning transformation.
Lorentz covariance of loop quantum gravity
NASA Astrophysics Data System (ADS)
Rovelli, Carlo; Speziale, Simone
2011-05-01
The kinematics of loop gravity can be given a manifestly Lorentz-covariant formulation: the conventional SU(2)-spin-network Hilbert space can be mapped to a space K of SL(2,C) functions, where Lorentz covariance is manifest. K can be described in terms of a certain subset of the projected spin networks studied by Livine, Alexandrov and Dupuis. It is formed by SL(2,C) functions completely determined by their restriction on SU(2). These are square-integrable in the SU(2) scalar product, but not in the SL(2,C) one. Thus, SU(2)-spin-network states can be represented by Lorentz-covariant SL(2,C) functions, as two-component photons can be described in the Lorentz-covariant Gupta-Bleuler formalism. As shown by Wolfgang Wieland in a related paper, this manifestly Lorentz-covariant formulation can also be directly obtained from canonical quantization. We show that the spinfoam dynamics of loop quantum gravity is locally SL(2,C)-invariant in the bulk, and yields states that are precisely in K on the boundary. This clarifies how the SL(2,C) spinfoam formalism yields an SU(2) theory on the boundary. These structures define a tidy Lorentz-covariant formalism for loop gravity.
Low-dimensional Representation of Error Covariance
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
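The forecast/analysis cycle described above can be sketched by iterating the Kalman filter covariance recursion to steady state for a small linear system and inspecting the spectrum of the analysis error covariance. The dynamics, noise levels, and single-component observation operator below are our own illustrative assumptions, not the paper's examples:

```python
import numpy as np

n = 8
A = 0.95 * np.eye(n) + 0.05 * np.diag(np.ones(n - 1), 1)  # stable, non-normal dynamics
Q = 0.01 * np.eye(n)                                      # model error covariance
H = np.zeros((1, n)); H[0, 0] = 1.0                       # observe first component only
R = np.array([[0.1]])                                     # observation error covariance

Pa = np.eye(n)
for _ in range(2000):                                     # forecast/analysis cycle
    Pf = A @ Pa @ A.T + Q                                 # forecast step
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)        # Kalman gain
    Pa = (np.eye(n) - K @ H) @ Pf                         # analysis step

eigvals = np.sort(np.linalg.eigvalsh(Pa))[::-1]           # spectrum of analysis covariance
dominance = eigvals[:2].sum() / eigvals.sum()             # variance fraction in leading modes
```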
Pairwise entanglement and critical behavior of an anisotropic ferrimagnetic spin chain
NASA Astrophysics Data System (ADS)
Solano-Carrillo, E.; Franco, R.; Silva-Valencia, J.
2011-02-01
We studied the quantum phase transition revealed to occur in the ferrimagnetic mixed-spin (S,s)=(1,1/2) chain with positive crystal-field anisotropy D under an external magnetic field by using concepts from quantum information theory such as pairwise entanglement and purity, incorporated into the density matrix renormalization group algorithm. In this system, a magnetization plateau appears at 1/3 of the saturation magnetization for all values of D except for a critical value Dc where it vanishes. We obtained this value Dc=1.11445±0.00065 within the thermodynamic limit as local minima of the pairwise entanglement and the purity. Moreover, using this procedure we were able to investigate the second-order (or continuous) character of the quantum phase transition.
Pan, Dongbo; Lu, Xi; Liu, Juan; Deng, Yong
2014-01-01
Decision-making, as a way to discover the preference of ranking, has been used in various fields. However, owing to the uncertainty in group decision-making, how to rank alternatives from incomplete pairwise comparisons has become an open issue. In this paper, an improved method is proposed for ranking alternatives from incomplete pairwise comparisons using Dempster-Shafer evidence theory and information entropy. Firstly, taking the probability assignment of the chosen preference into consideration, the comparison of alternatives for each group is addressed. Experiments verified that the information entropy of the data itself can objectively determine the different weight of each group's choices. Numerical examples in group decision-making environments are used to test the effectiveness of the proposed method. Moreover, the divergence of the ranking mechanism is analyzed briefly in the conclusion section. PMID:25250393
Pairwise adaptive thermostats for improved accuracy and stability in dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Leimkuhler, Benedict; Shang, Xiaocheng
2016-11-01
We examine the formulation and numerical treatment of dissipative particle dynamics (DPD) and momentum-conserving molecular dynamics. We show that it is possible to improve both the accuracy and the stability of DPD by employing a pairwise adaptive Langevin thermostat that precisely matches the dynamical characteristics of DPD simulations (e.g., autocorrelation functions) while automatically correcting thermodynamic averages using a negative feedback loop. In the low friction regime, it is possible to replace DPD by a simpler momentum-conserving variant of the Nosé-Hoover-Langevin method based on thermostatting only pairwise interactions; we show that this method has an extra order of accuracy for an important class of observables (a superconvergence result), while also allowing larger timesteps than alternatives. All the methods mentioned in the article are easily implemented. Numerical experiments are performed in both equilibrium and nonequilibrium settings, using Lees-Edwards boundary conditions to induce shear flow.
Pairwise Classifier Ensemble with Adaptive Sub-Classifiers for fMRI Pattern Analysis.
Kim, Eunwoo; Park, HyunWook
2017-02-01
The multi-voxel pattern analysis technique is applied to fMRI data for classification of high-level brain functions using pattern information distributed over multiple voxels. In this paper, we propose a classifier ensemble for multiclass classification in fMRI analysis, exploiting the fact that specific neighboring voxels can contain spatial pattern information. The proposed method converts the multiclass classification to a pairwise classifier ensemble, and each pairwise classifier consists of multiple sub-classifiers using an adaptive feature set for each class-pair. Simulated and real fMRI data were used to verify the proposed method. Intra- and inter-subject analyses were performed to compare the proposed method with several well-known classifiers, including single and ensemble classifiers. The comparison results showed that the proposed method can be generally applied to multiclass classification in both simulations and real fMRI analyses.
NgsRelate: a software tool for estimating pairwise relatedness from next-generation sequencing data
Korneliussen, Thorfinn Sand; Moltke, Ida
2015-01-01
Motivation: Pairwise relatedness estimation is important in many contexts such as disease mapping and population genetics. However, all existing estimation methods are based on called genotypes, which is not ideal for next-generation sequencing (NGS) data of low depth from which genotypes cannot be called with high certainty. Results: We present a software tool, NgsRelate, for estimating pairwise relatedness from NGS data. It provides maximum likelihood estimates that are based on genotype likelihoods instead of genotypes and thereby takes the inherent uncertainty of the genotypes into account. Using both simulated and real data, we show that NgsRelate provides markedly better estimates for low-depth NGS data than two state-of-the-art genotype-based methods. Availability: NgsRelate is implemented in C++ and is available under the GNU license at www.popgen.dk/software. Contact: ida@binf.ku.dk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26323718
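A toy sketch of the underlying idea, not NgsRelate's actual model: marginalize over genotype uncertainty using genotype likelihoods rather than comparing best-guess calls. All numerical values below are invented for illustration:

```python
import numpy as np

# Genotype likelihoods P(data | G) for one biallelic site, order (AA, Aa, aa),
# for two low-depth samples. These values are made up for the example.
gl1 = np.array([0.70, 0.25, 0.05])
gl2 = np.array([0.10, 0.60, 0.30])

freq_a = 0.4                                  # population frequency of allele 'a'
p = np.array([(1 - freq_a) ** 2,              # Hardy-Weinberg genotype priors
              2 * freq_a * (1 - freq_a),
              freq_a ** 2])

post1 = gl1 * p / (gl1 * p).sum()             # posterior genotype probabilities
post2 = gl2 * p / (gl2 * p).sum()

# Probability the two samples carry identical genotypes at this site,
# marginalizing over genotype uncertainty instead of using best calls.
p_same_soft = float(post1 @ post2)
p_same_hard = float(post1.argmax() == post2.argmax())
```

With hard calls the samples simply "disagree" at this site; the likelihood-weighted version retains the substantial probability that they match.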
Covariance Modifications to Subspace Bases
Harris, D B
2008-11-19
Adaptive signal processing algorithms that rely upon representations of signal and noise subspaces often require updates to those representations when new data become available. Subspace representations frequently are estimated from available data with singular value decompositions (SVDs). Subspace updates require modifications to these decompositions. Updates can be performed inexpensively provided they are low-rank. A substantial literature on SVD updates exists, frequently focusing on rank-1 updates (see e.g. [Karasalo, 1986; Comon and Golub, 1990; Badeau, 2004]). In these methods, data matrices are modified by addition or deletion of a row or column, or data covariance matrices are modified by addition of the outer product of a new vector. A recent paper by Brand [2006] provides a general and efficient method for arbitrary rank updates to an SVD. The purpose of this note is to describe a closely-related method for applications where right singular vectors are not required. This note also describes the application of SVD updates to a particular scenario of interest in seismic array signal processing. The particular application involves updating the wideband subspace representation used in seismic subspace detectors [Harris, 2006]. These subspace detectors generalize waveform correlation algorithms to detect signals that lie in a subspace of waveforms of dimension d {ge} 1. They potentially are of interest because they extend the range of waveform variation over which these sensitive detectors apply. Subspace detectors operate by projecting waveform data from a detection window into a subspace specified by a collection of orthonormal waveform basis vectors (referred to as the template). Subspace templates are constructed from a suite of normalized, aligned master event waveforms that may be acquired by a single sensor, a three-component sensor, an array of such sensors or a sensor network. The template design process entails constructing a data matrix whose columns contain the
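The projection step of such a subspace detector can be sketched as follows; the basis construction and synthetic waveforms are our own illustration, not the templates used in the cited work:

```python
import numpy as np

rng = np.random.default_rng(1)
L, d = 256, 2                                 # window length, subspace dimension

# Build an orthonormal template basis from two toy "master" waveforms via thin SVD.
t = np.arange(L)
masters = np.stack([np.sin(2 * np.pi * 5 * t / L),
                    np.sin(2 * np.pi * 9 * t / L)], axis=1)
U, _, _ = np.linalg.svd(masters, full_matrices=False)     # L x d orthonormal basis

def detection_statistic(window, U):
    """Fraction of window energy captured by the subspace (between 0 and 1)."""
    proj = U @ (U.T @ window)
    return float(proj @ proj) / float(window @ window)

# A window containing a signal from the subspace scores near 1; noise scores near d/L.
signal = masters @ np.array([1.0, 0.5]) + 0.1 * rng.normal(size=L)
noise = rng.normal(size=L)
stat_signal = detection_statistic(signal, U)
stat_noise = detection_statistic(noise, U)
```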
Accounting for pairwise distance restraints in FFT-based protein-protein docking.
Xia, Bing; Vajda, Sandor; Kozakov, Dima
2016-11-01
ClusPro is a heavily used protein-protein docking server based on the fast Fourier transform (FFT) correlation approach. While FFT enables global docking, accounting for pairwise distance restraints using penalty terms in the scoring function is computationally expensive. We use a different approach and directly select low energy solutions that also satisfy the given restraints. As expected, accounting for restraints generally improves the rank of near native predictions, while retaining or even improving the numerical efficiency of FFT based docking.
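The selection strategy described above can be sketched with invented pose data: rank poses by energy globally, then keep only those whose restrained atom-pair distance falls in the allowed interval. The pose representation and cutoffs are assumptions, not ClusPro's internals:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical docking output: an energy score and the distance (in Angstroms)
# between one restrained atom pair, for each of 1000 candidate poses.
energies = rng.normal(0, 5, 1000)
distances = rng.uniform(2, 30, 1000)

dmin, dmax = 3.0, 8.0                         # the restraint: 3 A <= d <= 8 A
n_keep = 100                                  # low-energy poses retained from global docking

order = np.argsort(energies)                  # global low-energy ranking (no penalty term)
low_energy = order[:n_keep]                   # top poses by energy alone
satisfied = low_energy[(distances[low_energy] >= dmin)
                       & (distances[low_energy] <= dmax)]  # post-hoc restraint filter
```

Filtering after scoring keeps the FFT stage unchanged, which is the numerical-efficiency point made in the abstract.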
NASA Technical Reports Server (NTRS)
Carreno, Victor A.
2015-01-01
Pair-wise Trajectory Management (PTM) is a cockpit-based delegated responsibility separation standard. When an air traffic service provider gives a PTM clearance to an aircraft and the flight crew accepts the clearance, the flight crew will maintain spacing and separation from a designated aircraft. A PTM along-track algorithm will receive state information from the designated aircraft and from the own ship to produce speed guidance for the flight crew to maintain spacing and separation.
Bao, Yiming; Chetvernin, Vyacheslav; Tatusova, Tatiana
2014-12-01
The number of viral genome sequences in the public databases is increasing dramatically, and these sequences are playing an important role in virus classification. Pairwise sequence comparison is a sequence-based virus classification method. A program using this method calculates the pairwise identities of virus sequences within a virus family and displays their distribution, and visual analysis helps to determine demarcations at different taxonomic levels such as strain, species, genus and subfamily. Subsequent comparison of new sequences against existing ones allows viruses from which the new sequences were derived to be classified. Although this method cannot be used as the only criterion for virus classification in some cases, it is a quantitative method and has many advantages over conventional virus classification methods. It has been applied to several virus families, and there is an increasing interest in using this method for other virus families/groups. The Pairwise Sequence Comparison (PASC) classification tool was created at the National Center for Biotechnology Information. The tool's database stores pairwise identities for complete genomes/segments of 56 virus families/groups. Data in the system are updated every day to reflect changes in virus taxonomy and additions of new virus sequences to the public database. The web interface of the tool ( http://www.ncbi.nlm.nih.gov/sutils/pasc/ ) makes it easy to navigate and perform analyses. Multiple new viral genome sequences can be tested simultaneously with this system to suggest the taxonomic position of virus isolates in a specific family. PASC eliminates potential discrepancies in the results caused by different algorithms and/or different data used by researchers.
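A minimal sketch of the pairwise-identity computation this kind of classification builds on (the real tool works on complete genomes or segments with proper alignments; the sequences and gap handling here are simplified toy examples):

```python
def pairwise_identity(a, b):
    """Percent identity of two aligned, equal-length sequences ('-' marks gaps)."""
    assert len(a) == len(b)
    matches = sum(x == y and x != '-' for x, y in zip(a, b))
    aligned = sum(x != '-' or y != '-' for x, y in zip(a, b))
    return 100.0 * matches / aligned

# Toy aligned sequences standing in for viral genomes.
seqs = {"strainA": "ATGGCGTACGTT",
        "strainB": "ATGGCGTACGAT",   # close to strainA
        "strainC": "ATGACGTTCGAT"}   # more diverged

# All pairwise identities; their distribution is what suggests taxonomic demarcations.
ids = {(i, j): pairwise_identity(seqs[i], seqs[j])
       for i in seqs for j in seqs if i < j}
```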
A Simple Mechanism for Beyond-Pairwise Correlations in Integrate-and-Fire Neurons.
Leen, David A; Shea-Brown, Eric
2015-12-01
The collective dynamics of neural populations are often characterized in terms of correlations in the spike activity of different neurons. We have developed an understanding of the circuit mechanisms that lead to correlations among cell pairs, but little is known about what determines the population firing statistics among larger groups of cells. Here, we examine this question for a simple, but ubiquitous, circuit feature: common fluctuating input arriving to spiking neurons of integrate-and-fire type. We show that this leads to strong beyond-pairwise correlations-that is, correlations that cannot be captured by maximum entropy models that extrapolate from pairwise statistics-as for earlier work with discrete threshold crossing (dichotomous Gaussian) models. Moreover, we find that the same is true for another widely used, doubly stochastic model of neural spiking, the linear-nonlinear cascade. We demonstrate the strong connection between the collective dynamics produced by integrate-and-fire and dichotomous Gaussian models, and show that the latter is a surprisingly accurate model of the former. Our conclusion is that beyond-pairwise correlations can be both broadly expected and possible to describe by simplified (and tractable) statistical models.
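The dichotomous Gaussian construction mentioned above can be sketched directly: threshold correlated Gaussians driven by a shared common input, then compare the frequency of population-wide spiking events with what independent cells would predict. All parameters are illustrative choices, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_samples = 10, 200_000
rho, thresh = 0.3, 1.0                        # common-input correlation, firing threshold

# Equicorrelated Gaussians: shared common input plus private noise.
common = rng.normal(size=n_samples)
private = rng.normal(size=(n_cells, n_samples))
v = np.sqrt(rho) * common + np.sqrt(1 - rho) * private
spikes = (v > thresh).astype(int)             # binary "spike" patterns

rate = spikes.mean()                          # single-cell firing probability
pair_corr = np.corrcoef(spikes)[0, 1]         # pairwise spike correlation
counts = spikes.sum(axis=0)                   # population spike count per time bin
p_all = (counts == n_cells).mean()            # probability all 10 cells fire together
```

The all-active probability vastly exceeds the independent prediction `rate ** n_cells`, the kind of higher-order structure the abstract describes.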
Morimoto, Chie; Manabe, Sho; Kawaguchi, Takahisa; Kawai, Chihiro; Fujimoto, Shuntaro; Hamano, Yuya; Yamada, Ryo; Matsuda, Fumihiko; Tamaki, Keiji
2016-01-01
We developed a new approach for pairwise kinship analysis in forensic genetics based on chromosomal sharing between two individuals. Here, we defined "index of chromosome sharing" (ICS) calculated using 174,254 single nucleotide polymorphism (SNP) loci typed by SNP microarray and genetic length of the shared segments from the genotypes of two individuals. To investigate the expected ICS distributions from first- to fifth-degree relatives and unrelated pairs, we used computationally generated genotypes to consider the effect of linkage disequilibrium and recombination. The distributions were used for probabilistic evaluation of the pairwise kinship analysis, such as likelihood ratio (LR) or posterior probability, without allele frequencies and haplotype frequencies. Using our method, all actual sample pairs from volunteers showed significantly high LR values (i.e., ≥ 10^8); therefore, we can distinguish distant relationships (up to the fifth-degree) from unrelated pairs based on LR. Moreover, we can determine accurate degrees of kinship in up to third-degree relationships with a probability of > 80% using the criterion of posterior probability ≥ 0.90, even if the kinship of the pair is totally unpredictable. This approach greatly improves pairwise kinship analysis of distant relationships, specifically in cases involving identification of disaster victims or missing persons.
Contamination vs. harm-relevant outcome expectancies and covariation bias in spider phobia.
de Jong, Peter J; Peters, Madelon L
2007-06-01
There is increasing evidence that spiders are not feared because of harmful outcome expectancies but because of disgust and contamination-relevant outcome expectancies. This study investigated the relative strength of contamination- and harm-relevant UCS expectancies and covariation bias in spider phobia. High (n=25) and low (n=24) spider fearful individuals saw a series of slides comprising spiders, pitbulls, maggots, and rabbits. Slides were randomly paired with either a harm-relevant outcome (electrical shock), a contamination-related outcome (drinking of a distasteful fluid), or nothing. Spider fearful individuals displayed a contamination-relevant UCS expectancy bias associated with spiders, whereas controls displayed a harm-relevant expectancy bias. There was no evidence for a (differential) postexperimental covariation bias; thus the biased expectancies were not robust against refutation. The present findings add to the evidence that contamination ideation is critically involved in spider phobia.
Convex Banding of the Covariance Matrix.
Bien, Jacob; Bunea, Florentina; Xiao, Luo
2016-01-01
We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.
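A simplified cousin of the estimator described above (a fixed banded taper rather than the paper's convex, data-adaptive one) already illustrates why banding helps when the true covariance is banded and the variables have a known ordering:

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 30, 50                                 # more variables than is comfortable for n

# True covariance: exactly banded (AR(1)-like, truncated beyond |i - j| > 3).
idx = np.arange(p)
Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])
Sigma[np.abs(idx[:, None] - idx[None, :]) > 3] = 0.0

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)                   # raw sample covariance

k = 3                                         # assumed (here, correctly guessed) bandwidth
taper = (np.abs(idx[:, None] - idx[None, :]) <= k).astype(float)
S_band = S * taper                            # banded estimator: zero out far off-diagonals

err_raw = np.linalg.norm(S - Sigma, ord="fro")
err_band = np.linalg.norm(S_band - Sigma, ord="fro")
```

Zeroing the off-band entries removes pure noise when the truth is banded, so the banded estimator has strictly smaller Frobenius error here.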
A sparse Ising model with covariates.
Cheng, Jie; Levina, Elizaveta; Wang, Pei; Zhu, Ji
2014-12-01
There has been a lot of work fitting Ising models to multivariate binary data in order to understand the conditional dependency relationships between the variables. However, additional covariates are frequently recorded together with the binary data, and may influence the dependence relationships. Motivated by such a dataset on genomic instability collected from tumor samples of several types, we propose a sparse covariate dependent Ising model to study both the conditional dependency within the binary data and its relationship with the additional covariates. This results in subject-specific Ising models, where the subject's covariates influence the strength of association between the genes. As in all exploratory data analysis, interpretability of results is important, and we use ℓ1 penalties to induce sparsity in the fitted graphs and in the number of selected covariates. Two algorithms to fit the model are proposed and compared on a set of simulated data, and asymptotic results are established. The results on the tumor dataset and their biological significance are discussed in detail.
A Nonparametric Prior for Simultaneous Covariance Estimation.
Gaskins, Jeremy T; Daniels, Michael J
2013-01-01
In the modeling of longitudinal data from several groups, appropriate handling of the dependence structure is of central importance. Standard methods include specifying a single covariance matrix for all groups or independently estimating the covariance matrix for each group without regard to the others, but when these model assumptions are incorrect, these techniques can lead to biased mean effects or loss of efficiency, respectively. Thus, it is desirable to develop methods to simultaneously estimate the covariance matrix for each group that will borrow strength across groups in a way that is ultimately informed by the data. In addition, for several groups with covariance matrices of even medium dimension, it is difficult to manually select a single best parametric model among the huge number of possibilities given by incorporating structural zeros and/or commonality of individual parameters across groups. In this paper we develop a family of nonparametric priors using the matrix stick-breaking process of Dunson et al. (2008) that seeks to accomplish this task by parameterizing the covariance matrices in terms of the parameters of their modified Cholesky decomposition (Pourahmadi, 1999). We establish some theoretic properties of these priors, examine their effectiveness via a simulation study, and illustrate the priors using data from a longitudinal clinical trial.
'Pokemon Go' Players Add 2,000 Steps a Day
Top 5 Ways to Help Students with ADD/ADHD
ERIC Educational Resources Information Center
Johnson, Kathy
2011-01-01
This article suggests five ways to help students with ADD/ADHD. These are: (1) Integrate the primitive reflexes; (2) Diet; (3) Visual attention; (4) Help for auditory attention; and (5) Cognitive training.
TDRS-K to Add to Vital Space Network
NASA officials discuss the launch of the TDRS-K spacecraft to add to the space network that enables communications between the International Space Station and Earth-orbiting satellites and ground c...
Upper and lower covariance bounds for perturbed linear systems
NASA Technical Reports Server (NTRS)
Xu, J.-H.; Skelton, R. E.; Zhu, G.
1990-01-01
Both upper and lower bounds are established for state covariance matrices under parameter perturbations of the plant. The motivation for this study lies in the fact that many robustness properties of linear systems are given explicitly in terms of the state covariance matrix. Moreover, there exists a theory for control by covariance assignment. The results provide robustness properties of these covariance controllers.
Yuan, Ke-Hai; Hayashi, Kentaro; Yanagihara, Hirokazu
2007-01-01
Model evaluation in covariance structure analysis is critical before the results can be trusted. Due to finite sample sizes and unknown distributions of real data, existing conclusions regarding a particular statistic may not be applicable in practice. The bootstrap procedure automatically takes care of the unknown distribution and, for a given sample size, also provides more accurate results than those based on standard asymptotics. But the procedure needs a matrix to play the role of the population covariance matrix. The closer the matrix is to the true population covariance matrix, the more valid the bootstrap inference is. The current paper proposes a class of covariance matrices by combining theory and data. Thus, a proper matrix from this class is closer to the true population covariance matrix than those constructed by any existing methods. Each of the covariance matrices is easy to generate and also satisfies several desired properties. An example with nine cognitive variables and a confirmatory factor model illustrates the details for creating population covariance matrices with different misspecifications. When evaluating the substantive model, bootstrap or simulation procedures based on these matrices will lead to more accurate conclusions than those based on artificial covariance matrices.
Progress on Nuclear Data Covariances: AFCI-1.2 Covariance Library
Oblozinsky,P.; Oblozinsky,P.; Mattoon,C.M.; Herman,M.; Mughabghab,S.F.; Pigni,M.T.; Talou,P.; Hale,G.M.; Kahler,A.C.; Kawano,T.; Little,R.C.; Young,P.G
2009-09-28
Improved neutron cross section covariances were produced for 110 materials including 12 light nuclei (coolants and moderators), 78 structural materials and fission products, and 20 actinides. Improved covariances were organized into AFCI-1.2 covariance library in 33-energy groups, from 10{sup -5} eV to 19.6 MeV. BNL contributed improved covariance data for the following materials: {sup 23}Na and {sup 55}Mn where more detailed evaluation was done; improvements in major structural materials {sup 52}Cr, {sup 56}Fe and {sup 58}Ni; improved estimates for remaining structural materials and fission products; improved covariances for 14 minor actinides, and estimates of mubar covariances for {sup 23}Na and {sup 56}Fe. LANL contributed improved covariance data for {sup 235}U and {sup 239}Pu including prompt neutron fission spectra and completely new evaluation for {sup 240}Pu. New R-matrix evaluation for {sup 16}O including mubar covariances is under completion. BNL assembled the library and performed basic testing using improved procedures including inspection of uncertainty and correlation plots for each material. The AFCI-1.2 library was released to ANL and INL in August 2009.
ERIC Educational Resources Information Center
Zeytun, Aysel Sen; Cetinkaya, Bulent; Erbas, Ayhan Kursat
2010-01-01
Various studies suggest that covariational reasoning plays an important role on understanding the fundamental ideas of calculus and modeling dynamic functional events. The purpose of this study was to investigate a group of mathematics teachers' covariational reasoning abilities and predictions about their students. Data were collected through…
NASA Astrophysics Data System (ADS)
Hui, Yi; Law, Siu Seong; Ku, Chiu Jen
2017-02-01
The covariance of the auto/cross-covariance matrix method is studied for the damage identification of a structure, with illustrations of its advantages and limitations. The original method is extended for structures under direct white noise excitations. The auto/cross-covariance function of the measured acceleration and its corresponding derivatives are formulated analytically, and the method is modified in two new strategies to enable successful identification with much fewer sensors. Numerical examples are adopted to illustrate the improved method, and the effects of sampling frequency and sampling duration are discussed. Results show that the covariance of covariance calculated from responses of higher order modes of a structure plays an important role in the accurate identification of local damage in a structure.
Incorporating covariates in skewed functional data models.
Li, Meng; Staicu, Ana-Maria; Bondell, Howard D
2015-07-01
We introduce a class of covariate-adjusted skewed functional models (cSFM) designed for functional data exhibiting location-dependent marginal distributions. We propose a semi-parametric copula model for the pointwise marginal distributions, which are allowed to depend on covariates, and the functional dependence, which is assumed covariate-invariant. The proposed cSFM framework provides a unifying platform for pointwise quantile estimation and trajectory prediction. We consider a computationally feasible procedure that handles densely as well as sparsely observed functional data. The methods are examined numerically using simulations and applied to a new tractography study of multiple sclerosis. Furthermore, the methodology is implemented in the R package cSFM, which is publicly available on CRAN.
FAST NEUTRON COVARIANCES FOR EVALUATED DATA FILES.
HERMAN, M.; OBLOZINSKY, P.; ROCHMAN, D.; KAWANO, T.; LEAL, L.
2006-06-05
We describe the implementation of the KALMAN code in the EMPIRE system and present the first covariance data generated for Gd and Ir isotopes. A complete set of covariances, in the full energy range, was produced for the chain of 8 gadolinium isotopes for total, elastic, capture, total inelastic (MT=4), (n,2n), (n,p) and (n,alpha) reactions. Our correlation matrices, based on a combination of model calculations and experimental data, are characterized by positive mid-range and negative long-range correlations. They differ from the model-generated covariances, which tend to show strong positive long-range correlations, and from those determined solely from experimental data, which result in nearly diagonal matrices. We have studied the shapes of the correlation matrices obtained in the calculations and interpreted them in terms of the underlying reaction models. An important result of this study is the prediction of narrow energy ranges with extremely small uncertainties for certain reactions (e.g., total and elastic).
Covariance based outlier detection with feature selection.
Zwilling, Chris E; Wang, Michelle Y
2016-08-01
The present covariance-based outlier detection algorithm selects, from a candidate set of feature vectors, those that are best at identifying outliers. Features extracted from biomedical and health informatics data can be informative in disease assessment, and there are no restrictions on the nature and number of features that can be tested. An important challenge for an algorithm operating on a set of features, however, is to winnow the effective features from the ineffective ones. The algorithm described in this paper leverages covariance information from the time series data to identify the features with the highest sensitivity for outlier identification. Empirical results demonstrate the efficacy of the method.
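The covariance step at the heart of such a detector can be sketched with the Mahalanobis distance; this is a generic illustration on synthetic data, not the paper's feature-selection algorithm (whose details the abstract does not give):

```python
import numpy as np

def mahalanobis_outliers(X, threshold=3.0):
    """Flag rows of X whose Mahalanobis distance from the sample mean
    exceeds `threshold` (covariance-based outlier detection)."""
    mu = X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)  # squared distances
    return np.sqrt(d2) > threshold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[0] = [10.0, 10.0, 10.0]          # planted outlier
flags = mahalanobis_outliers(X)
print(flags[0])                     # the planted outlier is flagged
```

Because the distance is computed under the fitted covariance, correlated features are handled jointly rather than screened one at a time.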
Sparse Covariance Matrix Estimation With Eigenvalue Constraints.
Liu, Han; Wang, Lie; Zhao, Tuo
2014-04-01
We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online.
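The two constraints the estimator combines can be pictured with a naive alternating sketch; the paper's actual algorithm solves the problem jointly via ADMM, so treat this only as an illustration of the sparsity (soft-thresholding) and positive-definiteness (eigenvalue floor) steps:

```python
import numpy as np

def sparse_pd_cov(S, lam=0.08, eps=1e-3, n_iter=100):
    """Alternate (a) soft-thresholding of off-diagonal entries for sparsity
    and (b) flooring the eigenvalues at eps for positive definiteness.
    A naive alternating sketch, not the paper's ADMM algorithm."""
    Sigma = S.copy()
    for _ in range(n_iter):
        T = np.sign(Sigma) * np.maximum(np.abs(Sigma) - lam, 0.0)
        np.fill_diagonal(T, np.diag(Sigma))      # threshold off-diagonals only
        w, V = np.linalg.eigh(T)
        Sigma = (V * np.maximum(w, eps)) @ V.T   # eigenvalue floor
    return Sigma

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 30))                    # n < p: sample covariance singular
S = np.cov(X, rowvar=False)
Sigma = sparse_pd_cov(S)
print(np.linalg.eigvalsh(Sigma).min())           # >= eps: positive definite
```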
Parametric number covariance in quantum chaotic spectra.
Vinayak; Kumar, Sandeep; Pandey, Akhilesh
2016-03-01
We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.
Covariance Analysis of Gamma Ray Spectra
Trainham, R.; Tinsley, J.
2013-01-01
The covariance method exploits fluctuations in signals to recover information encoded in correlations which are usually lost when signal averaging occurs. In nuclear spectroscopy it can be regarded as a generalization of the coincidence technique. The method can be used to extract signal from uncorrelated noise, to separate overlapping spectral peaks, to identify escape peaks, to reconstruct spectra from Compton continua, and to generate secondary spectral fingerprints. We discuss a few statistical considerations of the covariance method and present experimental examples of its use in gamma spectroscopy.
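A minimal numerical illustration of the idea: channel-by-channel covariance across repeated acquisitions picks out a coincident gamma pair that signal averaging would hide (synthetic data, not an experimental analysis):

```python
import numpy as np

rng = np.random.default_rng(1)
n_shots, n_chan = 5000, 64
# Two lines emitted in coincidence (channels 10 and 40): each shot has a
# Poisson number of decays feeding both channels, on top of uncorrelated noise.
decays = rng.poisson(2.0, n_shots)
spectra = rng.poisson(0.5, (n_shots, n_chan)).astype(float)
spectra[:, 10] += decays
spectra[:, 40] += decays

# The averaged spectrum shows two peaks but not their correlation;
# the off-diagonal covariance singles out the coincident pair.
C = np.cov(spectra, rowvar=False)
print(C[10, 40])   # ~ var(decays) = 2, far above any unrelated channel pair
```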
Pair-wise multicomparison and OPLS analyses of cold-acclimation phases in Siberian spruce.
Shiryaeva, Liudmila; Antti, Henrik; Schröder, Wolfgang P; Strimbeck, Richard; Shiriaev, Anton S
2012-06-01
Analysis of metabolomics data often goes beyond the task of discovering biomarkers and can be aimed at recovering other important characteristics of observed metabolomic changes. In this paper we explore different methods to detect the presence of distinctive phases in seasonally responsive changes of the metabolomic patterns of Siberian spruce (Picea obovata) during cold acclimation, which occurred in the period from mid-August to January. Multivariate analysis, specifically orthogonal projection to latent structures discriminant analysis (OPLS-DA), identified time points where the metabolomic patterns underwent substantial modifications as a whole, revealing four distinctive phases during acclimation. This conclusion was re-examined by a univariate analysis consisting of multiple pair-wise comparisons to identify homogeneity intervals for each metabolite. These tests complemented OPLS-DA, clarifying the biological interpretation of the classification: about 60% of the metabolites found responsive to cold stress indeed changed at one or more of the time points predicted by OPLS-DA. However, the univariate approach did not support the proposed division of the acclimation period into four phases: less than 10% of the metabolites altered during acclimation had the homogeneous levels predicted by OPLS-DA. This demonstrates that coupling the classification found by OPLS-DA with the analysis of the dynamics of individual metabolites obtained by pair-wise multicomparisons yields a more accurate characterization of biochemical processes in freezing-tolerant trees and leads to interpretations that cannot be deduced by either method alone. The combined analysis can be used in other 'omics' studies, where response factors have a causal dependence (like time in the present work) and pair-wise multicomparisons are not conservative. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s11306-011-0304-5) contains supplementary material, which is available to authorized
Pairwise contact energy statistical potentials can help to find probability of point mutations.
Saravanan, K M; Suvaithenamudhan, S; Parthasarathy, S; Selvaraj, S
2017-01-01
To adopt a particular fold, a protein requires several interactions between its amino acid residues. The energetic contribution of these residue-residue interactions can be approximated by extracting statistical potentials from known high-resolution structures. Several methods based on statistical potentials extracted from unrelated proteins have been found to predict the probability of point mutations well. We postulate that statistical potentials extracted from known structures of similar folds with varying sequence identity can be a powerful tool to examine the probability of point mutation. With this in mind, we have derived pairwise residue and atomic contact energy potentials for the different functional families that adopt the (α/β)8 TIM-barrel fold. We carried out computational point mutations at various conserved residue positions in the yeast triosephosphate isomerase enzyme, for which experimental results have already been reported. We also performed molecular dynamics simulations on a subset of point mutants to make a comparative study. The difference in pairwise residue and atomic contact energy between the wildtype and various point mutants reveals the probability of mutation at a particular position. Interestingly, we found that our computational prediction agrees with the experimental studies of Silverman et al. (Proc Natl Acad Sci 2001;98:3092-3097) and performs better than iMutant and the Cologne University Protein Stability Analysis Tool. The present work thus suggests that deriving pairwise contact energy potentials and performing molecular dynamics simulations of functionally important folds could help predict the probability of point mutations, which may ultimately reduce the time and cost of mutation experiments. Proteins 2016; 85:54-64. © 2016 Wiley Periodicals, Inc.
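The general recipe behind such statistical potentials, a quasi-chemical log-odds of observed versus expected contact frequencies, can be sketched as follows. The contact counts here are hypothetical, and the authors' fold-specific potentials involve normalization choices the abstract does not give:

```python
import numpy as np

def contact_potential(counts):
    """Quasi-chemical statistical potential in units of kT:
    e_ab = -ln( f_obs(a,b) / f_exp(a,b) ), where f_exp comes from the
    marginal contact frequencies. `counts` is a symmetric matrix of
    observed residue-type contact counts."""
    f = counts / counts.sum()
    marg = f.sum(axis=1)
    expected = np.outer(marg, marg)
    return -np.log(f / expected)

# Hypothetical two-type example: like-like contacts are enriched.
counts = np.array([[30.0, 10.0],
                   [10.0, 50.0]])
e = contact_potential(counts)
print(e)   # negative (favourable) on the diagonal, positive off it
```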
NASA Astrophysics Data System (ADS)
Daoud, M.; Ahl Laamara, R.
2012-07-01
We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. A special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl-Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger-Horne-Zeilinger states.
Improving pairwise sequence alignment accuracy using near-optimal protein sequence alignments
2010-01-01
Background: While the pairwise alignments produced by sequence similarity searches are a powerful tool for identifying homologous proteins (proteins that share a common ancestor and a similar structure), pairwise sequence alignments often fail to represent accurately the structural alignments inferred from three-dimensional coordinates. Since sequence alignment algorithms produce optimal alignments, the best structural alignments must reflect suboptimal sequence alignment scores. Thus, we have examined a range of suboptimal sequence alignments and a range of scoring parameters to understand better which sequence alignments are likely to be more structurally accurate. Results: We compared near-optimal protein sequence alignments produced by the Zuker algorithm and a set of probabilistic alignments produced by the probA program with structural alignments produced by four different structure alignment algorithms. There is significant overlap between the solution spaces of structural alignments and both the near-optimal sequence alignments produced by commonly used scoring parameters for sequences that share significant sequence similarity (E-values < 10^-5) and the ensemble of probA alignments. We constructed a logistic regression model incorporating three input variables derived from sets of near-optimal alignments: robustness, edge frequency, and maximum bits-per-position. A ROC analysis shows that this model more accurately classifies amino acid pairs (edges in the alignment path graph) according to the likelihood of appearance in structural alignments than the robustness score alone. We investigated various trimming protocols for removing incorrect edges from the optimal sequence alignment; the most effective protocol is to remove matches from the semi-global optimal alignment that are outside the boundaries of the local alignment, although trimming according to the model-generated probabilities achieves a similar level of improvement. The model can also be used to
Pikhitsa, Stanislaw
2017-01-01
We provide a complete classification of possible configurations of mutually pairwise-touching infinite cylinders in Euclidean three-dimensional space. It turns out that there is a maximum number of such cylinders possible in three dimensions, independently of the shape of the cylinder cross-sections. We give an explanation of the uniqueness of the non-trivial configuration of seven equal mutually touching round infinite cylinders found earlier. Some results obtained for the chirality matrix, which is equivalent to the Seidel adjacency matrix, may be found useful for the theory of graphs. PMID:28280575
NASA Astrophysics Data System (ADS)
Li, Yujie; Dai, Yue; Shi, Yu
2017-02-01
Quantum entanglement is the characteristic quantum correlation. Here, we use this concept to analyze the quantum entanglement generated by Schwinger production of particle-antiparticle pairs in an electric field, as well as the change of entanglement as a consequence of the electric field effect on a pre-existing entangled pair of particles. The system is partitioned by using momentum modes. Various kinds of pairwise mode entanglement are calculated as functions of the electric field. Both constant and pulsed electric fields are considered. The use of entanglement exposes information beyond that in particle number distributions.
Observed Score Linear Equating with Covariates
ERIC Educational Resources Information Center
Branberg, Kenny; Wiberg, Marie
2011-01-01
This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…
Economical phase-covariant cloning of qudits
Buscemi, Francesco; D'Ariano, Giacomo Mauro; Macchiavello, Chiara
2005-04-01
We derive the optimal N → M phase-covariant quantum cloning for equatorial states in dimension d with M = kd + N, k integer. The cloning maps are optimal for both global and single-qudit fidelity. The map is achieved by an 'economical' cloning machine, which works without ancilla.
A covariance NMR toolbox for MATLAB and OCTAVE.
Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David
2011-03-01
The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses the MATLAB or, alternatively, the freely available GNU OCTAVE computer language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or OCTAVE.
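Direct covariance processing, the simplest of the listed techniques, can be sketched in a few lines of Python (the toolbox itself is MATLAB/OCTAVE and also implements indirect covariance, GIC, and the Z-matrix transform):

```python
import numpy as np

def matrix_sqrt(A):
    """Square root of a symmetric positive semi-definite matrix via eigh."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def direct_covariance(F):
    """Direct covariance processing of a real 2D data set F (t1 x f2 after
    Fourier transforming the direct dimension): C = (F^T F)^(1/2), which
    yields a symmetric spectrum without a second Fourier transform."""
    return matrix_sqrt(F.T @ F)

rng = np.random.default_rng(0)
F = rng.normal(size=(32, 64))     # stand-in for a real 2D spectrum
C = direct_covariance(F)
print(C.shape)                    # (64, 64), symmetric by construction
```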
Parasail: SIMD C library for global, semi-global, and local pairwise sequence alignments
Daily, Jeffrey A.
2016-02-10
Sequence alignment algorithms are a key component of many bioinformatics applications. Though various fast Smith-Waterman local sequence alignment implementations have been developed for x86 CPUs, most are embedded into larger database search tools. In addition, fast implementations of Needleman-Wunsch global sequence alignment and its semi-global variants are not as widespread. This article presents the first software library for local, global, and semi-global pairwise intra-sequence alignments and improves the performance of previous intra-sequence implementations. As a result, a faster intra-sequence pairwise alignment implementation is described and benchmarked. Using a 375 residue query sequence a speed of 136 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon E5-2670 12-core processor system, the highest reported for an implementation based on Farrar’s ’striped’ approach. When using only a single thread, parasail was 1.7 times faster than Rognes’s SWIPE. For many score matrices, parasail is faster than BLAST. The software library is designed for 64 bit Linux, OS X, or Windows on processors with SSE2, SSE41, or AVX2. Source code is available from https://github.com/jeffdaily/parasail under the Battelle BSD-style license. In conclusion, applications that require optimal alignment scores could benefit from the improved performance. For the first time, SIMD global, semi-global, and local alignments are available in a stand-alone C library.
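The scalar kernel that such libraries vectorize is the textbook Needleman-Wunsch recurrence; one inner-loop iteration is one 'cell update', the unit behind the GCUPS figure quoted above. A plain Python sketch:

```python
import numpy as np

def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Textbook Needleman-Wunsch global alignment score. Each H[i, j]
    assignment is one cell update; SIMD implementations such as parasail
    compute many of these per instruction."""
    H = np.zeros((len(a) + 1, len(b) + 1))
    H[:, 0] = gap * np.arange(len(a) + 1)
    H[0, :] = gap * np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(H[i - 1, j - 1] + s,   # (mis)match
                          H[i - 1, j] + gap,     # gap in b
                          H[i, j - 1] + gap)     # gap in a
    return H[-1, -1]

print(nw_score("GATTACA", "GCATGCU"))  # classic example: optimal score 0
```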
COLLECTIVE PAIRWISE CLASSIFICATION FOR MULTI-WAY ANALYSIS OF DISEASE AND DRUG DATA
ZITNIK, MARINKA; ZUPAN, BLAZ
2015-01-01
Interactions between drugs, drug targets or diseases can be predicted on the basis of molecular, clinical and genomic features by, for example, exploiting similarity of disease pathways, chemical structures, activities across cell lines or clinical manifestations of diseases. A successful way to better understand complex interactions in biomedical systems is to employ collective relational learning approaches that can jointly model diverse relationships present in multiplex data. We propose a novel collective pairwise classification approach for multi-way data analysis. Our model leverages the superiority of latent factor models and classifies relationships in a large relational data domain using a pairwise ranking loss. In contrast to current approaches, our method estimates probabilities, such that probabilities for existing relationships are higher than for assumed-to-be-negative relationships. Although our method bears correspondence with the maximization of non-differentiable area under the ROC curve, we were able to design a learning algorithm that scales well on multi-relational data encoding interactions between thousands of entities. We use the new method to infer relationships from multiplex drug data and to predict connections between clinical manifestations of diseases and their underlying molecular signatures. Our method achieves promising predictive performance when compared to state-of-the-art alternative approaches and can make “category-jumping” predictions about diseases from genomic and clinical data generated far outside the molecular context. PMID:26776175
A Pairwise Naïve Bayes Approach to Bayesian Classification
Betensky, Rebecca A.
2016-01-01
Despite the relatively high accuracy of the naïve Bayes (NB) classifier, there may be several instances where it is not optimal, i.e. does not have the same classification performance as the Bayes classifier utilizing the joint distribution of the examined attributes. However, the Bayes classifier can be computationally intractable due to its required knowledge of the joint distribution. Therefore, we introduce a “pairwise naïve” Bayes (PNB) classifier that incorporates all pairwise relationships among the examined attributes, but does not require specification of the joint distribution. In this paper, we first describe the necessary and sufficient conditions under which the PNB classifier is optimal. We then discuss sufficient conditions for which the PNB classifier, and not NB, is optimal for normal attributes. Through simulation and actual studies, we evaluate the performance of our proposed classifier relative to the Bayes and NB classifiers, along with the HNB, AODE, LBR and TAN classifiers, using normal density and empirical estimation methods. Our applications show that the PNB classifier using normal density estimation yields the highest accuracy for data sets containing continuous attributes. We conclude that it offers a useful compromise between the Bayes and NB classifiers. PMID:27087730
A water market simulator considering pair-wise trades between agents
NASA Astrophysics Data System (ADS)
Huskova, I.; Erfani, T.; Harou, J. J.
2012-04-01
In many basins in England no further water abstraction licences are available. Trading water between water rights holders has been recognized as a potentially effective and economically efficient strategy to mitigate increasing scarcity. A screening tool that could assess the potential for trade through realistic simulation of individual water rights holders would help assess the solution's potential contribution to local water management. We propose an optimisation-driven water market simulator that predicts pair-wise trade in a catchment and represents its interaction with natural hydrology and engineered infrastructure. A model is used to emulate licence-holders' willingness to engage in short-term trade transactions. In their simplest form agents are represented using an economic benefit function. The working hypothesis is that trading behaviour can be partially predicted based on differences in marginal values of water over space and time and estimates of transaction costs on pair-wise trades. We discuss the further possibility of embedding rules, norms and preferences of the different water user sectors to more realistically represent the behaviours, motives and constraints of individual licence holders. The potential benefits and limitations of such a social simulation (agent-based) approach is contrasted with our simulator where agents are driven by economic optimization. A case study based on the Dove River Basin (UK) demonstrates model inputs and outputs. The ability of the model to suggest impacts of water rights policy reforms on trading is discussed.
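In its simplest form, the pair-wise trading rule described above reduces to comparing marginal values against a transaction cost; the licence holders and numbers below are hypothetical:

```python
# Hypothetical licence holders and their marginal value of water (per Ml).
# A pair-wise trade is predicted whenever the buyer's marginal value exceeds
# the seller's by more than the transaction cost -- the simplest form of the
# economic-optimization rule the simulator builds on.
marginal_value = {"A": 5.0, "B": 12.0, "C": 7.0}
transaction_cost = 3.0

trades = [(seller, buyer)
          for seller in marginal_value for buyer in marginal_value
          if marginal_value[buyer] - marginal_value[seller] > transaction_cost]
print(trades)   # water flows from low-value to high-value users
```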
Wolff, Matthew A.; Xia, Jianlin; Schulten, Klaus
2016-01-01
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle–mesh Ewald method falls short. PMID:27004867
Experimental evidence rejects pairwise modelling approach to coexistence in plant communities
Dormann, Carsten F; Roxburgh, Stephen H
2005-01-01
Competition is often invoked as the cause of plant species loss with increasing system productivity. Experimental results for multispecies assemblages are virtually absent and mathematical models are thus used to explore the relationship between competition and coexistence. Modelling approaches to coexistence and diversity in competitive communities commonly employ Lotka–Volterra-type (LV) models with additive pairwise competitive effects. Using pairwise plant competition experiments, we calibrate the LV system and use it to predict plant biomass and coexistence in six three-species and one seven-species experimental mixture. Our results show that five out of the six three-species sets and the seven-species set deviate significantly from LV model predictions. Fitting an additional non-additive competition coefficient resulted in predictions that more closely matched the experimental results, with stable coexistence suggested in all but one case. These results are discussed with particular reference to the possible underlying mechanisms of coexistence in our experimental community. Modelling the effect of competition intensity on stability indicates that if non-additive effects occur, they will be relevant over a wide range of community sizes. Our findings caution against relying on coexistence predictions based on LV models. PMID:16024393
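The LV system with additive pairwise competition that the experiments are tested against can be sketched as follows (a generic parameterization, not the calibrated coefficients from the paper):

```python
import numpy as np

def lv_equilibrium(r, K, alpha, dt=0.01, steps=20000):
    """Forward-Euler integration of the additive pairwise LV system
    dN_i/dt = r_i N_i (1 - (sum_j alpha_ij N_j) / K_i) to near-equilibrium."""
    N = np.full(len(r), 1.0)
    for _ in range(steps):
        N += dt * r * N * (1.0 - (alpha @ N) / K)
        N = np.clip(N, 0.0, None)
    return N

r = np.array([1.0, 1.0, 1.0])
K = np.array([10.0, 10.0, 10.0])
alpha = np.array([[1.0, 0.5, 0.5],     # interspecific competition weaker
                  [0.5, 1.0, 0.5],     # than intraspecific -> coexistence
                  [0.5, 0.5, 1.0]])
N = lv_equilibrium(r, K, alpha)
print(N)   # all three species settle at K / (row sum of alpha) = 5
```

The paper's point is that fitted pairwise coefficients like these mispredict multispecies mixtures unless non-additive terms are added.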
Pairwise and edge-based models of epidemic dynamics on correlated weighted networks
Rattana, P.; Miller, J.C.; Kiss, I.Z.
2014-01-01
In this paper we explore the potential of the pairwise-type modelling approach to be extended to weighted networks where nodal degree and weights are not independent. As a baseline or null model for weighted networks, we consider undirected, heterogeneous networks where edge weights are randomly distributed. We show that the pairwise model successfully captures the extra complexity of the network, but does this at the cost of limited analytical tractability due to the high number of equations. To circumvent this problem, we employ the edge-based modelling approach to derive models corresponding to two different cases, namely for degree-dependent and randomly distributed weights. These models are more amenable to computing important epidemic descriptors, such as the early growth rate and final epidemic size, and produce similarly excellent agreement with simulation. Using a branching process approach we compute the basic reproductive ratio for both models and discuss the implications of random and correlated weight distributions on this, as well as on the time evolution and final outcome of epidemics. Finally, we illustrate that the two seemingly different modelling approaches, pairwise and edge-based, operate on similar assumptions and it is possible to formally link the two. PMID:25580064
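For comparison with the many-equation pairwise models, the unweighted edge-based model is remarkably compact; for a configuration-model network with Poisson-distributed degrees it reduces to a single ODE in Miller's formulation (sketched here with parameters chosen only for illustration):

```python
import numpy as np

def edge_based_sir(beta, gamma, lam, t_max=40.0, dt=0.001):
    """Miller's edge-based SIR on a configuration-model network with
    Poisson(lam) degrees, so psi(x) = exp(lam*(x-1)):
        theta' = -beta*theta + beta*psi'(theta)/psi'(1) + gamma*(1-theta)
    with S = psi(theta) and R' = gamma*I.  Forward-Euler sketch."""
    theta, R, S_hist = 1.0 - 1e-4, 0.0, []     # small initial infection
    for _ in range(int(t_max / dt)):
        S = np.exp(lam * (theta - 1.0))
        I = max(1.0 - S - R, 0.0)
        S_hist.append(S)
        theta += dt * (-beta * theta
                       + beta * np.exp(lam * (theta - 1.0))
                       + gamma * (1.0 - theta))
        R += dt * gamma * I
    return np.array(S_hist)

# R0 = lam * beta / (beta + gamma) = 3.75 > 1: a large outbreak.
S = edge_based_sir(beta=0.6, gamma=0.2, lam=5.0)
print(S[0], S[-1])
```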
Pairwise Interaction Extended Point Particle (PIEP) Model for a Random Array of Spheres
NASA Astrophysics Data System (ADS)
Akiki, Georges; Jackson, Thomas; Balachandar, Sivaramakrishnan; CenterCompressible Multiphase Turbulence Team
2016-11-01
This study investigates flow past a random array of spherical particles. Understanding the governing forces within these arrays is crucial for obtaining accurate models for particle-laden simulations, which must faithfully reflect the sub-grid interactions between the particles and the continuous phase. The models in use today assume an average force on all particles within the array based on the mean volume fraction and Reynolds number. Here, we develop a model that can compute the drag and lateral forces on each particle by accounting for the precise locations of a few surrounding neighbors. A pairwise interaction is assumed, in which the perturbation flow induced by each neighbor is considered separately and the effects of all neighbors are then linearly superposed to obtain the total perturbation. The Faxén correction is used to quantify the force perturbation due to the presence of the neighbors. The single-neighbor perturbations are mapped in the vicinity of a reference sphere and stored as libraries. We test the Pairwise Interaction Extended Point-Particle (PIEP) model for random arrays at two volume fractions, φ = 0.1 and 0.21, and Reynolds numbers in the range 16 <= Re <= 170. The PIEP model predictions are compared against drag and lift forces obtained from fully resolved DNS performed using the immersed boundary method. We observe that the PIEP model predictions correlate much better with the DNS results than the classical mean drag model prediction.
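The superposition step of the PIEP idea can be sketched as below; the perturbation map here is a made-up upstream-shielding function standing in for the DNS-derived libraries described in the abstract:

```python
import numpy as np

def neighbor_perturbation(rel):
    """Hypothetical perturbation map: relative drag change induced by a
    neighbour at relative position `rel` (flow along +x; an upstream
    neighbour shields the reference sphere, reducing its drag).  This
    stands in for the tabulated DNS-derived libraries of the real model."""
    r = np.linalg.norm(rel)
    return -0.3 * np.exp(-r / 2.0) * max(-rel[0], 0.0) / (r + 1e-12)

def piep_drag(mean_drag, neighbours):
    """Pairwise superposition: each neighbour's perturbation is computed
    independently and the effects are linearly added to the mean
    (volume-fraction- and Reynolds-number-averaged) drag."""
    return mean_drag * (1.0 + sum(neighbor_perturbation(n) for n in neighbours))

neighbours = [np.array([-1.5, 0.2, 0.0]),   # directly upstream -> shielding
              np.array([0.0, 3.0, 0.0])]    # far to the side -> little effect
drag = piep_drag(1.0, neighbours)
print(drag)   # below the mean drag because of the upstream neighbour
```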
Pairwise Operator Learning for Patch Based Single-image Super-resolution.
Tang, Yi; Shao, Ling
2016-12-14
Motivated by the fact that image patches can be inherently represented by matrices, single-image super-resolution is treated in this paper as a problem of learning regression operators in a matrix space. The regression operators that map low-resolution image patches to high-resolution image patches are defined by left and right multiplication operators. The pairwise operators are used to extract the row and column information, respectively, of low-resolution image patches for recovering high-resolution estimations. The patch-based regression algorithm possesses three favorable properties. First, the proposed super-resolution algorithm is efficient during both training and testing, because image patches are treated as matrices. Second, the data storage requirement of the optimal pairwise operator is far less than that of most popular single-image super-resolution algorithms, because only two small matrices need to be stored. Lastly, the super-resolution performance is competitive with most popular single-image super-resolution algorithms, because both row and column information of image patches is considered. Experimental results show the efficiency and effectiveness of the proposed patch-based single-image super-resolution algorithm.
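A generic sketch of the pairwise-operator idea: fit left and right multiplication operators by alternating exact least squares on synthetic patch pairs. This is not the authors' training objective, only an illustration that the bilinear map A L B is learnable:

```python
import numpy as np

def fit_pairwise_operators(Ls, Hs, n_hi, n_iter=100, seed=0):
    """Fit left/right operators A, B so that A @ L @ B ~ H for training
    pairs (L low-res patch, H high-res patch), alternating exact least
    squares in A (B fixed) and in B (A fixed)."""
    m = Ls[0].shape[0]
    B = np.random.default_rng(seed).normal(size=(m, n_hi))
    for _ in range(n_iter):
        Xs = [L @ B for L in Ls]                              # solve for A
        A = sum(H @ X.T for H, X in zip(Hs, Xs)) @ np.linalg.pinv(
            sum(X @ X.T for X in Xs))
        Ys = [A @ L for L in Ls]                              # solve for B
        B = np.linalg.pinv(sum(Y.T @ Y for Y in Ys)) @ sum(
            Y.T @ H for Y, H in zip(Ys, Hs))
    return A, B

# Synthetic check: data generated by a true operator pair is recovered.
rng = np.random.default_rng(1)
A0, B0 = rng.normal(size=(8, 4)), rng.normal(size=(4, 8))
Ls = [rng.normal(size=(4, 4)) for _ in range(50)]   # 4x4 low-res patches
Hs = [A0 @ L @ B0 for L in Ls]                      # 8x8 high-res patches
A, B = fit_pairwise_operators(Ls, Hs, n_hi=8)
err = np.mean([np.linalg.norm(A @ L @ B - H) for L, H in zip(Ls, Hs)])
print(err)
```

Note the storage economy the abstract mentions: only the 8x4 and 4x8 operators are kept, not a dictionary of patch pairs.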
Crystal Structure of the lambda Repressor and a Model for Pairwise Cooperative Operator Binding
Stayrook,S.; Jaru-Ampornpan, P.; Ni, J.; Hochschild, A.; Lewis, M.
2008-01-01
Bacteriophage λ has for many years been a model system for understanding mechanisms of gene regulation. A 'genetic switch' enables the phage to transition from lysogenic growth to lytic development when triggered by specific environmental conditions. The key component of the switch is the cI repressor, which binds to two sets of three operator sites on the chromosome that are separated by about 2,400 base pairs (bp). A hallmark of the system is the pairwise cooperativity of repressor binding. In the absence of detailed structural information, it has been difficult to understand fully how repressor molecules establish the cooperativity complex. Here we present the X-ray crystal structure of the intact cI repressor dimer bound to a DNA operator site. The structure of the repressor, determined by multiple isomorphous replacement methods, reveals an unusual overall architecture that allows it to adopt a conformation that appears to facilitate pairwise cooperative binding to adjacent operator sites.
Evaluating the use of pairwise dissimilarity metrics in paleoanthropology.
Gordon, Adam D; Wood, Bernard
2013-10-01
Questions of alpha taxonomy are best addressed by comparing unknown specimens to samples of the taxa to which they might belong. However, analysis of the hominin fossil record is riddled with methods that claim to evaluate whether pairs of individual fossils belong to the same species. Two such methods, log sem and the related STET method, have been introduced and used in studies of fossil hominins. Both methods attempt to quantify morphological dissimilarity for a pair of fossils and then evaluate a null hypothesis of conspecificity using the assumption that pairs of fossils that fall beneath a predefined dissimilarity threshold are likely to belong to the same species, whereas pairs of fossils above that threshold are likely to belong to different species. In this contribution, we address (1) whether these particular methods do what they claim to do, and (2) whether such approaches can ever reliably address the question of conspecificity. We show that log sem and STET do not reliably measure deviations from shape similarity, and that values of these measures for any pair of fossils are highly dependent upon the number of variables compared. To address these issues we develop a measure of shape dissimilarity, the Standard Deviation of Logged Ratios (sLR). We suggest that while pairwise dissimilarity metrics that accurately measure deviations from isometry (e.g., sLR) may be useful for addressing some questions that relate to morphological variation, no pairwise method can reliably answer the question of whether two fossils are conspecific.
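Taking sLR at face value as the standard deviation of logged measurement ratios, a minimal sketch shows the key property claimed for it: isometrically scaled specimens score zero, while shape differences score positive:

```python
import numpy as np

def sLR(x, y):
    """Standard Deviation of Logged Ratios across homologous measurements
    of two specimens: zero when the specimens differ only by isometric
    scaling, increasingly positive as their shapes diverge.  A sketch of
    the measure named in the abstract, not the authors' full protocol."""
    return np.std(np.log(np.asarray(x) / np.asarray(y)), ddof=1)

a = np.array([10.0, 20.0, 5.0, 8.0])          # hypothetical measurements
print(sLR(a, 2.0 * a))                        # isometric copy -> 0.0
print(sLR(a, a * np.array([1.0, 1.5, 0.7, 1.2])))  # shape change -> > 0
```

Unlike a raw dissimilarity threshold, the logged ratio is scale-invariant, which is the deviation-from-isometry property the authors require.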
PIPER: an FFT-based protein docking program with pairwise potentials.
Kozakov, Dima; Brenke, Ryan; Comeau, Stephen R; Vajda, Sandor
2006-11-01
The Fast Fourier Transform (FFT) correlation approach to protein-protein docking can evaluate the energies of billions of docked conformations on a grid if the energy is described in the form of a correlation function. Here, this restriction is removed, and the approach is efficiently used with pairwise interaction potentials that substantially improve the docking results. The basic idea is approximating the interaction matrix by its eigenvectors corresponding to the few dominant eigenvalues, resulting in an energy expression written as the sum of a few correlation functions, and solving the problem by repeated FFT calculations. In addition to describing how the method is implemented, we present a novel class of structure-based pairwise intermolecular potentials. The DARS (Decoys As the Reference State) potentials are extracted from structures of protein-protein complexes and use large sets of docked conformations as decoys to derive atom pair distributions in the reference state. The current version of the DARS potential works well for enzyme-inhibitor complexes. With the new FFT-based program, DARS provides much better docking results than the earlier approaches, in many cases generating 50% more near-native docked conformations. Although the potential is far from optimal for antibody-antigen pairs, the results are still slightly better than those given by an earlier FFT method. The docking program PIPER is freely available for noncommercial applications.
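The low-rank trick described above can be sketched in one dimension: approximating a symmetric pairwise interaction matrix by its dominant eigenvectors turns the total energy over all translations into a sum of scalar correlations, each evaluable with one FFT pair. A toy sketch (not PIPER's actual code; the grids and the interaction matrix are invented, and circular shifts stand in for rigid-body translations):

```python
import numpy as np

rng = np.random.default_rng(0)
n_types, N = 3, 16

# Symmetric pairwise interaction matrix between atom types.
eps = rng.normal(size=(n_types, n_types))
eps = (eps + eps.T) / 2

# Occupancy grids: R[a, i] = 1 if an atom of type a sits at grid cell i.
R = (rng.random((n_types, N)) < 0.3).astype(float)
L = (rng.random((n_types, N)) < 0.3).astype(float)

# Direct evaluation of the pairwise energy at every circular shift t.
direct = np.array([
    sum(eps[a, b] * np.dot(R[a], np.roll(L[b], -t))
        for a in range(n_types) for b in range(n_types))
    for t in range(N)
])

# Eigendecomposition eps = sum_k lam_k u_k u_k^T turns the energy into a
# sum of n_types scalar correlations, each computed via the FFT
# cross-correlation theorem.
lam, U = np.linalg.eigh(eps)
fft_energy = np.zeros(N)
for k in range(n_types):
    p = U[:, k] @ R          # projected receptor grid
    q = U[:, k] @ L          # projected ligand grid
    corr = np.fft.ifft(np.conj(np.fft.fft(p)) * np.fft.fft(q)).real
    fft_energy += lam[k] * corr

print(np.allclose(direct, fft_energy))  # True
```

With all eigenpairs retained the two evaluations agree exactly; the saving in the paper comes from truncating to the few dominant eigenvalues.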
Statistical properties of pairwise distances between leaves on a random Yule tree.
Sheinman, Michael; Massip, Florian; Arndt, Peter F
2015-01-01
A Yule tree is the result of a branching process with constant birth and death rates. Such a process serves as an instructive null model of many empirical systems, for instance, the evolution of species leading to a phylogenetic tree. However, often in phylogeny the only available information is the pairwise distances between a small fraction of extant species representing the leaves of the tree. In this article we study statistical properties of the pairwise distances in a Yule tree. Using a method based on a recursion, we derive an exact, analytic and compact formula for the expected number of pairs separated by a certain time distance. This number turns out to follow an increasing exponential function. This property of a Yule tree can serve as a simple test for empirical data to be well described by a Yule process. We further use this recursive method to calculate the expected number of the n-most closely related pairs of leaves and the number of cherries separated by a certain time distance. To make our results more useful for realistic scenarios, we explicitly take into account that the leaves of a tree may be incompletely sampled and derive a criterion for poorly sampled phylogenies. We show that our result can account for empirical data, using two families of bird species.
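The pairwise-distance distribution studied above is easy to probe by forward simulation. A small sketch assuming a pure-birth (death rate zero) Yule process: each lineage records the times of the splits on its ancestry, so the MRCA of two leaves is the last split time their ancestries share, and the pairwise time distance is twice the elapsed time since that split.

```python
import random
from itertools import combinations

def yule_pairwise_distances(n_leaves, birth_rate=1.0, seed=1):
    """Simulate a pure-birth (Yule) tree and return pairwise leaf distances."""
    rng = random.Random(seed)
    t = 0.0
    lineages = [()]  # each lineage: tuple of split times on its ancestry
    while len(lineages) < n_leaves:
        t += rng.expovariate(birth_rate * len(lineages))
        parent = lineages.pop(rng.randrange(len(lineages)))
        lineages += [parent + (t,), parent + (t,)]
    t += rng.expovariate(birth_rate * n_leaves)  # observe after the last split
    dists = []
    for a, b in combinations(lineages, 2):
        t_mrca = 0.0                     # root time if nothing is shared
        for x, y in zip(a, b):
            if x != y:
                break
            t_mrca = x                   # last split time shared by both
        dists.append(2.0 * (t - t_mrca))
    return dists

d = yule_pairwise_distances(50)
print(len(d))  # 50 * 49 / 2 = 1225 pairs
```

Histogramming `d` over many replicates approximates the expected pair counts whose closed form the article derives.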
Walton, Jay R; Rivera-Rivera, Luis A; Lucchese, Robert R; Bevan, John W
2016-05-26
Force-based canonical approaches have recently given a unified but different viewpoint on the nature of bonding in pairwise interatomic interactions. Differing molecular categories (covalent, ionic, van der Waals, hydrogen, and halogen bonding) of representative interatomic interactions with binding energies ranging from 1.01 to 1072.03 kJ/mol have been modeled canonically, giving a rigorous semiempirical verification to high accuracy. However, the fundamental physical basis expected to provide the inherent characteristics of these canonical transformations has not yet been elucidated. Subsequently, it was shown through direct numerical differentiation of these potentials that their associated force curves have canonical shapes. However, this approach to analyzing force results in inherent loss of accuracy coming from numerical differentiation of the potentials. We now show that this serious obstruction can be avoided by directly demonstrating the canonical nature of force distributions from the perspective of the Hellmann-Feynman theorem. This requires only differentiation of explicitly known Coulombic potentials, and we discuss how this approach to canonical forces can be used to further explain the nature of chemical bonding in pairwise interatomic interactions. All parameter values used in the canonical transformation are determined through explicitly physics-based algorithms, and the approach does not require direct consideration of electron correlation effects.
Determination of ensemble-average pairwise root mean-square deviation from experimental B-factors.
Kuzmanic, Antonija; Zagrovic, Bojan
2010-03-03
Root mean-square deviation (RMSD) after roto-translational least-squares fitting is a commonly used measure of global structural similarity of macromolecules. On the other hand, experimental x-ray B-factors are used frequently to study local structural heterogeneity and dynamics in macromolecules by providing direct information about root mean-square fluctuations (RMSF) that can also be calculated from molecular dynamics simulations. We provide a mathematical derivation showing that, given a set of conservative assumptions, a root mean-square ensemble-average of an all-against-all distribution of pairwise RMSD for a single molecular species,
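The abstract is cut off before stating the derived relation, so the following sketch makes two labeled assumptions: the standard isotropic conversion B = (8π²/3)⟨Δr²⟩ between a crystallographic B-factor and the mean square fluctuation, and a factor-of-two relation ⟨RMSD²⟩ = 2⟨Δr²⟩ between the ensemble-average pairwise RMSD and the mean square fluctuation, taken here as a stand-in for the paper's result.

```python
import math

def b_to_rmsf(b_factor):
    """Isotropic B-factor (Å^2) to per-atom RMSF (Å): B = (8*pi^2/3)*<dr^2>."""
    return math.sqrt(3.0 * b_factor / (8.0 * math.pi ** 2))

def ensemble_rmsd_from_b(b_factors):
    """Assumed relation <RMSD^2> = 2 * mean(<dr^2>) applied to B-factors."""
    msf = [3.0 * b / (8.0 * math.pi ** 2) for b in b_factors]
    return math.sqrt(2.0 * sum(msf) / len(msf))

b = [20.0, 35.0, 50.0]   # typical crystallographic B-factors in Å^2
print([round(b_to_rmsf(x), 3) for x in b])   # per-atom RMSF in Å
print(round(ensemble_rmsd_from_b(b), 3))     # ensemble-average pairwise RMSD
```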
Janes, Holly; Pepe, Margaret S
2009-06-01
Recent scientific and technological innovations have produced an abundance of potential markers that are being investigated for their use in disease screening and diagnosis. In evaluating these markers, it is often necessary to account for covariates associated with the marker of interest. Covariates may include subject characteristics, expertise of the test operator, test procedures or aspects of specimen handling. In this paper, we propose the covariate-adjusted receiver operating characteristic curve, a measure of covariate-adjusted classification accuracy. Nonparametric and semiparametric estimators are proposed, asymptotic distribution theory is provided and finite sample performance is investigated. For illustration we characterize the age-adjusted discriminatory accuracy of prostate-specific antigen as a biomarker for prostate cancer.
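One standard nonparametric route to a covariate-adjusted ROC curve is via placement values: each case's marker is standardized against the covariate-matched control distribution, and the adjusted curve is the empirical CDF of those placement values. A sketch under that construction (discrete covariate strata and synthetic Gaussian data, invented for illustration):

```python
import numpy as np

def covariate_adjusted_roc(cases, controls, grid):
    """Nonparametric covariate-adjusted ROC via placement values.

    cases, controls: dict mapping covariate stratum -> array of markers.
    A case's placement value is the fraction of same-stratum controls
    at or above its marker; AROC(t) is the empirical CDF of those values.
    """
    pvs = []
    for z, ys in cases.items():
        ref = np.sort(controls[z])
        for y in ys:
            # fraction of stratum-z controls with marker >= y
            pvs.append(1.0 - np.searchsorted(ref, y, side="left") / len(ref))
    pvs = np.array(pvs)
    return np.array([(pvs <= t).mean() for t in grid])

rng = np.random.default_rng(0)
# Marker shifts upward both with disease and with the covariate z,
# so ignoring z would inflate apparent accuracy.
cases = {z: rng.normal(loc=1.0 + z, size=200) for z in (0, 1)}
controls = {z: rng.normal(loc=0.0 + z, size=200) for z in (0, 1)}
grid = np.linspace(0.0, 1.0, 5)
aroc = covariate_adjusted_roc(cases, controls, grid)
print(aroc)  # nondecreasing, ends at 1.0
```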
ProADD: A database on Protein Aggregation Diseases
Shobana, Ramesh; Pandaranayaka, Eswari PJ
2014-01-01
ProADD, a database for protein aggregation diseases, is developed to organize the data under a single platform to facilitate easy access for researchers. Diseases caused due to protein aggregation and the proteins involved in each of these diseases are integrated. The database helps in classification of proteins involved in the protein aggregation diseases based on sequence and structural analysis. Analysis of proteins can be done to mine patterns prevailing among the aggregating proteins. Availability: http://bicmku.in/ProADD PMID:25097386
Ly, Cheng; Middleton, Jason W; Doiron, Brent
2012-01-01
The responses of cortical neurons are highly variable across repeated presentations of a stimulus. Understanding this variability is critical for theories of both sensory and motor processing, since response variance affects the accuracy of neural codes. Despite this influence, the cellular and circuit mechanisms that shape the trial-to-trial variability of population responses remain poorly understood. We used a combination of experimental and computational techniques to uncover the mechanisms underlying response variability of populations of pyramidal (E) cells in layer 2/3 of rat whisker barrel cortex. Spike trains recorded from pairs of E-cells during either spontaneous activity or whisker-deflection responses show similarly low levels of spiking co-variability, despite large differences in network activation between the two states. We developed network models that show how spike threshold non-linearities dilute E-cell spiking co-variability during spontaneous activity and low velocity whisker deflections. In contrast, during high velocity whisker deflections, cancellation mechanisms mediated by feedforward inhibition maintain low E-cell pairwise co-variability. Thus, the combination of these two mechanisms ensures low E-cell population variability over a wide range of whisker deflection velocities. Finally, we show how this active decorrelation of population variability leads to a drastic increase in the population information about whisker velocity. The prevalence of spiking non-linearities and feedforward inhibition in the nervous system suggests that the mechanisms for low network variability presented in our study may generalize throughout the brain.
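Spiking co-variability of a cell pair is conventionally quantified as the Pearson correlation of trial-wise spike counts. A toy doubly stochastic sketch (the model and parameters are invented for illustration) showing how a shared input induces correlated counts that each cell's private Poisson noise dilutes:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 2000

# A common input modulates both cells' firing rates on each trial.
common = rng.normal(size=n_trials)
rate1 = np.exp(1.0 + 0.3 * common)
rate2 = np.exp(1.0 + 0.3 * common)

# Conditional on the rates, spiking is independent Poisson per cell.
counts1 = rng.poisson(rate1)
counts2 = rng.poisson(rate2)

def spike_count_correlation(a, b):
    """Pearson correlation of trial-wise spike counts (co-variability)."""
    return np.corrcoef(a, b)[0, 1]

rho = spike_count_correlation(counts1, counts2)
print(round(rho, 2))  # positive but well below 1: private noise dilutes it
```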
Gao, Feng; Manatunga, Amita K; Chen, Shande
2007-02-20
Often in biomedical and epidemiologic studies, estimating the hazard function is of interest. Breslow's estimator is commonly used for estimating the integrated baseline hazard, but this estimator requires the functional form of covariate effects to be correctly specified. It is generally difficult to identify the true functional form of covariate effects in the presence of time-dependent covariates. To provide a complementary method to the traditional proportional hazards model, we propose a tree-type method which enables simultaneous estimation of both the baseline hazard function and the effects of time-dependent covariates. Our interest is focused on exploring potential data structures rather than formal hypothesis testing. The proposed method approximates the baseline hazard and covariate effects with step functions. The jump points in time and in covariate space are searched via an algorithm based on the improvement of the full log-likelihood function. In contrast to most other estimating methods, the proposed method estimates the hazard function rather than the integrated hazard. The method is applied to model the risk of withdrawal in a clinical trial that evaluates an antidepressant treatment for preventing the development of clinical depression. Finally, the performance of the method is evaluated by several simulation studies.
Construction of Covariance Functions with Variable Length Fields
NASA Technical Reports Server (NTRS)
Gaspari, Gregory; Cohn, Stephen E.; Guo, Jing; Pawson, Steven
2005-01-01
This article focuses on construction, directly in physical space, of three-dimensional covariance functions parametrized by a tunable length field, and on an application of this theory to reproduce the Quasi-Biennial Oscillation (QBO) in the Goddard Earth Observing System, Version 4 (GEOS-4) data assimilation system. These covariance models are referred to as multi-level or nonseparable, to associate them with the application where a multi-level covariance with a large troposphere-to-stratosphere length field gradient is used to reproduce the QBO from sparse radiosonde observations in the tropical lower stratosphere. The multi-level covariance functions extend well-known single-level covariance functions depending only on a length scale. Generalizations of the first- and third-order autoregressive covariances in three dimensions are given, providing multi-level covariances with zero and three derivatives at zero separation, respectively. Multi-level piecewise rational covariances with two continuous derivatives at zero separation are also provided. Multi-level power-law covariances are constructed with continuous derivatives of all orders. Additional multi-level covariance functions are constructed using the Schur product of single and multi-level covariance functions. A multi-level power-law covariance used to reproduce the QBO in GEOS-4 is described along with details of the assimilation experiments. The new covariance model is shown to represent the vertical wind shear associated with the QBO much more effectively than in the baseline GEOS-4 system.
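Among the single-level building blocks these multi-level models extend, a classic concrete example is the fifth-order piecewise rational, compactly supported correlation function of Gaspari and Cohn (1999). The sketch below implements that single-level function only, not the multi-level extension developed in the article:

```python
def gaspari_cohn(z, c):
    """Fifth-order piecewise rational compactly supported correlation
    function (Gaspari & Cohn 1999). c sets the length scale; the
    correlation cuts off exactly at separation 2c."""
    r = abs(z) / c
    if r <= 1.0:
        # -r^5/4 + r^4/2 + 5 r^3/8 - 5 r^2/3 + 1, in Horner form
        return (((-0.25 * r + 0.5) * r + 0.625) * r - 5.0 / 3.0) * r ** 2 + 1.0
    if r <= 2.0:
        # r^5/12 - r^4/2 + 5 r^3/8 + 5 r^2/3 - 5 r + 4 - 2/(3 r)
        return ((((r / 12.0 - 0.5) * r + 0.625) * r + 5.0 / 3.0) * r - 5.0) * r \
               + 4.0 - 2.0 / (3.0 * r)
    return 0.0

print(gaspari_cohn(0.0, 1.0))             # 1.0 at zero separation
print(round(gaspari_cohn(1.0, 1.0), 4))   # the two branches meet continuously
print(gaspari_cohn(2.5, 1.0))             # 0.0 beyond the support
```

Compact support is what makes such functions attractive in assimilation systems: the resulting covariance matrices are sparse by construction.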
Enhancing Teaching using MATLAB Add-Ins for Excel
ERIC Educational Resources Information Center
Hamilton, Paul V.
2004-01-01
In this paper I will illustrate how to extend the capabilities of Microsoft Excel spreadsheets with add-ins created by MATLAB. Excel provides a broad array of fundamental tools but often comes up short when more sophisticated scenarios are involved. To overcome this shortcoming of Excel while retaining its ease of use, I will describe how…
Four Simple Ways to Add Movement in Daily Lessons
ERIC Educational Resources Information Center
Helgeson, John
2011-01-01
Adding movement to classroom activities not only engages students, but also may improve the classroom climate and reduce disruptions. In this article, the author discusses four simple activities to add movement in daily lessons. These activities are: (1) Vocabulary/Notes around the Room; (2) Cooperative Learning: Posting Task Assignments; (3)…
Accommodating College Students with Learning Disabilities: ADD, ADHD, and Dyslexia
ERIC Educational Resources Information Center
Vickers, Melana Zyla
2010-01-01
Universities are providing extra time on tests, quiet exam rooms, in-class note-takers, and other assistance to college students with modest learning disabilities. But these policies are shrouded in secrecy. This paper, "Accommodating College Students with Learning Disabilities: ADD, ADHD, and Dyslexia," by Melana Zyla Vickers, examines…
Medicalised Pupils: The Case of ADD/ADHD
ERIC Educational Resources Information Center
Kristjansson, Kristjan
2009-01-01
Recent decades have seen an increasing number of life's problems conceptualised and interpreted through the prism of disease; among them are those affecting pupils at school. Witness the cases of hyperactivity and deficient attention, so often diagnosed as ADD/ADHD. Research indicates that there is at least some tendency towards overdiagnosis of…
Water Softeners: How Much Sodium Do They Add?
... healthy eating I'm trying to watch the sodium in my diet. Should I be concerned about sodium from water softeners? Answers from Sheldon G. Sheps, M.D. Regular tap water contains very little sodium. The amount of sodium a water softener adds ...
Face equipment lighting: integrated vs. add-on
Scott, F.E.
1982-10-01
The problems of providing lighting on face equipment are examined. In the US, some equipment manufacturers are building-in lighting systems to their machinery; others will install lighting systems as add-on or retrofitted items. The pros and cons of each method are examined and the views of manufacturers are quoted.
Mode-routed fiber-optic add-drop filter
NASA Technical Reports Server (NTRS)
Moslehi, Behzad (Inventor); Black, Richard James (Inventor); Shaw, Herbert John (Inventor)
2000-01-01
New elements, a mode-converting two-mode grating and a mode-filtering two-mode coupler, are disclosed and used as elements in a system for communications, add-drop filtering, and strain sensing. Methods of fabricating these new two-mode gratings and mode-filtering two-mode couplers are also disclosed.
ADD and ADHD: An Overview for School Counselors. ERIC Digest.
ERIC Educational Resources Information Center
Pledge, Deanna S.
School counselors are often consultants for parents and teachers on problems that children and adolescents face. Attention deficit disorder (ADD) is one such problem. It is frequently misunderstood, presenting a challenge for parents and teachers alike. The counselor is a resource for initial identification and interventions at home and in the…
Serving Students Diagnosed with ADD: Avoiding Deficits in Professional Attention.
ERIC Educational Resources Information Center
Stoner, Gary; Carey, Sean P.
1992-01-01
Responds to previous article (Hakola, this issue) on legal rights of students with Attention Deficit Disorder (ADD). Presents contrasting perspective on educational services for children diagnosed with Attention Deficit Hyperactivity Disorder, linked more closely to professional research and practice than to law. Concerns discussed are grounded in…
Meeting Learning Challenges: Working with the Child Who Has ADD
ERIC Educational Resources Information Center
Greenspan, Stanley I.
2006-01-01
The terms ADD (Attention Deficit Disorder) and ADHD (Attention Deficit Hyperactivity Disorder) are applied to several symptoms, including: difficulty in paying attention, distractibility, having a hard time following through on things, and sometimes over-activity and impulsivity. There are many different reasons why children have these symptoms.…
Lorentz Covariant Distributions with Spectral Conditions
Zinoviev, Yury M.
2007-11-14
The properties of the vacuum expectation values of products of quantum fields are formulated in the book [1]. The vacuum expectation values of products of quantum fields would be the Fourier transforms of Lorentz covariant tempered distributions with supports in the product of the closed upper light cones. Lorentz invariant distributions are studied in the papers [2]-[4]. The authors of these papers sought to describe Lorentz invariant distributions in terms of distributions given on the Lorentz group orbit space. This orbit space has a complicated structure. It is noted in [5] that a tempered distribution with support in the closed upper light cone may be represented as the action of some power of the wave operator on a differentiable function with support in the closed upper light cone. For the description of the Lorentz covariant differentiable functions the boundary of the closed upper light cone is not important: the measure of this boundary is zero.
RNA sequence analysis using covariance models.
Eddy, S R; Durbin, R
1994-01-01
We describe a general approach to several RNA sequence analysis problems using probabilistic models that flexibly describe the secondary structure and primary sequence consensus of an RNA sequence family. We call these models 'covariance models'. A covariance model of tRNA sequences is an extremely sensitive and discriminative tool for searching for additional tRNAs and tRNA-related sequences in sequence databases. A model can be built automatically from an existing sequence alignment. We also describe an algorithm for learning a model and hence a consensus secondary structure from initially unaligned example sequences and no prior structural information. Models trained on unaligned tRNA examples correctly predict tRNA secondary structure and produce high-quality multiple alignments. The approach may be applied to any family of small RNA sequences. PMID:8029015
Chiral four-dimensional heterotic covariant lattices
NASA Astrophysics Data System (ADS)
Beye, Florian
2014-11-01
In the covariant lattice formalism, chiral four-dimensional heterotic string vacua are obtained from certain even self-dual lattices which completely decompose into a left-mover and a right-mover lattice. The main purpose of this work is to classify all right-mover lattices that can appear in such a chiral model, and to study the corresponding left-mover lattices using the theory of lattice genera. In particular, the Smith-Minkowski-Siegel mass formula is employed to calculate a lower bound on the number of left-mover lattices. Also, the known relationship between asymmetric orbifolds and covariant lattices is considered in the context of our classification.
On covariance structure in noisy, big data
NASA Astrophysics Data System (ADS)
Paffenroth, Randy C.; Nong, Ryan; Du Toit, Philip C.
2013-09-01
Herein we describe theory and algorithms for detecting covariance structures in large, noisy data sets. Our work uses ideas from matrix completion and robust principal component analysis to detect the presence of low-rank covariance matrices, even when the data is noisy, distorted by large corruptions, and only partially observed. In fact, the ability to handle partial observations combined with ideas from randomized algorithms for matrix decomposition enables us to produce asymptotically fast algorithms. Herein we will provide numerical demonstrations of the methods and their convergence properties. While such methods have applicability to many problems, including mathematical finance, crime analysis, and other large-scale sensor fusion problems, our inspiration arises from applying these methods in the context of cyber network intrusion detection.
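As a minimal stand-in for the machinery described above, a truncated eigendecomposition already recovers a low-rank covariance from a noisy observation; the paper's methods replace this step with matrix completion and robust principal component analysis so that gross corruptions and missing entries can also be handled. A sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rank = 50, 3

# Ground truth: a rank-3 covariance observed through additive noise.
F = rng.normal(size=(n, rank))
C_true = F @ F.T
C_obs = C_true + 0.05 * rng.normal(size=(n, n))
C_obs = (C_obs + C_obs.T) / 2          # symmetrize the noisy observation

# Keep only the dominant eigenpairs: the low-rank structure survives
# the noise because the signal eigenvalues dwarf the noise level.
w, V = np.linalg.eigh(C_obs)
idx = np.argsort(w)[::-1][:rank]
C_hat = (V[:, idx] * w[idx]) @ V[:, idx].T

rel_err = np.linalg.norm(C_hat - C_true) / np.linalg.norm(C_true)
print(rel_err < 0.1)  # True
```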
Low-Fidelity Covariances: Neutron Cross Section Covariance Estimates for 387 Materials
The Low-fidelity Covariance Project (Low-Fi) was funded in FY07-08 by DOE's Nuclear Criticality Safety Program (NCSP). The project was a collaboration among ANL, BNL, LANL, and ORNL. The motivation for the Low-Fi project stemmed from an imbalance in supply and demand of covariance data. The interest in, and demand for, covariance data has been in a continual uptrend over the past few years. Requirements to understand application-dependent uncertainties in simulated quantities of interest have led to the development of sensitivity / uncertainty and data adjustment software such as TSUNAMI [1] at Oak Ridge. To take full advantage of the capabilities of TSUNAMI requires general availability of covariance data. However, the supply of covariance data has not been able to keep up with the demand. This fact is highlighted by the observation that the recent release of the much-heralded ENDF/B-VII.0 included covariance data for only 26 of the 393 neutron evaluations (which is, in fact, considerably less covariance data than was included in the final ENDF/B-VI release). [Copied from R.C. Little et al., "Low-Fidelity Covariance Project", Nuclear Data Sheets 109 (2008) 2828-2833] The Low-Fi covariance data are now available at the National Nuclear Data Center. They are separate from ENDF/B-VII.0 and the NNDC warns that this information is not approved by CSEWG. NNDC describes the contents of this collection as: "Covariance data are provided for radiative capture (or (n,ch.p.) for light nuclei), elastic scattering (or total for some actinides), inelastic scattering, (n,2n) reactions, fission and nubars over the energy range from 10⁻⁵ eV to 20 MeV. The library contains 387 files including almost all (383 out of 393) materials of the ENDF/B-VII.0. Absent are data for ⁷Li, ²³²Th, ²³³,²³⁵,²³⁸U and ²³⁹Pu as well as ²²³,²²⁴,²²⁵,²²⁶Ra, while ⁿᵃᵗZn is replaced by ⁶⁴,⁶⁶,⁶⁷,⁶⁸,⁷⁰Zn
Torsion and geometrostasis in covariant superstrings
Zachos, C.
1985-01-01
The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs.
Discrete symmetries in covariant loop quantum gravity
NASA Astrophysics Data System (ADS)
Rovelli, Carlo; Wilson-Ewing, Edward
2012-09-01
We study time-reversal and parity—on the physical manifold and in internal space—in covariant loop gravity. We consider a minor modification of the Holst action which makes it transform coherently under such transformations. The classical theory is not affected but the quantum theory is slightly different. In particular, the simplicity constraints are slightly modified and this restricts orientation flips in a spin foam to occur only across degenerate regions, thus reducing the sources of potential divergences.
Covariance expressions for eigenvalue and eigenvector problems
NASA Astrophysics Data System (ADS)
Liounis, Andrew J.
There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue-eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
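For the special case of a symmetric matrix with a simple eigenvalue, the eigenvalue Jacobian has the closed form ∂λ/∂A_ij = v_i v_j, where v is the unit eigenvector; this can be checked against forward finite differences exactly as the thesis describes for the general case. A sketch of that validation (the general expressions use distinct left and right eigenvectors):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.normal(size=(n, n))
A = (A + A.T) / 2                      # symmetric: left = right eigenvectors

w, V = np.linalg.eigh(A)
k = n - 1                              # track the largest (simple) eigenvalue
v = V[:, k]

# Analytic Jacobian of a simple eigenvalue of a symmetric matrix with
# respect to a single-entry perturbation: d lambda / dA_ij = v_i v_j.
J_analytic = np.outer(v, v)

# Forward finite differences on each entry, for validation.
h = 1e-6
J_fd = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        Ap = A.copy()
        Ap[i, j] += h                  # perturbed matrix may be nonsymmetric
        lam_p = np.linalg.eigvals(Ap)
        lam_k = lam_p[np.argmin(np.abs(lam_p - w[k]))].real
        J_fd[i, j] = (lam_k - w[k]) / h

print(np.max(np.abs(J_fd - J_analytic)) < 1e-4)  # True
```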
Linear Covariance Analysis for a Lunar Lander
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; Bhatt, Sagar; Fritz, Matthew; Woffinden, David; May, Darryl; Braden, Ellen; Hannan, Michael
2017-01-01
A next-generation lunar lander Guidance, Navigation, and Control (GNC) system, which includes a state-of-the-art optical sensor suite, is proposed in a concept design cycle. The design goal is to allow the lander to softly land within the prescribed landing precision. The achievement of this precision landing requirement depends on proper selection of the sensor suite. In this paper, a robust sensor selection procedure is demonstrated using a Linear Covariance (LinCov) analysis tool developed by Draper.
Inverse covariance simplification for efficient uncertainty management
NASA Astrophysics Data System (ADS)
Jalobeanu, A.; Gutiérrez, J. A.
2007-11-01
When it comes to manipulating uncertain knowledge such as noisy observations of physical quantities, one may ask how to do it in a simple way. Processing corrupted signals or images always propagates the uncertainties from the data to the final results, whether these errors are explicitly computed or not. When such error estimates are provided, it is crucial to handle them in such a way that their interpretation, or their use in subsequent processing steps, remain user-friendly and computationally tractable. A few authors follow a Bayesian approach and provide uncertainties as an inverse covariance matrix. Despite its apparent sparsity, this matrix contains many small terms that carry little information. Methods have been developed to select the most significant entries, through the use of information-theoretic tools for instance. One has to find a Gaussian pdf that is close enough to the posterior pdf, and with a small number of non-zero coefficients in the inverse covariance matrix. We propose to restrict the search space to Markovian models (where only neighbors can interact), well-suited to signals or images. The originality of our approach is in conserving the covariances between neighbors while setting to zero the entries of the inverse covariance matrix for all other variables. This fully constrains the solution, and the computation is performed via a fast, alternate minimization scheme involving quadratic forms. The Markovian structure advantageously reduces the complexity of Bayesian updating (where the simplified pdf is used as a prior). Moreover, uncertainties exhibit the same temporal or spatial structure as the data.
Covariant quantization of the CBS superparticle
NASA Astrophysics Data System (ADS)
Grassi, P. A.; Policastro, G.; Porrati, M.
2001-07-01
The quantization of the Casalbuoni-Brink-Schwarz superparticle is performed in an explicitly covariant way using the antibracket formalism. Since an infinite number of ghost fields are required, within a suitable off-shell twistor-like formalism, we are able to fix the gauge of each ghost sector without modifying the physical content of the theory. The computation reveals that the antibracket cohomology contains only the physical degrees of freedom.
Twisted covariant noncommutative self-dual gravity
Estrada-Jimenez, S.; Garcia-Compean, H.; Obregon, O.; Ramirez, C.
2008-12-15
A twisted covariant formulation of noncommutative self-dual gravity is presented. The formulation for constructing twisted noncommutative Yang-Mills theories is used. It is shown that the noncommutative torsion is solved at any order of the θ expansion in terms of the tetrad and some extra fields of the theory. In the process the first order expansion in θ for the Plebanski action is explicitly obtained.
Potential of 13 linked autosomal short tandem repeat loci in pairwise kinship analysis.
Liu, Qiu-Ling; Xue, Li; Wu, Wei-Wei; He, Xin; Liu, Kai-Yan; Zhao, Hu; Lu, De-Jian
2016-10-01
In this study, a panel of 13 STR loci located on chromosomes 3, 4, and 17 (D3S2402, D3S2452, D3S1766, D3S4554, D3S2388, D3S3051, D3S3053, D4S2404, D4S2364, AC001348A, AC001348B, D17S975, and D17S1294) was assessed for pairwise kinship analysis. Map distances between these STR loci ranged from 0.07 cM to 97.03 cM. A population genetic study of a Chinese Han population showed that linkage disequilibrium exists in two clusters of closely linked markers (D4S2404-D4S2364 and D17S975-D17S1294), in which the recombination fractions were 0.0026 and 0.0001, respectively. The recombination fractions derived from the Rutgers Map for the closely linked markers (genetic distance < 0.5 cM) were significant underestimates in comparison with those from direct observation of STR transmissions in families. When the effect of linkage on pairwise kinship testing was evaluated by comparing likelihood ratio (LR) values that take linkage into account, overall LR values increased, but extremely low LRs were also observed. Finally, the power of the 13 STR loci to discriminate relationships among full-sibs, half-sibs, grandparent-grandchild, uncle-niece, and unrelated pairs was assessed with a category fraction. The results showed that about 72.64% of full-sib pairs and about 82.84% of unrelated pairs could be classified correctly, but the category fractions of second-degree relationships dropped drastically, to 7.34-35.48%. If only pairs of grandparent-grandchild, half-sibs, and uncle-niece were distinguished, the category fractions were 0.5512, 0.1147, and 0.4362, respectively. Our results demonstrate that linked STRs are helpful for differentiating the most frequent relationships in pairwise kinship analysis.
Development of covariance capabilities in EMPIRE code
Herman, M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.
2008-06-24
The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
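The claim that deterministic and stochastic propagation of parameter uncertainties yield comparable results can be checked on a toy model. The sketch below assumes a made-up two-parameter linear observable, not EMPIRE's physics: the deterministic route propagates the parameter covariance P through the Jacobian S as C = S P Sᵀ, and the Monte Carlo route samples parameters and takes the sample covariance of the outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: observable y = f(p) with two model parameters p (assumed example).
def f(p):
    return np.array([p[0] + 2.0 * p[1], 3.0 * p[0] - p[1]])

p0 = np.array([1.0, 0.5])
P = np.array([[0.04, 0.01], [0.01, 0.09]])   # parameter covariance

# Deterministic ("Kalman-style") propagation: C = S P S^T
S = np.array([[1.0, 2.0], [3.0, -1.0]])      # Jacobian of f (exact for this linear f)
C_det = S @ P @ S.T

# Stochastic (Monte Carlo) propagation of the same parameter uncertainties.
samples = rng.multivariate_normal(p0, P, size=200_000)
C_mc = np.cov(np.apply_along_axis(f, 1, samples).T)

print(np.max(np.abs(C_det - C_mc)))          # small: the two approaches agree
```

For a linear model the agreement is exact up to sampling noise; for nonlinear reaction models the two can diverge, which is part of what the comparison in the abstract probes.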
Using Covariance Analysis to Assess Pointing Performance
NASA Technical Reports Server (NTRS)
Bayard, David; Kang, Bryan
2009-01-01
A Pointing Covariance Analysis Tool (PCAT) has been developed for evaluating the expected performance of the pointing control system for NASA's Space Interferometry Mission (SIM). The SIM pointing control system is very complex, consisting of multiple feedback and feedforward loops, and operating with multiple latencies and data rates. The SIM pointing problem is particularly challenging due to the effects of thermomechanical drifts in concert with the long camera exposures needed to image dim stars. Other pointing error sources include sensor noises, mechanical vibrations, and errors in the feedforward signals. PCAT models the effects of finite camera exposures and all other error sources using linear system elements. This allows the pointing analysis to be performed using linear covariance analysis. PCAT propagates the error covariance using a Lyapunov equation associated with time-varying discrete and continuous-time system matrices. Unlike Monte Carlo analysis, which could involve thousands of computational runs for a single assessment, the PCAT analysis performs the same assessment in a single run. This capability facilitates the analysis of parametric studies, design trades, and "what-if" scenarios for quickly evaluating and optimizing the control system architecture and design.
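Propagating an error covariance through a Lyapunov recursion, as PCAT does, can be sketched for a toy discrete-time system (the matrices here are illustrative, not SIM's): iterating P ← A P Aᵀ + Q drives the covariance to the solution of the algebraic discrete Lyapunov equation, which a single linear solve also yields — the "single run instead of thousands of Monte Carlo runs" point.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Stable discrete-time system x_{k+1} = A x_k + w_k, with w ~ N(0, Q).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
Q = np.array([[0.02, 0.0], [0.0, 0.05]])

# Covariance propagation via the discrete Lyapunov recursion.
P = np.zeros((2, 2))
for _ in range(500):
    P = A @ P @ A.T + Q

# The steady state matches the algebraic discrete Lyapunov solution.
P_inf = solve_discrete_lyapunov(A, Q)
print(np.max(np.abs(P - P_inf)))   # ~0
```

A Monte Carlo estimate of the same covariance would require simulating many noise realizations; the Lyapunov route gets it deterministically.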
Shrinkage covariance matrix approach for microarray data
NASA Astrophysics Data System (ADS)
Karjanto, Suryaefiza; Aripin, Rasimah
2013-04-01
Microarray technology was developed for the purpose of monitoring the expression levels of thousands of genes. A microarray data set typically consists of tens of thousands of genes (variables) measured on just dozens of samples, due to various constraints including the high cost of producing microarray chips. As a result, the widely used standard covariance estimator is not appropriate for this setting. A case in point is Hotelling's T2 statistic, a multivariate test statistic for comparing means between two groups: it requires that the number of observations (n) exceed the number of genes (p), but in microarray studies it is common that n < p, which leads to a biased, singular estimate of the covariance matrix. In this study, Hotelling's T2 statistic with a shrinkage approach is proposed to estimate the covariance matrix for testing differential gene expression. The performance of this approach is then compared with other commonly used multivariate tests, using a widely analysed diabetes data set as an illustration. The results across the methods are consistent, implying that this approach provides an alternative to existing techniques.
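A minimal sketch of the shrinkage idea, with a fixed shrinkage intensity assumed for illustration (practical estimators such as Ledoit-Wolf choose it from the data): shrinking the pooled covariance toward its diagonal makes it invertible even when p > n, so Hotelling's T2 remains well-defined.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, p = 10, 10, 50                        # n < p, as in microarray data
X1 = rng.normal(size=(n1, p))
X2 = rng.normal(loc=0.5, size=(n2, p))

Xc = np.vstack([X1 - X1.mean(0), X2 - X2.mean(0)])
S = Xc.T @ Xc / (n1 + n2 - 2)                 # pooled covariance: singular, rank < p

lam = 0.3                                     # shrinkage intensity (fixed here, an assumption)
S_shrunk = (1 - lam) * S + lam * np.diag(np.diag(S))   # shrink toward diagonal target

d = X1.mean(0) - X2.mean(0)
T2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S_shrunk, d)
print(T2 > 0)   # well-defined despite p > n; plain S would be singular
```

With the raw pooled S the solve would fail (rank at most n1 + n2 − 2 < p); the convex combination with the positive-definite diagonal target restores invertibility.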
All covariance controllers for linear discrete-time systems
NASA Technical Reports Server (NTRS)
Hsieh, Chen; Skelton, Robert E.
1990-01-01
The set of covariances that a linear discrete-time plant with a specified-order controller can have is characterized. The controllers that assign such covariances to any linear discrete-time system are given explicitly in closed form. The freedom in these covariance controllers is explicit and is parameterized by two orthogonal matrices. By appropriately choosing these free parameters, additional system objectives can be achieved without altering the state covariance, and the stability of the closed-loop system is guaranteed.
Factorization of the Discrete Noise Covariance Matrix for PLANS
1991-02-01
This report presents the exact formulation of the covariance matrix Qk needed to propagate the covariance matrix of the Kalman filter ... by approximation, the decomposition needed to use the Bierman-Agee-Turner formulation of the Kalman filter. This approximate decomposition is ... form of the discrete driving noise covariance matrix Qk which is needed to propagate the covariance matrix in the Kalman filter used by PLANS. It is
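The report's exact formulation is not reproduced in the snippet above, but a standard way to obtain the discrete driving-noise covariance Qk from a continuous-time model is Van Loan's matrix-exponential method, sketched here for an assumed constant-velocity model (not PLANS itself):

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time model x' = A x + w, with E[w w^T] = Qc * delta(t).
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # constant-velocity kinematics (assumed example)
Qc = np.array([[0.0, 0.0], [0.0, 1.0]])  # white noise on acceleration, unit intensity
T = 0.5                                  # sample period

# Van Loan's method: exponentiate one block matrix to get Phi and Qk together.
M = np.block([[-A, Qc], [np.zeros((2, 2)), A.T]]) * T
E = expm(M)
Phi = E[2:, 2:].T                        # state transition matrix e^{AT}
Qk = Phi @ E[:2, 2:]                     # discrete driving-noise covariance

# Closed form for this model: Qk = [[T^3/3, T^2/2], [T^2/2, T]]
Qk_exact = np.array([[T**3 / 3, T**2 / 2], [T**2 / 2, T]])
print(np.max(np.abs(Qk - Qk_exact)))     # ~0
```

Once Qk is in hand, a Cholesky or UD factorization of it is what square-root filter formulations such as Bierman's consume.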
Hidden Covariation Detection Produces Faster, Not Slower, Social Judgments
ERIC Educational Resources Information Center
Barker, Lynne A.; Andrade, Jackie
2006-01-01
In P. Lewicki's (1986b) demonstration of hidden covariation detection (HCD), responses of participants were slower to faces that corresponded with a covariation encountered previously than to faces with novel covariations. This slowing contrasts with the typical finding that priming leads to faster responding and suggests that HCD is a unique type…
Earth Observation System Flight Dynamics System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
Covariate Selection in Propensity Scores Using Outcome Proxies
ERIC Educational Resources Information Center
Kelcey, Ben
2011-01-01
This study examined the practical problem of covariate selection in propensity scores (PSs) given a predetermined set of covariates. Because the bias reduction capacity of a confounding covariate is proportional to the concurrent relationships it has with the outcome and treatment, particular focus is set on how we might approximate…
Covariate Imbalance and Precision in Measuring Treatment Effects
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven
2011-01-01
Covariate adjustment can increase the precision of estimates by removing unexplained variance from the error in randomized experiments, although chance covariate imbalance tends to counteract the improvement in precision. The author develops an easy measure to examine chance covariate imbalance in randomization by standardizing the average…
a General Transformation to Canonical Form for Potentials in Pairwise Intermolecular Interactions
NASA Astrophysics Data System (ADS)
Walton, Jay R.; Rivera-Rivera, Luis A.; Lucchese, Robert R.; Bevan, John W.
2015-06-01
A generalized formulation of explicit transformations is introduced to investigate the concept of a canonical potential in both fundamental chemical and intermolecular bonding. Different classes of representative ground electronic state pairwise interatomic interactions are referenced to a single canonical potential illustrating application of explicit transformations. Specifically, accurately determined potentials of the diatomic molecules H_2, H_2^+, HF, LiH, argon dimer, and one-dimensional dissociative coordinates in Ar-HBr, OC-HF, and OC-Cl_2 are investigated throughout their bound potentials. The advantages of the current formulation for accurately evaluating equilibrium dissociation energies, and a fundamentally different unified perspective on the nature of intermolecular interactions, will be emphasized. In particular, this canonical approach has relevance to previous assertions that there is no very fundamental distinction between van der Waals bonding and covalent bonding or, for that matter, hydrogen and halogen bonds.
On the sufficiency of pairwise interactions in maximum entropy models of networks
NASA Astrophysics Data System (ADS)
Nemenman, Ilya; Merchan, Lina
Biological information processing networks consist of many components, which are coupled by an even larger number of complex multivariate interactions. However, analyses of data sets from fields as diverse as neuroscience, molecular biology, and behavior have reported that observed statistics of states of some biological networks can be approximated well by maximum entropy models with only pairwise interactions among the components. Based on simulations of random Ising spin networks with p-spin (p > 2) interactions, here we argue that this reduction in complexity can be thought of as a natural property of some densely interacting networks in certain regimes, and not necessarily as a special property of living systems. This work was supported in part by James S. McDonnell Foundation Grant No. 220020321.
Matrix multiplication operations using pair-wise load and splat operations
Eichenberger, Alexandre E.; Gschwind, Michael K.; Gunnels, John A.; Salapura, Valentina
2017-03-21
Mechanisms for performing a matrix multiplication operation are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation to a first target vector register. A pair-wise load and splat operation is performed to load a pair of scalar values of a second vector operand and replicate the pair of scalar values within a second target vector register. An operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product is accumulated with other partial products and a resulting accumulated partial product is stored. This operation may be repeated for a second pair of scalar values of the second vector operand.
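The data movement described above can be emulated in scalar Python code. This is a sketch of the scheme's structure only, not actual SIMD instructions: for each output column, vectors of the first operand are "register"-loaded, pairs of scalars of the second operand are loaded together and each is splatted (replicated) across a register-width array, and the elementwise products are accumulated as partial products.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))

# Emulate the scheme for C = A @ B with a "register" width of 4.
C = np.zeros((4, 4))
for j in range(4):                 # one output column of C at a time
    for k in range(0, 4, 2):      # walk B's column j in pairs of scalars
        b_pair = B[k:k + 2, j]    # pair-wise load of two scalar values
        for t, bk in enumerate(b_pair):
            splat = np.full(4, bk)            # replicate the scalar across the register
            C[:, j] += A[:, k + t] * splat    # partial product, accumulated

print(np.allclose(C, A @ B))       # the accumulated partial products give A @ B
```

The point of the pair-wise load-and-splat in hardware is to feed two such splat-multiply-accumulate steps from a single memory access; the emulation only shows that the arithmetic decomposition is correct.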
Hou, Fujun
2016-01-01
This paper describes how market competitiveness evaluations concerning mechanical equipment can be made in the context of multi-criteria decision environments. It is assumed that, when evaluating market competitiveness, there is a limited number of candidates with certain required qualifications, and that the alternatives are compared pairwise on a ratio scale. The qualifications are depicted as criteria in a hierarchical structure. A hierarchical decision model called PCbHDM was used in this study based on an analysis of its desirable traits. Illustration and comparison show that PCbHDM provides a convenient and effective tool for evaluating the market competitiveness of mechanical equipment. Researchers and practitioners might use the findings of this paper in applications of PCbHDM. PMID: 26783751
Shaw, David E
2005-10-01
Classical molecular dynamics simulations of biological macromolecules in explicitly modeled solvent typically require the evaluation of interactions between all pairs of atoms separated by no more than some distance R, with more distant interactions handled using some less expensive method. Performing such simulations for periods on the order of a millisecond is likely to require the use of massive parallelism. The extent to which such simulations can be efficiently parallelized, however, has historically been limited by the time required for interprocessor communication. This article introduces a new method for the parallel evaluation of distance-limited pairwise particle interactions that significantly reduces the amount of data transferred between processors by comparison with traditional methods. Specifically, the amount of data transferred into and out of a given processor scales as O(R^(3/2) p^(-1/2)), where p is the number of processors, and with constant factors that should yield a substantial performance advantage in practice.
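The serial computation being parallelized — summing a pairwise interaction only over pairs within a cutoff R, under periodic boundary conditions — can be sketched as follows. This brute-force version is an illustration of the problem statement only; the article's contribution is the parallel decomposition that reduces communication, which is not shown here. The 1/r interaction is an assumed stand-in for a real force field.

```python
import numpy as np

rng = np.random.default_rng(3)
N, R, L = 200, 2.5, 10.0
pos = rng.uniform(0.0, L, size=(N, 3))     # particles in a periodic cubic box

# Distance-limited pairwise energy: sum 1/r over pairs with r <= R only.
d = pos[:, None, :] - pos[None, :, :]
d -= L * np.round(d / L)                   # minimum-image convention
r = np.sqrt((d ** 2).sum(-1))
mask = (r > 0) & (r <= R)                  # exclude self-pairs and distant pairs
energy = 0.5 * (1.0 / r[mask]).sum()       # 0.5 because each pair appears twice
print(energy > 0)
```

In production codes the O(N^2) distance matrix is replaced by cell lists or neighbor lists; the communication-volume scaling quoted in the abstract concerns how the cutoff-R import region is shaped across processors, not this serial kernel.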
Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models
Stein, Richard R.; Marks, Debora S.; Sander, Chris
2015-01-01
Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene–gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design. PMID:26225866
On the Sufficiency of Pairwise Interactions in Maximum Entropy Models of Networks
NASA Astrophysics Data System (ADS)
Merchan, Lina; Nemenman, Ilya
2016-03-01
Biological information processing networks consist of many components, which are coupled by an even larger number of complex multivariate interactions. However, analyses of data sets from fields as diverse as neuroscience, molecular biology, and behavior have reported that observed statistics of states of some biological networks can be approximated well by maximum entropy models with only pairwise interactions among the components. Based on simulations of random Ising spin networks with p-spin (p>2) interactions, here we argue that this reduction in complexity can be thought of as a natural property of densely interacting networks in certain regimes, and not necessarily as a special property of living systems. By connecting our analysis to the theory of random constraint satisfaction problems, we suggest a reason for why some biological systems may operate in this regime.
General parity between trio and pairwise breeding of laboratory mice in static caging.
Kedl, Ross M; Wysocki, Lawrence J; Janssen, William J; Born, Willi K; Rosenbaum, Matthew D; Granowski, Julia; Kench, Jennifer A; Fong, Derek L; Switzer, Lisa A; Cruse, Margaret; Huang, Hua; Jakubzick, Claudia V; Kosmider, Beata; Takeda, Katsuyuki; Stranova, Thomas J; Klumm, Randal C; Delgado, Christine; Tummala, Saigiridhar; De Langhe, Stijn; Cambier, John; Haskins, Katherine; Lenz, Laurel L; Curran-Everett, Douglas
2014-11-15
Changes made in the 8th edition of the Guide for the Care and Use of Laboratory Animals included new recommendations for the amount of space for breeding female mice. Adopting the new recommendations required, in essence, the elimination of trio breeding practices at all institutions. Neither public opinion nor published data readily supported the new recommendations. In response, the National Jewish Health Institutional Animal Care and Use Committee established a program to directly compare the effects of breeding format on mouse pup survival and growth. Our study showed an overall parity between trio and pairwise breeding formats in the survival and growth of the litters, suggesting that the housing recommendations for breeding female mice as stated in the current Guide for the Care and Use of Laboratory Animals should be reconsidered.
PAirwise Sequence Comparison (PASC) and its application in the classification of filoviruses.
Bao, Yiming; Chetvernin, Vyacheslav; Tatusova, Tatiana
2012-08-01
PAirwise Sequence Comparison (PASC) is a tool that uses genome sequence similarity to help with virus classification. The PASC tool at NCBI uses two methods: local alignment based on BLAST, and global alignment based on the Needleman-Wunsch algorithm. It works for complete genomes of viruses of several families/groups; for the family Filoviridae, it currently includes 52 complete genomes available in GenBank. It has been shown that the BLAST-based alignment approach works better for filoviruses, and it is therefore recommended for establishing taxon demarcation criteria. When more genome sequences with high divergence become available, these demarcations will most likely become more precise. The tool can compare new genome sequences of filoviruses with the ones already in the database and propose their taxonomic classification.
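The global-alignment route mentioned above can be sketched with a toy Needleman-Wunsch scorer that returns percent identity over the alignment. The unit match/mismatch/gap scores are assumptions for illustration; PASC's actual parameters and its BLAST-based route are not reproduced.

```python
import numpy as np

def nw_identity(a, b, match=1, mismatch=-1, gap=-1):
    """Global (Needleman-Wunsch) alignment; returns percent identity."""
    n, m = len(a), len(b)
    H = np.zeros((n + 1, m + 1))
    H[:, 0] = gap * np.arange(n + 1)        # leading gaps in b
    H[0, :] = gap * np.arange(m + 1)        # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(H[i - 1, j - 1] + s, H[i - 1, j] + gap, H[i, j - 1] + gap)
    # Traceback, counting identical positions over all alignment columns.
    i, j, ident, cols = n, m, 0, 0
    while i > 0 or j > 0:
        s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and H[i, j] == H[i - 1, j - 1] + s:
            ident += a[i - 1] == b[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and H[i, j] == H[i - 1, j] + gap:
            i -= 1
        else:
            j -= 1
        cols += 1
    return 100.0 * ident / cols

print(nw_identity("GATTACA", "GATTACA"))   # 100.0
```

Tools like PASC turn such pairwise identities, computed over whole genomes, into distributions whose gaps suggest taxon demarcation thresholds.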
Parasail: SIMD C library for global, semi-global, and local pairwise sequence alignments
Daily, Jeffrey A.
2016-02-10
Sequence alignment algorithms are a key component of many bioinformatics applications. Though various fast Smith-Waterman local sequence alignment implementations have been developed for x86 CPUs, most are embedded into larger database search tools. In addition, fast implementations of Needleman-Wunsch global sequence alignment and its semi-global variants are not as widespread. This article presents the first software library for local, global, and semi-global pairwise intra-sequence alignments and improves the performance of previous intra-sequence implementations. As a result, a faster intra-sequence pairwise alignment implementation is described and benchmarked. Using a 375 residue query sequence, a speed of 136 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon E5-2670 12-core processor system, the highest reported for an implementation based on Farrar's 'striped' approach. When using only a single thread, parasail was 1.7 times faster than Rognes's SWIPE. For many score matrices, parasail is faster than BLAST. The software library is designed for 64 bit Linux, OS X, or Windows on processors with SSE2, SSE41, or AVX2. Source code is available from https://github.com/jeffdaily/parasail under the Battelle BSD-style license. In conclusion, applications that require optimal alignment scores could benefit from the improved performance. For the first time, SIMD global, semi-global, and local alignments are available in a stand-alone C library.
Feature-based pairwise retinal image registration by radial distortion correction
NASA Astrophysics Data System (ADS)
Lee, Sangyeol; Abràmoff, Michael D.; Reinhardt, Joseph M.
2007-03-01
Fundus camera imaging is widely used to document disorders such as diabetic retinopathy and macular degeneration. Multiple retinal images can be combined through a procedure known as mosaicing to form an image with a larger field of view. Mosaicing typically requires multiple pairwise registrations of partially overlapped images. We describe a new method for pairwise retinal image registration. The proposed method is unique in that the radial distortion due to image acquisition is corrected prior to the geometric transformation. Vessel lines are detected using the Hessian operator and are used as input features to the registration. Since the overlapping region is typically small in a retinal image pair, only a few correspondences are available, thus limiting the applicable model to an affine transform at best. To recover the distortion due to the curved surface of the retina and the lens optics, a combined approach of an affine model with a radial distortion correction is proposed. The parameters of the image acquisition and radial distortion models are estimated during an optimization step that uses Powell's method driven by the vessel line distance. Experimental results using 20 pairs of green channel images acquired from three subjects with a fundus camera confirmed that the affine model with distortion correction could register retinal image pairs to within 1.88 ± 0.35 pixels accuracy (mean ± standard deviation) assessed by vessel line error, which is 17% better than the affine-only approach. Because the proposed method needs only two correspondences, it can achieve good registration accuracy even in the case of small overlap between retinal image pairs.
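The radial-distortion component can be sketched with a first-order model and a fixed-point inversion. The model form and the constant k below are assumptions for illustration; in the paper this term is coupled with an affine transform and the parameters are found by Powell's method over vessel line distance.

```python
import numpy as np

def radial_distort(pts, k, center):
    """First-order radial distortion: p_d = c + (p_u - c) * (1 + k * r^2)."""
    v = pts - center
    r2 = (v ** 2).sum(-1, keepdims=True)
    return center + v * (1.0 + k * r2)

def radial_undistort(pts, k, center, iters=20):
    """Invert the model by fixed-point iteration (valid for small k * r^2)."""
    u = pts - center
    for _ in range(iters):
        r2 = (u ** 2).sum(-1, keepdims=True)
        u = (pts - center) / (1.0 + k * r2)
    return center + u

c = np.array([0.0, 0.0])
k = 1e-3                                  # assumed distortion coefficient
pts = np.array([[3.0, 4.0], [-2.0, 1.0]])
roundtrip = radial_undistort(radial_distort(pts, k, c), k, c)
print(np.max(np.abs(roundtrip - pts)))    # ~0: correction undoes the distortion
```

Undistorting both images first is what lets a mere two correspondences fix the remaining affine transform, as the abstract notes.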
Hu, Zhonghan
2014-12-09
We present a unified derivation of the Ewald sum for electrostatics in a three-dimensional infinite system that is periodic in one, two, or three dimensions. The derivation leads to the Ewald3D sum being expressed as a sum of a real space contribution and a reciprocal space contribution, as in previous work. However, the k → 0 term in the reciprocal space contribution is analyzed further and found to give an additional contribution that is not part of previous reciprocal space contributions. The transparent derivation provides a unified view of the existing conducting infinite boundary term, the vacuum spherical infinite boundary term, and the vacuum planar infinite boundary term for the Ewald3D sum. The derivation further explains that the infinite boundary term is conditional for the Ewald3D sum, because it depends on the asymptotic manner in which the system approaches the infinite limit in 3D, but it becomes a definite term for the Ewald2D or Ewald1D sum, irrespective of the asymptotic behavior in the reduced dimensions. Moreover, the unified derivation yields two formulas for the Ewald sum in one-dimensional periodicity, and we rigorously prove that the two formulas are equivalent. These formulas might be useful for simulations of organic crystals with wirelike shapes or liquids confined in uniform cylinders. More importantly, the Ewald3D, Ewald2D, and Ewald1D sums are further written as sums of well-defined pairwise potentials, overcoming the difficulty in splitting the total Coulomb potential energy into contributions from each individual group of charges. The pairwise interactions, with their clear physical meaning of the explicit presence of the periodic images, thus can be used to consistently perform analysis based on trajectories from computer simulations of bulk systems or interfaces.
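The split underlying any Ewald sum — a short-ranged real-space part plus a smooth part handled in reciprocal space — rests on the identity erfc(αr)/r + erf(αr)/r = 1/r, which is easy to check numerically. This shows only the splitting itself, not the full Ewald3D/2D/1D sums or the boundary terms discussed above.

```python
import numpy as np
from scipy.special import erf, erfc

alpha = 0.8                   # splitting parameter: affects convergence, not the total
r = np.linspace(0.5, 10.0, 200)

short = erfc(alpha * r) / r   # decays fast: summed directly over nearby pairs/images
smooth = erf(alpha * r) / r   # smooth everywhere: summed as a Fourier series

print(np.max(np.abs(short + smooth - 1.0 / r)))   # identity holds exactly
```

Because erfc(αr) falls off like a Gaussian, the real-space sum converges after a few images, while the smooth remainder has a rapidly decaying Fourier transform — the trade controlled by α.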
The pairwise velocity difference of over 2000 BHB stars in the Milky Way halo
NASA Astrophysics Data System (ADS)
Xue, Xiang-Xiang; Rix, Hans-Walter; Zhao, Gang
2009-11-01
Models of hierarchical galaxy formation predict that the extended stellar halos of galaxies like our Milky Way show a great deal of sub-structure, arising from disrupted satellites. Spatial sub-structure is directly observed, and has been quantified, in the Milky Way's stellar halo. Phase-space conservation implies that there should also be sub-structure in position-velocity space. Here, we aim to quantify such position-velocity sub-structure, using a state-of-the-art data set of over 2000 blue horizontal branch (BHB) stars with photometry and spectroscopy from SDSS. For stars in dynamically cold ("young") streams, we expect that pairs of objects that are physically close also have similar velocities. Therefore, we apply the well-established "pairwise velocity difference" (PVD) statistic <|ΔVlos|> (Δr), where we expect <|ΔVlos|> to drop for small separations Δr. We calculate the PVD for the SDSS BHB sample and find <|ΔVlos|> (Δr) approximately constant, i.e. no such signal. By making mock observations of the simulations by Bullock & Johnston and applying the same statistic, we show that for individual, dynamically young streams, or assemblages of such streams, <|ΔVlos|> drops for small distance separations Δr, as qualitatively expected. However, for a realistic complete set of halo streams, the pairwise velocity difference shows no signal, as the simulated halos are dominated by "dynamically old," phase-mixed streams. Our findings imply that the sparse sampling and the sample sizes in SDSS DR6 are still insufficient to use the position-velocity sub-structure for a stringent quantitative data-model comparison. Therefore, alternate statistics must be explored, and much more densely sampled surveys dedicated to the structure of the Milky Way, such as LAMOST, are needed.
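The PVD statistic <|ΔVlos|>(Δr) can be sketched on a synthetic phase-mixed sample, for which it should come out flat, matching what the abstract reports for the real halo. The mock data below are random draws, not SDSS BHB stars, and the units are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

def pairwise_velocity_difference(pos, vlos, bins):
    """Mean |delta V_los| over all star pairs, binned by 3D separation delta r."""
    d = np.sqrt(((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1))
    dv = np.abs(vlos[:, None] - vlos[None, :])
    iu = np.triu_indices(len(pos), k=1)          # each pair counted once
    which = np.digitize(d[iu], bins)
    return np.array([dv[iu][which == b].mean() if np.any(which == b) else np.nan
                     for b in range(1, len(bins))])

# Phase-mixed mock halo: positions and velocities are uncorrelated,
# so <|dV|>(dr) should be flat across separation bins.
pos = rng.normal(size=(500, 3))
vlos = rng.normal(size=500)
pvd = pairwise_velocity_difference(pos, vlos, bins=np.linspace(0.0, 2.0, 6))
print(pvd)   # roughly the same value in every bin
```

For a dynamically cold stream one would instead inject pairs that are close in both position and velocity, producing a dip in the first bins — the signal searched for (and not found) in the SDSS sample.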
Add Control: plant virtualization for control solutions in WWTP.
Maiza, M; Bengoechea, A; Grau, P; De Keyser, W; Nopens, I; Brockmann, D; Steyer, J P; Claeys, F; Urchegui, G; Fernández, O; Ayesa, E
2013-01-01
This paper summarizes part of the research work carried out in the Add Control project, which proposes an extension of the wastewater treatment plant (WWTP) models and modelling architectures used in traditional WWTP simulation tools, addressing, in addition to the classical mass transformations (transport, physico-chemical phenomena, biological reactions), all the instrumentation, actuation and automation & control components (sensors, actuators, controllers), considering their real behaviour (signal delays, noise, failures and power consumption of actuators). Its ultimate objective is to allow a rapid transition from the simulation of the control strategy to its implementation at full-scale plants. Thus, this paper presents the application of the Add Control simulation platform for the design and implementation of new control strategies at the WWTP of Mekolalde.
Randomized Controlled Trials of Add-On Antidepressants in Schizophrenia
Joffe, Grigori; Stenberg, Jan-Henry
2015-01-01
Background: Despite adequate treatment with antipsychotics, a substantial number of patients with schizophrenia demonstrate only suboptimal clinical outcomes. To overcome this challenge, various psychopharmacological combination strategies have been used, including antidepressants added to antipsychotics. Methods: To analyze the efficacy of add-on antidepressants for the treatment of negative, positive, cognitive, depressive, and antipsychotic-induced extrapyramidal symptoms in schizophrenia, published randomized controlled trials assessing the efficacy of adjunctive antidepressants in schizophrenia were reviewed using the following parameters: baseline clinical characteristics and number of patients, their ongoing antipsychotic treatment, dosage of the add-on antidepressants, duration of the trial, efficacy measures, and outcomes. Results: There were 36 randomized controlled trials reported in 41 journal publications (n=1582). The antidepressants used were the selective serotonin reuptake inhibitors, duloxetine, imipramine, mianserin, mirtazapine, nefazodone, reboxetine, trazodone, and bupropion. Mirtazapine and mianserin showed somewhat consistent efficacy for negative symptoms and both seemed to enhance neurocognition. Trazodone and nefazodone appeared to improve the antipsychotic-induced extrapyramidal symptoms. Imipramine and duloxetine tended to improve depressive symptoms. No clear evidence supporting selective serotonin reuptake inhibitors' efficacy on any clinical domain of schizophrenia was found. Add-on antidepressants did not worsen psychosis. Conclusions: Despite a substantial number of randomized controlled trials, the overall efficacy of add-on antidepressants in schizophrenia remains uncertain, mainly due to methodological issues. Some differences in efficacy on several schizophrenia domains seem, however, to exist and to vary by antidepressant subgroup, plausibly due to differences in the mechanisms of action. Antidepressants may not worsen
Family nurse practitioners: "value add" in outpatient chronic disease management.
Stephens, Lynn
2012-12-01
Nurse practitioners are capable leaders in primary care design as practices nationwide move to consider and adopt the patient-centered medical home. The chronic care model provides a structure to enhance the care of chronic illness. Nurse practitioners are instrumental in many areas of this model as both leaders and caregivers. Safety and quality are basic medical home goals; nurse practitioners enhance both. The addition of a nurse practitioner to a practice is an effective "value add" in every way.
Stereovision Imaging in Smart Mobile Phone Using Add on Prisms
NASA Astrophysics Data System (ADS)
Bar-Magen Numhauser, Jonathan; Zalevsky, Zeev
2014-03-01
In this work we present the use of a prism-based add-on component installed on top of a smartphone to achieve stereovision capabilities using the iPhone mobile operating system. Through these components and the combination of the appropriate application programming interface and mathematical algorithms, the obtained results permit the analysis of possible enhancements and new uses of such a system in a variety of areas, including medicine and communications.
Higher Curvature Effects in the ADD and RS Models
Rizzo, Thomas G.; /SLAC
2006-07-05
Over the last few years, several extra-dimensional models have been introduced in an attempt to deal with the hierarchy problem. These models can lead to rather unique and spectacular signatures at Terascale colliders such as the LHC and ILC. The ADD and RS models, though quite distinct, have many common features, including a constant-curvature bulk, localized Standard Model (SM) fields, and the assumption of the validity of the Einstein-Hilbert (EH) action as a description of gravitational interactions.
Add-on unidirectional elastic metamaterial plate cloak
Lee, Min Kyung; Kim, Yoon Young
2016-01-01
Metamaterial cloaks control the propagation of waves to make an object invisible or insensible. To manipulate elastic waves in space, a metamaterial cloak is typically embedded in a base system that includes or surrounds a target object. The embedding is undesirable because it structurally weakens or permanently alters the base system. In this study, we propose a new add-on metamaterial elastic cloak that can be placed over and mechanically coupled with a base structure without embedding. We designed an add-on type annular metamaterial plate cloak through conformal mapping, fabricated it and performed cloaking experiments in a thin plate with a hole. Experiments were performed in a thin plate by using the lowest symmetric Lamb wave centered at 100 kHz. As a means to check the cloaking performance of the add-on elastic plate cloak, possibly as a temporary stress reliever or a so-called “stress bandage”, the degree of stress concentration mitigation and the recovery from the perturbed wave field due to a hole were investigated. PMID:26860896
Optical add/drop filter for wavelength division multiplexed systems
Deri, Robert J.; Strand, Oliver T.; Garrett, Henry E.
2002-01-01
An optical add/drop filter for wavelength division multiplexed systems and construction methods are disclosed. The add/drop filter includes a first ferrule having a first pre-formed opening for receiving a first optical fiber; an interference filter oriented to pass a first set of wavelengths along the first optical fiber and reflect a second set of wavelengths; and, a second ferrule having a second pre-formed opening for receiving the second optical fiber, and the reflected second set of wavelengths. A method for constructing the optical add/drop filter consists of the steps of forming a first set of openings in a first ferrule; inserting a first set of optical fibers into the first set of openings; forming a first set of guide pin openings in the first ferrule; dividing the first ferrule into a first ferrule portion and a second ferrule portion; forming an interference filter on the first ferrule portion; inserting guide pins through the first set of guide pin openings in the first ferrule portion and second ferrule portion to passively align the first set of optical fibers; removing material such that light reflected from the interference filter from the first set of optical fibers is accessible; forming a second set of openings in a second ferrule; inserting a second set of optical fibers into the second set of openings; and positioning the second ferrule with respect to the first ferrule such that the second set of optical fibers receive the light reflected from the interference filter.
Add-on unidirectional elastic metamaterial plate cloak.
Lee, Min Kyung; Kim, Yoon Young
2016-02-10
Metamaterial cloaks control the propagation of waves to make an object invisible or insensible. To manipulate elastic waves in space, a metamaterial cloak is typically embedded in a base system that includes or surrounds a target object. The embedding is undesirable because it structurally weakens or permanently alters the base system. In this study, we propose a new add-on metamaterial elastic cloak that can be placed over and mechanically coupled with a base structure without embedding. We designed an add-on type annular metamaterial plate cloak through conformal mapping, fabricated it and performed cloaking experiments in a thin plate with a hole. Experiments were performed in a thin plate by using the lowest symmetric Lamb wave centered at 100 kHz. As a means to check the cloaking performance of the add-on elastic plate cloak, possibly as a temporary stress reliever or a so-called "stress bandage", the degree of stress concentration mitigation and the recovery from the perturbed wave field due to a hole were investigated.
40 CFR 75.34 - Units with add-on emission controls.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Units with add-on emission controls... add-on emission controls. (a) The owner or operator of an affected unit equipped with add-on SO2 and... which the add-on emission controls are documented to be operating properly, as described in the...
24 CFR 990.190 - Other formula expenses (add-ons).
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 4 2013-04-01 2013-04-01 false Other formula expenses (add-ons... formula expenses (add-ons). In addition to calculating operating subsidy based on the PEL and UEL, a PHA's eligible formula expenses shall be increased by add-ons. The allowed add-ons are: (a) Self-sufficiency....
40 CFR 75.34 - Units with add-on emission controls.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Units with add-on emission controls... add-on emission controls. (a) The owner or operator of an affected unit equipped with add-on SO2 and... which the add-on emission controls are documented to be operating properly, as described in the...
40 CFR 75.34 - Units with add-on emission controls.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Units with add-on emission controls... add-on emission controls. (a) The owner or operator of an affected unit equipped with add-on SO2 and... which the add-on emission controls are documented to be operating properly, as described in the...
24 CFR 990.190 - Other formula expenses (add-ons).
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 4 2012-04-01 2012-04-01 false Other formula expenses (add-ons... formula expenses (add-ons). In addition to calculating operating subsidy based on the PEL and UEL, a PHA's eligible formula expenses shall be increased by add-ons. The allowed add-ons are: (a) Self-sufficiency....
40 CFR 75.34 - Units with add-on emission controls.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Units with add-on emission controls... add-on emission controls. (a) The owner or operator of an affected unit equipped with add-on SO2 and... which the add-on emission controls are documented to be operating properly, as described in the...
24 CFR 990.190 - Other formula expenses (add-ons).
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 4 2014-04-01 2014-04-01 false Other formula expenses (add-ons... formula expenses (add-ons). In addition to calculating operating subsidy based on the PEL and UEL, a PHA's eligible formula expenses shall be increased by add-ons. The allowed add-ons are: (a) Self-sufficiency....
24 CFR 990.190 - Other formula expenses (add-ons).
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Other formula expenses (add-ons... formula expenses (add-ons). In addition to calculating operating subsidy based on the PEL and UEL, a PHA's eligible formula expenses shall be increased by add-ons. The allowed add-ons are: (a) Self-sufficiency....
40 CFR 75.34 - Units with add-on emission controls.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Units with add-on emission controls... add-on emission controls. (a) The owner or operator of an affected unit equipped with add-on SO2 and... which the add-on emission controls are documented to be operating properly, as described in the...
Linear Covariance Analysis and Epoch State Estimators
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Carpenter, J. Russell
2012-01-01
This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
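The machinery this abstract builds on, propagating an error covariance through the dynamics and a measurement update, can be sketched in a few lines. The sketch below is the generic sequential (Kalman) form, not the paper's epoch-state formulation; the two-state model, process and measurement noise levels, and time step are invented for illustration:

```python
import numpy as np

def propagate_covariance(P, F, Q):
    """Time update: P <- F P F^T + Q (process noise Q enters here)."""
    return F @ P @ F.T + Q

def measurement_update(P, H, R):
    """Measurement update of the covariance: P <- (I - K H) P."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return (np.eye(P.shape[0]) - K @ H) @ P

# Toy two-state (position, velocity) system observed through position only.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
Q = 0.01 * np.eye(2)                    # assumed process noise
H = np.array([[1.0, 0.0]])              # position-only measurement
R = np.array([[0.5]])                   # assumed measurement noise

P = np.eye(2)                           # initial covariance
for _ in range(10):
    P = propagate_covariance(P, F, Q)
    P = measurement_update(P, H, R)
```

After a few cycles the position variance settles well below its initial value, illustrating how measurement information bounds the covariance that the process noise would otherwise grow.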
Cosmology of a covariant Galilean field.
De Felice, Antonio; Tsujikawa, Shinji
2010-09-10
We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which allows a possibility to observationally distinguish the Galileon gravity from the cold dark matter model with a cosmological constant.
Minimal covariant observables identifying all pure states
NASA Astrophysics Data System (ADS)
Carmeli, Claudio; Heinosaari, Teiko; Toigo, Alessandro
2013-09-01
It has been recently shown by Heinosaari, Mazzarella and Wolf (2013) [1] that an observable that identifies all pure states of a d-dimensional quantum system has minimally 4d-4 outcomes, or slightly fewer (the exact number depending on d). However, no simple construction of this type of minimal observable is known. We investigate covariant observables that identify all pure states and have a minimal number of outcomes. It is shown that the existence of such observables depends on the dimension of the Hilbert space.
Linear Covariance Analysis and Epoch State Estimators
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Carpenter, J. Russell
2014-01-01
This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
Covariant harmonic oscillators and coupled harmonic oscillators
NASA Technical Reports Server (NTRS)
Han, Daesoo; Kim, Young S.; Noz, Marilyn E.
1995-01-01
It is shown that the system of two coupled harmonic oscillators shares the basic symmetry properties with the covariant harmonic oscillator formalism which provides a concise description of the basic features of relativistic hadronic features observed in high-energy laboratories. It is shown also that the coupled oscillator system has the SL(4,r) symmetry in classical mechanics, while the present formulation of quantum mechanics can accommodate only the Sp(4,r) portion of the SL(4,r) symmetry. The possible role of the SL(4,r) symmetry in quantum mechanics is discussed.
Covariant change of signature in classical relativity
NASA Astrophysics Data System (ADS)
Ellis, G. F. R.
1992-10-01
This paper gives a covariant formalism enabling investigation of the possibility of change of signature in classical General Relativity, when the geometry is that of a Robertson-Walker universe. It is shown that such changes are compatible with the Einstein field equations, both in the case of a barotropic fluid and of a scalar field. A criterion is given for when such a change of signature should take place in the scalar field case. Some examples show the kind of resulting exact solutions of the field equations.
Spatiotemporal noise covariance estimation from limited empirical magnetoencephalographic data.
Jun, Sung C; Plis, Sergey M; Ranken, Doug M; Schmidt, David M
2006-11-07
The performance of parametric magnetoencephalography (MEG) and electroencephalography (EEG) source localization approaches can be degraded by the use of poor background noise covariance estimates. In general, estimation of the noise covariance for spatiotemporal analysis is difficult mainly due to the limited noise information available. Furthermore, its estimation requires a large amount of storage and a one-time but very large (and sometimes intractable) calculation or its inverse. To overcome these difficulties, noise covariance models consisting of one pair or a sum of multi-pairs of Kronecker products of spatial covariance and temporal covariance have been proposed. However, these approaches cannot be applied when the noise information is very limited, i.e., the amount of noise information is less than the degrees of freedom of the noise covariance models. A common example of this is when only averaged noise data are available for a limited prestimulus region (typically at most a few hundred milliseconds duration). For such cases, a diagonal spatiotemporal noise covariance model consisting of sensor variances with no spatial or temporal correlation has been the common choice for spatiotemporal analysis. In this work, we propose a different noise covariance model which consists of diagonal spatial noise covariance and Toeplitz temporal noise covariance. It can easily be estimated from limited noise information, and no time-consuming optimization and data-processing are required. Thus, it can be used as an alternative choice when one-pair or multi-pair noise covariance models cannot be estimated due to lack of noise information. To verify its capability we used Bayesian inference dipole analysis and a number of simulated and empirical datasets. We compared this covariance model with other existing covariance models such as conventional diagonal covariance, one-pair and multi-pair noise covariance models, when noise information is sufficient to estimate them. We
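The structure of the proposed model, a diagonal spatial covariance combined with a Toeplitz temporal covariance, is easy to assemble explicitly. In the sketch below the per-sensor variances and the AR(1)-style temporal profile are invented placeholders (the paper estimates these quantities from the limited prestimulus noise data):

```python
import numpy as np

# Hedged sketch of the proposed spatiotemporal noise model: a diagonal
# spatial covariance (per-sensor variances, no spatial correlation)
# combined with a Toeplitz temporal covariance via a Kronecker product.
n_sensors, n_times = 4, 6
sensor_var = np.array([1.0, 0.5, 2.0, 1.5])   # assumed per-sensor variances
rho = 0.6                                     # assumed temporal correlation

C_spatial = np.diag(sensor_var)               # diagonal spatial covariance
lags = np.abs(np.subtract.outer(np.arange(n_times), np.arange(n_times)))
C_temporal = rho ** lags                      # Toeplitz: depends only on the lag

# Full spatiotemporal covariance, of size (n_sensors*n_times) squared.
C = np.kron(C_spatial, C_temporal)
```

Because the Toeplitz factor depends only on the lag between time points, the whole model needs only one variance per sensor plus a handful of temporal correlations, which is why it remains estimable from very limited noise information.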
Pair-Wise Trajectory Management-Oceanic (PTM-O) Concept of Operations, Version 3.9
NASA Technical Reports Server (NTRS)
Jones, Kenneth M.
2014-01-01
This document describes the Pair-wise Trajectory Management-Oceanic (PTM-O) Concept of Operations (ConOps). Pair-wise Trajectory Management (PTM) is a concept that includes airborne and ground-based capabilities designed to enable, and to benefit from, an airborne pair-wise distance-monitoring capability. PTM includes the capabilities needed for the controller to issue a PTM clearance that resolves a conflict for a specific pair of aircraft. PTM avionics include the capabilities needed for the flight crew to manage their trajectory relative to specific designated aircraft. PTM-Oceanic (PTM-O) is a region-specific application of the PTM concept. PTM is sponsored by the National Aeronautics and Space Administration (NASA) Concept and Technology Development Project (part of NASA's Airspace Systems Program). The goal of PTM is to use enhanced and distributed communications and surveillance, along with airborne tools, to permit reduced separation standards for given aircraft pairs, thereby increasing the capacity and efficiency of aircraft operations at a given altitude or in a given volume of airspace.
NASA Astrophysics Data System (ADS)
Burgon, R. P., Jr.; Sargent, S.; Zha, T.; Jia, X.
2015-12-01
Closed-path eddy covariance systems measure the flux of greenhouse gases such as carbon dioxide, water vapor, and nitrous oxide. The challenge is to make accurate field measurements at sites around the world, even in extreme environmental conditions. Sites with dirty air present a particular challenge. Gas concentration measurements may be degraded as dust or debris is deposited on the optical windows in the sample cell. The traditional solution has been to add an in-line filter upstream of the sample cell to keep the windows clean. However, these filters clog over time and must be changed periodically. An in-line filter also acts as a mixing volume and in some cases limits the frequency response of the analyzer. A novel eddy-covariance system that includes a vortex air cleaner at the inlet has been developed and field tested. This new system eliminates the need for a traditional in-line filter to keep the sample cell windows clean. The new system reduces system maintenance and downtime. Eddy covariance systems with the vortex intake were tested at several sites ranging from sites with extremely dirty urban air to sites with relatively clean mountain air, and in agricultural areas. These flux systems were monitoring either CO2 and H2O, or N2O. Results show that the closed-path eddy covariance systems with a vortex intake perform very well and require lower maintenance compared to similar systems with in-line filters.
Can the default-mode network be described with one spatial-covariance network?
Habeck, Christian; Steffener, Jason; Rakitin, Brian; Stern, Yaakov
2012-08-15
The default-mode network (DMN) has become a well-accepted concept in cognitive and clinical neuroscience over the last decade, and perusal of the recent literature attests to a stimulating research field of cognitive and diagnostic applications (for example, (Andrews-Hanna et al., 2010; Koch et al., 2010; Sheline et al., 2009a; Sheline et al., 2009b; Uddin et al., 2008; Uddin et al., 2009; Weng et al., 2009; Yan et al., 2009)). However, a formal definition of what exactly constitutes a functional brain network is difficult to come by. In recent contributions, some researchers argue that the DMN is best understood as multiple interacting subsystems (Buckner et al., 2008) and have explored modular components of the DMN that have different functional specialization and could to some extent be identified separately (Fox et al., 2005; Uddin et al., 2009). Such a conception of modularity seems to imply an opposite construct of a 'unified whole', but it is difficult to locate proponents of the idea of a DMN who supply constraints that can be brought to bear on data in rigorous tests. Our aim in this paper is to present a principled way of deriving a single covariance pattern as the neural substrate of the DMN, test to what extent its behavior tracks the coupling strength between critical seed regions, and investigate to what extent our stricter concept of a network is consistent with the already established findings about the DMN in the literature. We show that our approach leads to a functional covariance pattern whose pattern scores are a good proxy for the integrity of the connections between medioprefrontal, posterior cingulate, and parietal seed regions. Our derived DMN network thus has potential for diagnostic applications that are simpler to perform than computation of pairwise correlational strengths or seed maps.
Noisy covariance matrices and portfolio optimization
NASA Astrophysics Data System (ADS)
Pafka, S.; Kondor, I.
2002-05-01
According to recent findings [Bouchaud et al.; Stanley et al.], empirical covariance matrices deduced from financial return series contain such a high amount of noise that, apart from a few large eigenvalues and the corresponding eigenvectors, their structure can essentially be regarded as random. Bouchaud et al., e.g., report that about 94% of the spectrum of these matrices can be fitted by that of a random matrix drawn from an appropriately chosen ensemble. In view of the fundamental role of covariance matrices in the theory of portfolio optimization as well as in industry-wide risk management practices, we analyze the possible implications of this effect. Simulation experiments with matrices having a structure such as described in these works lead us to the conclusion that in the context of the classical portfolio problem (minimizing the portfolio variance under linear constraints) noise has relatively little effect. To leading order the solutions are determined by the stable, large eigenvalues, and the displacement of the solution (measured in variance) due to noise is rather small: depending on the size of the portfolio and on the length of the time series, it is of the order of 5 to 15%. The picture is completely different, however, if we attempt to minimize the variance under non-linear constraints, like those that arise e.g. in the problem of margin accounts or in international capital adequacy regulation. In these problems the presence of noise leads to a serious instability and a high degree of degeneracy of the solutions.
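The classical portfolio problem referred to above can be reproduced in a few lines. This sketch, with an invented one-factor return model and dimensions far smaller than industry portfolios, compares the global minimum-variance weights computed from the true covariance against those computed from a noisy sample estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_obs = 20, 100

# Invented "true" covariance: one strong market factor plus idiosyncratic noise.
beta = rng.normal(1.0, 0.2, n_assets)
C_true = np.outer(beta, beta) * 0.04 + np.diag(rng.uniform(0.01, 0.05, n_assets))

returns = rng.multivariate_normal(np.zeros(n_assets), C_true, size=n_obs)
C_sample = np.cov(returns, rowvar=False)        # noisy empirical estimate

def min_variance_weights(C):
    """Minimize w' C w subject to sum(w) = 1 (the only linear constraint)."""
    inv_one = np.linalg.solve(C, np.ones(len(C)))
    return inv_one / inv_one.sum()

w_true = min_variance_weights(C_true)
w_noisy = min_variance_weights(C_sample)

# Variance each weight vector actually achieves under the true covariance.
var_opt = w_true @ C_true @ w_true
var_noisy = w_noisy @ C_true @ w_noisy
```

Evaluated under the true covariance, the weights from the sample estimate are necessarily suboptimal; the gap grows as the number of assets approaches the length of the time series, which is the regime the abstract discusses.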
Covariant perturbations in a multifluid cosmological medium
NASA Astrophysics Data System (ADS)
Dunsby, Peter K. S.; Bruni, Marco; Ellis, George F. R.
1992-08-01
In a series of recent papers, a new covariant formalism was introduced to treat inhomogeneities in any spacetime. The variables introduced in these papers are gauge-invariant with respect to a Robertson-Walker background spacetime because they vanish identically in such models, and they have a transparent physical meaning. Exact evolution equations were found for these variables, and the linearized forms of these equations were obtained, showing that they give the standard results for a barotropic perfect fluid. In this paper we extend this formalism to the general case of multicomponent fluid sources with interactions between them. We show, using the tilted formalism of King and Ellis (1973), that choosing either the energy frame or the particle frame gives rise to a set of physically well-defined covariant and gauge-invariant variables which describe density and velocity perturbations, both for the total fluid and its constituent components. We then derive a complete set of equations for these variables and show, through harmonic analysis, that they are equivalent to those of Bardeen (1980) and of Kodama and Sasaki (1984). We discuss a number of interesting applications, including the case where the universe is filled with a mixture of baryons and radiation, coupled through Thomson scattering, and we derive solutions for the density and velocity perturbations in the large-scale limit. We also correct a number of errors in the previous literature.
Modeling Covariance Matrices via Partial Autocorrelations
Daniels, M.J.; Pourahmadi, M.
2009-01-01
Summary We study the role of partial autocorrelations in the reparameterization and parsimonious modeling of a covariance matrix. The work is motivated by and tries to mimic the phenomenal success of the partial autocorrelation function (PACF) in model formulation, in removing the positive-definiteness constraint on the autocorrelation function of a stationary time series, and in reparameterizing the stationarity-invertibility domain of ARMA models. It turns out that once an order is fixed among the variables of a general random vector, the above properties continue to hold; this follows from establishing a one-to-one correspondence between a correlation matrix and its associated matrix of partial autocorrelations. Connections between the latter and the parameters of the modified Cholesky decomposition of a covariance matrix are discussed. Graphical tools similar to partial correlograms for model formulation, and various priors based on the partial autocorrelations, are proposed. We develop frequentist/Bayesian procedures for modeling correlation matrices, illustrate them using a real dataset, and explore their properties via simulations. PMID:20161018
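The reparameterization at the heart of this work can be illustrated directly: given an ordering of the variables, the partial autocorrelation between variables j and k is their partial correlation given the variables strictly between them. The sketch below computes it by inverting the relevant submatrix of the correlation matrix; this is a naive construction for illustration, not the paper's algorithm:

```python
import numpy as np

def partial_autocorrelations(R):
    """Map a correlation matrix R (variables in a fixed order) to its matrix
    of partial autocorrelations: entry (j, k), j < k, is the partial
    correlation of variables j and k given those strictly between them."""
    p = R.shape[0]
    P = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            idx = [j, k] + list(range(j + 1, k))   # pair first, then conditioning set
            Prec = np.linalg.inv(R[np.ix_(idx, idx)])
            P[j, k] = P[k, j] = -Prec[0, 1] / np.sqrt(Prec[0, 0] * Prec[1, 1])
    return P

# Demo: an AR(1) correlation matrix, rho^|i-j|, has vanishing partials
# beyond lag 1 (the Markov property).
R = 0.7 ** np.abs(np.subtract.outer(np.arange(5), np.arange(5)))
P = partial_autocorrelations(R)
```

Each partial autocorrelation is free to vary in (-1, 1) independently of the others, which is what removes the awkward positive-definiteness constraint the abstract mentions.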
A Class of Population Covariance Matrices in the Bootstrap Approach to Covariance Structure Analysis
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Hayashi, Kentaro; Yanagihara, Hirokazu
2007-01-01
Model evaluation in covariance structure analysis is critical before the results can be trusted. Due to finite sample sizes and unknown distributions of real data, existing conclusions regarding a particular statistic may not be applicable in practice. The bootstrap procedure automatically takes care of the unknown distribution and, for a given…
Gaskins, J T; Daniels, M J
2016-01-02
The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When data consists of multiple groups, it is often assumed the covariance matrices are either equal across groups or are completely distinct. We seek methodology to allow borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior which proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.
Evaluating covariance in prognostic and system health management applications
NASA Astrophysics Data System (ADS)
Menon, Sandeep; Jin, Xiaohang; Chow, Tommy W. S.; Pecht, Michael
2015-06-01
Developing a diagnostic and prognostic health management system involves analyzing system parameters monitored during the lifetime of the system. This data analysis may involve multiple steps, including data reduction, feature extraction, clustering and classification, building control charts, identification of anomalies, and modeling and predicting parameter degradation in order to evaluate the state of health for the system under investigation. Evaluating the covariance between the monitored system parameters allows for better understanding of the trends in monitored system data, and therefore it is an integral part of the data analysis. Typically, a sample covariance matrix is used to evaluate the covariance between monitored system parameters. The monitored system data are often sensor data, which are inherently noisy. The noise in sensor data can lead to inaccurate evaluation of the covariance in data using a sample covariance matrix. This paper examines approaches to evaluate covariance, including the minimum volume ellipsoid, the minimum covariance determinant, and the nearest neighbor variance estimation. When the performance of these approaches was evaluated on datasets with increasing percentage of Gaussian noise, it was observed that the nearest neighbor variance estimation exhibited the most stable estimates of covariance. To improve the accuracy of covariance estimates using nearest neighbor-based methodology, a modified approach for the nearest neighbor variance estimation technique is developed in this paper. Case studies based on data analysis steps involved in prognostic solutions are developed in order to compare the performance of the covariance estimation methodologies discussed in the paper.
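The estimators compared in the paper (MVE, MCD, nearest-neighbor variance estimation) need more machinery than fits here, but the failure mode they address is simple to reproduce. The sketch below injects a few noisy readings into clean two-dimensional sensor data and contrasts the plain sample covariance with a crude distance-trimmed estimate; the trimming is only a stand-in for the robust estimators, not any method from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Clean sensor data with a known covariance, plus a handful of noisy outliers.
clean = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=200)
outliers = rng.multivariate_normal([0, 0], [[50.0, 0.0], [0.0, 50.0]], size=10)
data = np.vstack([clean, outliers])

C_sample = np.cov(data, rowvar=False)           # inflated by the outliers

# Crude robustification: drop the 5% of points farthest from the median.
dist = np.linalg.norm(data - np.median(data, axis=0), axis=1)
keep = dist <= np.quantile(dist, 0.95)
C_trimmed = np.cov(data[keep], rowvar=False)    # close to the clean covariance
```

The sample covariance is inflated by the handful of outliers, while the trimmed estimate stays near the clean-data covariance; delivering that behavior in a principled, tuning-free way is what the robust estimators in the paper are designed to do.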
Impact of the 235U Covariance Data in Benchmark Calculations
Leal, Luiz C; Mueller, Don; Arbanas, Goran; Wiarda, Dorothea; Derrien, Herve
2008-01-01
The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems.
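The final step described above, turning a processed multigroup covariance into an uncertainty on the multiplication factor, is conventionally done with the first-order "sandwich" rule, var(k)/k^2 = s^T C s, where s holds relative sensitivities and C is the relative covariance matrix. The three-group numbers below are invented for illustration; the actual work uses SAMMY, PUFF-IV, and TSUNAMI:

```python
import numpy as np

# Relative sensitivities of k to each group cross section: (dk/k)/(ds/s).
sensitivity = np.array([0.10, 0.30, 0.25])

# Invented relative covariance of the group cross sections
# (diagonal entries are relative variances, off-diagonals are covariances).
rel_cov = np.array([[4.0e-4, 1.0e-4, 0.0],
                    [1.0e-4, 2.5e-4, 5.0e-5],
                    [0.0,    5.0e-5, 9.0e-4]])

var_k = sensitivity @ rel_cov @ sensitivity   # relative variance of k
std_k_pct = 100 * np.sqrt(var_k)              # percent uncertainty in k
```

With these numbers the propagated uncertainty in k comes out just under 1%; note that the off-diagonal covariance terms contribute directly, which is why a full covariance matrix rather than bare variances is needed.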
The Impact of Covariate Measurement Error on Risk Prediction
Khudyakov, Polyna; Gorfine, Malka; Zucker, David; Spiegelman, Donna
2015-01-01
In the development of risk prediction models, predictors are often measured with error. In this paper, we investigate the impact of covariate measurement error on risk prediction. We compare the prediction performance using a costly variable measured without error, along with error-free covariates, to that of a model based on an inexpensive surrogate along with the error-free covariates. We consider continuous error-prone covariates with homoscedastic and heteroscedastic errors, and also a discrete misclassified covariate. Prediction performance is evaluated by the area under the receiver operating characteristic curve (AUC), the Brier score (BS), and the ratio of the observed to the expected number of events (calibration). In an extensive numerical study, we show that (i) the prediction model with the error-prone covariate is very well calibrated, even when it is mis-specified; (ii) using the error-prone covariate instead of the true covariate can reduce the AUC and increase the BS dramatically; (iii) adding an auxiliary variable, which is correlated with the error-prone covariate but conditionally independent of the outcome given all covariates in the true model, can improve the AUC and BS substantially. We conclude that reducing measurement error in covariates will improve the ensuing risk prediction, unless the association between the error-free and error-prone covariates is very high. Finally, we demonstrate how a validation study can be used to assess the effect of mismeasured covariates on risk prediction. These concepts are illustrated in a breast cancer risk prediction model developed in the Nurses’ Health Study. PMID:25865315
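Point (ii) above, that substituting an error-prone covariate can reduce the AUC dramatically, is easy to demonstrate by simulation. The logistic outcome model, error variance, and sample size below are invented; the AUC is computed directly as the Mann-Whitney probability rather than with any particular library:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
x_true = rng.normal(size=n)
x_err = x_true + rng.normal(scale=1.5, size=n)   # classical measurement error

p = 1 / (1 + np.exp(-2 * x_true))                # outcome depends on the truth
y = rng.binomial(1, p)

def auc(score, y):
    """AUC = P(score_case > score_control), with ties counted half."""
    cases, controls = score[y == 1], score[y == 0]
    greater = (cases[:, None] > controls[None, :]).mean()
    ties = (cases[:, None] == controls[None, :]).mean()
    return greater + 0.5 * ties

auc_true = auc(x_true, y)   # discrimination using the true covariate
auc_err = auc(x_err, y)     # degraded with the error-prone surrogate
```

The drop from auc_true to auc_err mirrors the paper's finding: the model with the surrogate can remain well calibrated on average while losing a substantial amount of discrimination.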
Pairwise selection assembly for sequence-independent construction of long-length DNA.
Blake, William J; Chapman, Brad A; Zindal, Anuradha; Lee, Michael E; Lippow, Shaun M; Baynes, Brian M
2010-05-01
The engineering of biological components has been facilitated by de novo synthesis of gene-length DNA. Biological engineering at the level of pathways and genomes, however, requires a scalable and cost-effective assembly of DNA molecules that are longer than approximately 10 kb, and this remains a challenge. Here we present the development of pairwise selection assembly (PSA), a process that involves hierarchical construction of long-length DNA through the use of a standard set of components and operations. In PSA, activation tags at the termini of assembly sub-fragments are reused throughout the assembly process to activate vector-encoded selectable markers. Marker activation enables stringent selection for a correctly assembled product in vivo, often obviating the need for clonal isolation. Importantly, construction via PSA is sequence-independent, and does not require primary sequence modification (e.g. the addition or removal of restriction sites). The utility of PSA is demonstrated in the construction of a completely synthetic 91-kb chromosome arm from Saccharomyces cerevisiae.
Mirabello, Claudio; Adelfio, Alessandro; Pollastri, Gianluca
2014-01-01
Predicting the fold of a protein from its amino acid sequence is one of the grand problems in computational biology. While there has been progress towards a solution, especially when a protein can be modelled based on one or more known structures (templates), in the absence of templates, even the best predictions are generally much less reliable. In this paper, we present an approach for predicting the three-dimensional structure of a protein from the sequence alone, when templates of known structure are not available. This approach relies on a simple reconstruction procedure guided by a novel knowledge-based evaluation function implemented as a class of artificial neural networks that we have designed: Neural Network Pairwise Interaction Fields (NNPIF). This evaluation function takes into account the contextual information for each residue and is trained to identify native-like conformations from non-native-like ones by using large sets of decoys as a training set. The training set is generated and then iteratively expanded during successive folding simulations. As NNPIF are fast at evaluating conformations, thousands of models can be processed in a short amount of time, and clustering techniques can be adopted for model selection. Although the results we present here are very preliminary, we consider them to be promising, with predictions being generated at state-of-the-art levels in some of the cases. PMID:24970210
Yu, Elaine; Monaco, James P; Tomaszewski, John; Shih, Natalie; Feldman, Michael; Madabhushi, Anant
2011-01-01
In this paper we present a system for detecting regions of carcinoma of the prostate (CaP) in H&E-stained radical prostatectomy specimens using the color fractal dimension. Color textural information is known to be a valuable characteristic for distinguishing CaP from benign tissue. In addition to color information, we know that cancer tends to form contiguous regions. Our system leverages the color staining information of histology as well as spatial dependencies. The color and textural information is first captured using the color fractal dimension. To incorporate spatial dependencies, we combine the probability map constructed via the color fractal dimension with a novel Markov prior called the Probabilistic Pairwise Markov Model (PPMM). To demonstrate the capability of this CaP detection system, we applied the algorithm to 27 radical prostatectomy specimens from 10 patients. A per-pixel evaluation was conducted with ground truth provided by an expert pathologist. Using the color fractal feature alone yielded an area under the receiver operating characteristic (ROC) curve (AUC) of 0.790; in conjunction with the Markov prior, the resultant color fractal dimension + Markov random field (MRF) classifier yielded an AUC of 0.831.
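A standard binary box-counting sketch conveys the fractal-dimension idea (illustrative Python only; the color fractal dimension used in the paper roughly extends box counting to the joint spatial-color space of the stained image):

```python
import math

def box_count(img, s):
    """Number of s x s boxes containing at least one foreground pixel."""
    rows, cols = len(img), len(img[0])
    boxes = 0
    for r0 in range(0, rows, s):
        for c0 in range(0, cols, s):
            if any(img[r][c] for r in range(r0, min(r0 + s, rows))
                             for c in range(c0, min(c0 + s, cols))):
                boxes += 1
    return boxes

def box_dimension(img, sizes=(1, 2, 4)):
    """Least-squares slope of log N(s) versus log(1/s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(img, s)) for s in sizes]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A fully foreground 4x4 patch fills the plane, so its dimension is ~2.
print(round(box_dimension([[1, 1, 1, 1]] * 4), 3))
```

Texture-rich cancerous regions tend to yield higher fractal dimensions than smoother benign tissue, which is what makes the feature discriminative.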
Pairwise alignment of interaction networks by fast identification of maximal conserved patterns.
Tian, Wenhong; Samatova, Nagiza F
2009-01-01
A number of tools for the alignment of protein-protein interaction (PPI) networks have laid the foundation for PPI network analysis. They typically find conserved interaction patterns by various local or global search algorithms, and then validate the results using genome annotation. Improving the speed, scalability and accuracy of network alignment is still the target of ongoing research. In view of this, we introduce a connected-components based algorithm, called HopeMap, for pairwise network alignment with a focus on fast identification of maximal conserved patterns across species. Observing that the number of true homologs across species is relatively small compared to the total number of proteins in all species, we start with highly homologous groups across species, find maximal conserved interaction patterns globally with a generic scoring system, and validate the results across multiple known functional annotations. The results are evaluated in terms of statistical enrichment of gene ontology (GO) terms and KEGG ortholog (KO) groups within conserved interaction patterns. HopeMap is fast, with linear computational cost, accurate in terms of KO group and GO term specificity and sensitivity, and extensible to multiple network alignment.
Benefits of Using Pairwise Trajectory Management in the Central East Pacific
NASA Technical Reports Server (NTRS)
Chartrand, Ryan; Ballard, Kathryn
2016-01-01
Pairwise Trajectory Management (PTM) is a concept that utilizes airborne and ground-based capabilities to enable airborne spacing operations in oceanic regions. The goal of PTM is to use enhanced surveillance, along with airborne tools, to manage the spacing between aircraft. Because of the enhanced airborne surveillance provided by Automatic Dependent Surveillance-Broadcast (ADS-B) information and the reduced communication required, the PTM minimum spacing distance will be less than the distances currently required by an air traffic controller. The reduced minimum distance will increase the capacity of aircraft operations at a given altitude or volume of airspace, thereby increasing time on desired trajectory and overall flight efficiency. PTM is designed to allow a flight crew to resolve a specific traffic conflict (or conflicts), identified by the air traffic controller, while maintaining the flight crew's desired altitude. The air traffic controller issues a PTM clearance to a flight crew authorized to conduct PTM operations in order to resolve a conflict for the pair (or pairs) of aircraft (i.e., the PTM aircraft and a designated target aircraft). This clearance requires the flight crew of the PTM aircraft to use their ADS-B-enabled onboard equipment to manage their spacing relative to the designated target aircraft, ensuring spacing distances no closer than the PTM minimum distance. When the air traffic controller determines that PTM is no longer required, the controller issues a clearance to cancel the PTM operation.
The distribution of pairwise genetic distances: a tool for investigating disease transmission.
Worby, Colin J; Chang, Hsiao-Han; Hanage, William P; Lipsitch, Marc
2014-12-01
Whole-genome sequencing of pathogens has recently been used to investigate disease outbreaks and is likely to play a growing role in real-time epidemiological studies. Methods to analyze high-resolution genomic data in this context are still lacking, and inferring transmission dynamics from such data typically requires many assumptions. While recent studies have proposed methods to infer who infected whom based on genetic distance between isolates from different individuals, the link between epidemiological relationship and genetic distance is still not well understood. In this study, we investigated the distribution of pairwise genetic distances between samples taken from infected hosts during an outbreak. We proposed an analytically tractable approximation to this distribution, which provides a framework to evaluate the likelihood of particular transmission routes. Our method accounts for the transmission of a genetically diverse inoculum, a possibility overlooked in most analyses. We demonstrated that our approximation can provide a robust estimation of the posterior probability of transmission routes in an outbreak and may be used to rule out transmission events at a particular probability threshold. We applied our method to data collected during an outbreak of methicillin-resistant Staphylococcus aureus, ruling out several potential transmission links. Our study sheds light on the accumulation of mutations in a pathogen during an epidemic and provides tools to investigate transmission dynamics, avoiding the intensive computation necessary in many existing methods.
An efficient algorithm for pairwise local alignment of protein interaction networks
Chen, Wenbin; Schmidt, Matthew; Tian, Wenhong; ...
2015-04-01
Recently, researchers seeking to understand, modify, and create beneficial traits in organisms have looked for evolutionarily conserved patterns of protein interactions. Their conservation likely means that the proteins of these conserved functional modules are important to the trait's expression. In this paper, we formulate the problem of identifying these conserved patterns as a graph optimization problem, and develop a fast heuristic algorithm for this problem. We compare the performance of our network alignment algorithm to that of the MaWISh algorithm [Koyuturk M, Kim Y, Topkara U, Subramaniam S, Szpankowski W, Grama A, Pairwise alignment of protein interaction networks, J Comput Biol 13(2): 182-199, 2006.], which bases its search algorithm on a related decision problem formulation. We find that our algorithm discovers conserved modules with a larger number of proteins in an order of magnitude less time. In conclusion, the protein sets found by our algorithm correspond to known conserved functional modules at comparable precision and recall rates as those produced by the MaWISh algorithm.
A new graph-based method for pairwise global network alignment
Klau, Gunnar W
2009-01-01
Background In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NP-hard problems. Most previous algorithmic work on network alignments is heuristic in nature. Results We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and – unlike those computed by pure heuristics – come with a quality guarantee. Conclusion Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library. PMID:19208162
A fast and powerful W-test for pairwise epistasis testing.
Wang, Maggie Haitian; Sun, Rui; Guo, Junfeng; Weng, Haoyi; Lee, Jack; Hu, Inchi; Sham, Pak Chung; Zee, Benny Chung-Ying
2016-07-08
Epistasis plays an essential role in the development of complex diseases. Interaction methods face the common challenge of balancing statistical power, model complexity, computational efficiency, and the validity of identified biomarkers. We introduce a novel W-test to identify pairwise epistasis effects, which measures the distributional difference between cases and controls through a combined log odds ratio. The test is model-free, fast, and follows a Chi-squared distribution with data-adaptive degrees of freedom. No permutation is needed to obtain the P-values. Simulation studies demonstrated that the W-test is more powerful for low-frequency variants than alternative methods, namely the Chi-squared test, logistic regression, and multifactor dimensionality reduction (MDR). In two independent real bipolar disorder genome-wide association study (GWAS) datasets, the W-test identified significant interaction pairs that could be replicated, including SLIT3-CENPN, SLIT3-TMEM132D, CNTNAP2-NDST4, and CNTCAP2-RTN4R. The genes in these pairs play central roles in neurotransmission and synapse formation. A majority of the identified loci are undiscoverable by main effects and are low-frequency variants. The proposed method offers a powerful alternative tool for mapping the genetic puzzle underlying complex disorders.
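A minimal sketch of a combined log-odds-ratio statistic in this spirit (illustrative only; the published W-test additionally rescales the sum using data-estimated constants to obtain its data-adaptive Chi-squared degrees of freedom, and applies a continuity correction for empty cells):

```python
import math

def w_statistic(case_counts, control_counts):
    """Sum, over genotype cells, of squared variance-standardized
    log odds ratios comparing case vs control proportions.
    Assumes every cell count is nonzero."""
    n_case, n_ctrl = sum(case_counts), sum(control_counts)
    w = 0.0
    for a, b in zip(case_counts, control_counts):
        p, q = a / n_case, b / n_ctrl
        log_or = math.log(p / (1 - p)) - math.log(q / (1 - q))
        var = 1 / a + 1 / (n_case - a) + 1 / b + 1 / (n_ctrl - b)
        w += log_or ** 2 / var
    return w

# Hypothetical genotype-cell counts: identical distributions give 0.
same = w_statistic([10, 20, 30], [10, 20, 30])
diff = w_statistic([10, 20, 30], [30, 20, 10])
```

Because the statistic has a known null distribution, P-values come from a Chi-squared lookup rather than permutation, which is what makes genome-wide pairwise scans tractable.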
NASA Technical Reports Server (NTRS)
Carreno, Victor A.
2002-01-01
The KB3D algorithm is a pairwise conflict detection and resolution (CD&R) algorithm. It detects conflicts and generates trajectory vectoring for an aircraft that is predicted to violate airspace separation minima within a given look-ahead time. It has been proven, using mechanized theorem proving techniques, that for a pair of aircraft KB3D produces at least one vectoring solution and that all solutions produced are correct. Although solutions produced by the algorithm are mathematically correct, they might not be physically executable by an aircraft or might not solve multiple aircraft conflicts. This paper describes a simple solution selection method which assesses all solutions generated by KB3D and determines the solution to be executed. The solution selection method and KB3D are evaluated using a simulation in which N aircraft fly in a free-flight environment and each aircraft uses KB3D to maintain separation. Specifically, the solution selection method filters out KB3D solutions which are procedurally undesirable or physically not executable, and uses predetermined criteria to select among the remainder.
Evaluation of advanced multiplex short tandem repeat systems in pairwise kinship analysis.
Tamura, Tomonori; Osawa, Motoki; Ochiai, Eriko; Suzuki, Takanori; Nakamura, Takashi
2015-09-01
The AmpFLSTR Identifiler Kit, comprising 15 autosomal short tandem repeat (STR) loci, is commonly employed in forensic practice for calculating match probabilities and for parentage testing. This conventional system provides insufficient power for kinship analyses such as sibship testing because of the small number of examined loci. This study evaluated the power of the PowerPlex Fusion System, GlobalFiler Kit, and PowerPlex 21 System, each comprising more than 20 autosomal STR loci, to estimate pairwise blood relatedness (i.e., parent-child, full siblings, second-degree relatives, and first cousins). The genotypes of all 24 STR loci in 10,000 putative pedigrees were constructed by simulation. The likelihood ratio for each locus was calculated from the joint probabilities for relatives and non-relatives, and the combined likelihood ratio was calculated according to the product rule. The addition of STR loci improved the separation between relatives and non-relatives; however, even these systems were of limited use for inferring first-cousin relationships. In conclusion, these advanced systems will be useful in forensic personal identification, especially in the evaluation of full siblings and second-degree relatives. The additional loci, however, raise two issues: more frequent mutational events and several pairs of linked loci on the same chromosome.
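The product-rule combination of per-locus likelihood ratios is straightforward (illustrative Python with hypothetical per-locus values; linked loci, one of the issues raised in the abstract, would violate the independence assumption behind the product):

```python
import math

def combined_lr(per_locus_lrs):
    """Product rule: combined likelihood ratio across independent, unlinked loci."""
    return math.prod(per_locus_lrs)

# Hypothetical per-locus LRs for a claimed sibling pair:
lrs = [2.1, 0.8, 3.5, 1.2, 4.0]
print(round(combined_lr(lrs), 3))  # 28.224
```

Adding loci multiplies in more LR terms, which widens the gap between the combined-LR distributions for true relatives and for unrelated pairs.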
Hydrostatic pressure effect on hydrophobic hydration and pairwise hydrophobic interaction of methane
NASA Astrophysics Data System (ADS)
Graziano, Giuseppe
2014-03-01
At room temperature, the Ben-Naim standard hydration Gibbs energy of methane is a positive quantity that increases markedly with hydrostatic pressure [M. S. Moghaddam and H. S. Chan, J. Chem. Phys. 126, 114507 (2007)]. This finding is rationalized by showing that the magnitude of the reversible work to create a suitable cavity in water increases with pressure due to both the increase in the volume packing density of water and the contribution of the pressure-volume work. According to the present approach, at room temperature, the Gibbs energy of the contact-minimum configuration of two methane molecules is a negative quantity that increases in magnitude with hydrostatic pressure. This result is not in line with the results of several computer simulation studies [T. Ghosh, A. E. Garcia, and S. Garde, J. Am. Chem. Soc. 123, 10997-11003 (2001)], and emerges because pairwise association causes a decrease in solvent-excluded volume that produces a gain of configurational/translational entropy of water molecules, whose magnitude increases with the volume packing density of the liquid phase.
Net2Align: An Algorithm For Pairwise Global Alignment of Biological Networks
Wadhwa, Gulshan; Upadhyaya, K. C.
2016-01-01
The amount of data on molecular interactions is growing at an enormous pace, whereas the development of methods for analysing these data still lags behind. This is a particular problem in the comparative analysis of biological networks, where one wishes to explore the similarity between two networks. Given that functionality primarily emerges at the network level, robust comparison methods are needed. In this paper, we describe Net2Align, an algorithm for pairwise global alignment that takes node-to-node as well as edge-to-edge correspondences into consideration. The uniqueness of our algorithm lies in the fact that it can also detect the type of interaction, which is essential for directed graphs. Existing algorithms are only able to identify common nodes, not common edges. Another notable feature of the algorithm is that it removes duplicate entries when variable datasets are aligned; this is achieved by creating a local database that excludes duplicate links. In a comprehensive computational study on a gene regulatory network, we show that our algorithm surpasses its counterparts. Net2Align has been implemented in Java 7 and the source code is available as supplementary files. PMID:28356678
Mavridis, Dimitris; White, Ian R; Higgins, Julian P T; Cipriani, Andrea; Salanti, Georgia
2015-02-28
Missing outcome data are commonly encountered in randomized controlled trials and hence may need to be addressed in a meta-analysis of multiple trials. A common and simple approach to deal with missing data is to restrict analysis to individuals for whom the outcome was obtained (complete case analysis). However, estimated treatment effects from complete case analyses are potentially biased if informative missing data are ignored. We develop methods for estimating meta-analytic summary treatment effects for continuous outcomes in the presence of missing data for some of the individuals within the trials. We build on a method previously developed for binary outcomes, which quantifies the degree of departure from a missing at random assumption via the informative missingness odds ratio. Our new model quantifies the degree of departure from missing at random using either an informative missingness difference of means or an informative missingness ratio of means, both of which relate the mean value of the missing outcome data to that of the observed data. We propose estimating the treatment effects, adjusted for informative missingness, and their standard errors by a Taylor series approximation and by a Monte Carlo method. We apply the methodology to examples of both pairwise and network meta-analysis with multi-arm trials.
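A point-estimate sketch of the informative missingness difference of means (IMDoM) adjustment (illustrative Python under simplifying assumptions; the paper additionally propagates the uncertainty in the missingness parameters via a Taylor series approximation or Monte Carlo):

```python
def adjusted_mean(mean_obs, p_miss, imdom):
    """Arm-level mean adjusted for informative missingness: missing
    participants are assumed to differ from observed ones by `imdom`
    on average (imdom = 0 recovers the complete-case mean)."""
    return (1 - p_miss) * mean_obs + p_miss * (mean_obs + imdom)

def adjusted_effect(treat, control):
    """Treatment effect from (mean_obs, p_miss, imdom) tuples per arm."""
    return adjusted_mean(*treat) - adjusted_mean(*control)

# Hypothetical trial arm summaries: dropouts assumed to score 5 points lower.
effect = adjusted_effect((10.0, 0.2, -5.0), (6.0, 0.1, -5.0))
```

Varying `imdom` away from zero quantifies how far the estimated effect moves as the missing-at-random assumption is relaxed.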
Add/Compare/Select Circuit For Rapid Decoding
NASA Technical Reports Server (NTRS)
Budinger, James M.; Becker, Neal D.; Johnson, Peter N.
1993-01-01
Prototype decoding system operates at 200 Mb/s. ACS (add/compare/select) gate array is highly integrated emitter-coupled-logic circuit implementing arithmetic operations essential to Viterbi decoding of convolutionally encoded data signals. Principal advantage of circuit is speed. Operates as single unit performing eight additions and finds minimum of eight sums, or operates as two independent units, each performing four additions and finding minimum of four sums. Flexibility enables application to variety of different codes. Includes built-in self-testing circuitry, enabling unit to be tested at full speed with help of only simple test fixture.
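The core operation can be sketched in software (a generic analogue of the hardware unit; the actual gate array performs its eight additions and eight-way minimum in parallel at 200 Mb/s):

```python
def acs(path_metrics, branch_metrics):
    """Add-compare-select: add each predecessor's path metric to its branch
    metric, compare the sums, and select the minimum together with the index
    of the surviving predecessor (the decision for the Viterbi traceback)."""
    sums = [pm + bm for pm, bm in zip(path_metrics, branch_metrics)]
    best = min(range(len(sums)), key=sums.__getitem__)
    return sums[best], best

# Eight candidate predecessors, as in the circuit's full-width mode:
metric, survivor = acs([3, 5, 2, 7, 6, 4, 9, 8], [1, 0, 4, 0, 2, 3, 0, 1])
```

Running two independent four-way instances instead of one eight-way instance mirrors the circuit's split mode, which is what gives it flexibility across different codes.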
Astronomical imaging by filtered weighted-shift-and-add technique
NASA Technical Reports Server (NTRS)
Ribak, Erez
1986-01-01
The weighted-shift-and-add speckle imaging technique is analyzed using simple assumptions. The end product is shown to be a convolution of the object with a typical point-spread function (psf) that is similar in shape to the telescope psf and depends marginally on the speckle psf. A filter can be applied to each data frame before locating the maxima, either to identify the speckle locations (matched filter) or to estimate the instantaneous atmospheric psf (Wiener filter). Preliminary results show the power of the technique when applied to photon-limited data and to extended objects.
Image restoration by the shift-and-add algorithm.
Bagnuolo, W G
1985-05-01
A new method for image restoration based on the shift-and-add (SAA) algorithm is presented, the main advantages of which appear to be speed and simplicity. The SAA pattern produced by an object is given by the object correlated by a nonlinear replica of itself whose intensity distribution is strongly weighted toward the brighter pixels. A method of successive substitutions analogous to Fienup's algorithm can then be used to decorrelate the SAA pattern and recover the object. The method is applied to the case of the extended chromosphere of Betelgeuse.
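The underlying shift-and-add operation is simple to sketch (illustrative Python on toy frames; real use applies it to stacks of short-exposure speckle images, and the weighted variant additionally scales each frame before co-adding):

```python
def brightest(frame):
    """(row, col) of the maximum-intensity pixel."""
    rows, cols = len(frame), len(frame[0])
    return max(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: frame[rc[0]][rc[1]])

def shift_and_add(frames, ref=(1, 1)):
    """Average the frames after shifting each so its brightest pixel
    lands at the common reference position `ref`."""
    rows, cols = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * cols for _ in range(rows)]
    for f in frames:
        br, bc = brightest(f)
        dr, dc = ref[0] - br, ref[1] - bc
        for r in range(rows):
            for c in range(cols):
                sr, sc = r - dr, c - dc
                if 0 <= sr < rows and 0 <= sc < cols:
                    acc[r][c] += f[sr][sc]
    return [[v / len(frames) for v in row] for row in acc]

# Two toy frames whose single bright "speckle" sits in different places:
saa = shift_and_add([[[0, 0, 0], [0, 0, 0], [0, 0, 9]],
                     [[9, 0, 0], [0, 0, 0], [0, 0, 0]]])
```

Aligning on the brightest pixel is what weights the resulting SAA pattern toward the brighter pixels of the object, the nonlinearity the decorrelation step must then undo.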
Identifying sources of uncertainty using covariance analysis
NASA Astrophysics Data System (ADS)
Hyslop, N. P.; White, W. H.
2010-12-01
Atmospheric aerosol monitoring often includes performing multiple analyses on a collected sample. Some common analyses resolve suites of elements or compounds (e.g., spectrometry, chromatography). Concentrations are determined through multi-step processes involving sample collection, physical or chemical analysis, and data reduction. Uncertainties in the individual steps propagate into uncertainty in the calculated concentration. The assumption in most treatments of measurement uncertainty is that errors in the various species concentrations measured in a sample are random and therefore independent of each other. This assumption is often not valid in speciated aerosol data because some errors can be common to multiple species. For example, an error in the sample volume will introduce a common error into all species concentrations determined in the sample, and these errors will correlate with each other. Measurement programs often use paired (collocated) measurements to characterize the random uncertainty in their measurements. Suites of paired measurements provide an opportunity to go beyond the characterization of measurement uncertainties in individual species to examine correlations amongst the measurement uncertainties in multiple species. This additional information can be exploited to distinguish sources of uncertainty that affect all species from those that only affect certain subsets or individual species. Data from the Interagency Monitoring of Protected Visual Environments (IMPROVE) program are used to illustrate these ideas. Nine analytes commonly detected in the IMPROVE network were selected for this analysis. The errors in these analytes can be reasonably modeled as multiplicative, and the natural log of the ratio of concentrations measured on the two samplers provides an approximation of the error. Figure 1 shows the covariation of these log ratios among the different analytes for one site. Covariance is strongest amongst the dust element (Fe, Ca, and
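The multiplicative error model and its covariance can be sketched as follows (illustrative Python with hypothetical collocated concentrations, not IMPROVE data):

```python
import math

def log_ratio_errors(pairs):
    """Approximate multiplicative errors as ln(c1/c2) for collocated
    concentration pairs (c1, c2) of one analyte."""
    return [math.log(a / b) for a, b in pairs]

def covariance(x, y):
    """Sample covariance of two equal-length error series; a clearly
    positive value suggests a shared error source (e.g. sample volume)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (len(x) - 1)

# Hypothetical collocated Fe and Ca concentrations (sampler 1, sampler 2):
fe = log_ratio_errors([(1.10, 1.00), (0.95, 1.00), (1.20, 1.00)])
ca = log_ratio_errors([(2.20, 2.00), (1.90, 2.00), (2.40, 2.00)])
cov_fe_ca = covariance(fe, ca)
```

Errors that covary across analytes point to a common step (sampling, flow measurement), whereas uncorrelated errors implicate analyte-specific analysis steps.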
Noisy covariance matrices and portfolio optimization II
NASA Astrophysics Data System (ADS)
Pafka, Szilárd; Kondor, Imre
2003-03-01
Recent studies inspired by results from random matrix theory (Galluccio et al.: Physica A 259 (1998) 449; Laloux et al.: Phys. Rev. Lett. 83 (1999) 1467; Risk 12 (3) (1999) 69; Plerou et al.: Phys. Rev. Lett. 83 (1999) 1471) found that covariance matrices determined from empirical financial time series appear to contain such a high amount of noise that their structure can essentially be regarded as random. This seems, however, to be in contradiction with the fundamental role played by covariance matrices in finance, which constitute the pillars of modern investment theory and have also gained industry-wide applications in risk management. Our paper is an attempt to resolve this embarrassing paradox. The key observation is that the effect of noise strongly depends on the ratio r= n/ T, where n is the size of the portfolio and T the length of the available time series. On the basis of numerical experiments and analytic results for some toy portfolio models we show that for relatively large values of r (e.g. 0.6) noise does, indeed, have the pronounced effect suggested by Galluccio et al. (1998), Laloux et al. (1999) and Plerou et al. (1999) and illustrated later by Laloux et al. (Int. J. Theor. Appl. Finance 3 (2000) 391), Plerou et al. (Phys. Rev. E, e-print cond-mat/0108023) and Rosenow et al. (Europhys. Lett., e-print cond-mat/0111537) in a portfolio optimization context, while for smaller r (around 0.2 or below), the error due to noise drops to acceptable levels. Since the length of available time series is for obvious reasons limited in any practical application, any bound imposed on the noise-induced error translates into a bound on the size of the portfolio. In a related set of experiments we find that the effect of noise depends also on whether the problem arises in asset allocation or in a risk measurement context: if covariance matrices are used simply for measuring the risk of portfolios with a fixed composition rather than as inputs to optimization, the
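The basic objects can be sketched as follows (illustrative Python; the studies cited estimate such matrices from return series of n assets over T time steps and study the noise as a function of r = n/T):

```python
def sample_covariance(returns):
    """Sample covariance of n series from T observations (rows of `returns`).
    Estimation noise in the n*(n+1)/2 entries grows with r = n/T."""
    T, n = len(returns), len(returns[0])
    means = [sum(row[j] for row in returns) / T for j in range(n)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in returns)
             / (T - 1) for j in range(n)] for i in range(n)]

def mean_abs_offdiag(cov):
    """Mean absolute off-diagonal entry: if the true covariance is diagonal,
    this is pure estimation noise."""
    n = len(cov)
    return sum(abs(cov[i][j]) for i in range(n)
               for j in range(n) if i != j) / (n * n - n)
```

With n fixed, shrinking T (raising r) inflates this noise roughly like 1/sqrt(T), which is why r around 0.6 is noise-dominated in the optimization setting while r around 0.2 is tolerable.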
Geometric derivation of the microscopic stress: A covariant central force decomposition
NASA Astrophysics Data System (ADS)
Torres-Sánchez, Alejandro; Vanegas, Juan M.; Arroyo, Marino
2016-08-01
We revisit the derivation of the microscopic stress, linking the statistical mechanics of particle systems and continuum mechanics. The starting point in our geometric derivation is the Doyle-Ericksen formula, which states that the Cauchy stress tensor is the derivative of the free-energy with respect to the ambient metric tensor and which follows from a covariance argument. Thus, our approach to define the microscopic stress tensor does not rely on the statement of balance of linear momentum as in the classical Irving-Kirkwood-Noll approach. Nevertheless, the resulting stress tensor satisfies balance of linear and angular momentum. Furthermore, our approach removes the ambiguity in the definition of the microscopic stress in the presence of multibody interactions by naturally suggesting a canonical and physically motivated force decomposition into pairwise terms, a key ingredient in this theory. As a result, our approach provides objective expressions to compute a microscopic stress for a system in equilibrium and for force-fields expanded into multibody interactions of arbitrarily high order. We illustrate the proposed methodology with molecular dynamics simulations of a fibrous protein using a force-field involving up to 5-body interactions.
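For orientation (a common textbook statement, not quoted from the paper), the Doyle-Ericksen formula relates the Cauchy stress to the metric derivative of the free energy ψ per unit mass, with mass density ρ:

```latex
\sigma^{ab} \;=\; 2\rho \, \frac{\partial \psi}{\partial g_{ab}}
```

That is, varying the ambient metric in the free energy generates the Cauchy stress, which is the covariance argument the geometric derivation builds on.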
AFCI-2.0 Library of Neutron Cross Section Covariances
Herman, M.; Oblozinsky, P.; Mattoon, C.; Pigni, M.; Hoblit, S.; Mughabghab, S. F.; Sonzogni, A.; Talou, P.; Chadwick, M. B.; Hale, G. M.; Kahler, A. C.; Kawano, T.; Little, R. C.; Young, P. G.
2011-06-26
A neutron cross section covariance library has been under development through a BNL-LANL collaborative effort over the last three years. The primary purpose of the library is to provide covariances for the Advanced Fuel Cycle Initiative (AFCI) data adjustment project, which is focusing on the needs of fast advanced burner reactors. The covariances refer to central values given in the 2006 release of the U.S. neutron evaluated library ENDF/B-VII. The preliminary version (AFCI-2.0beta) was completed in October 2010 and made available to users for comments. In the final 2.0 release, covariances for a few materials were updated; in particular, new LANL evaluations for {sup 238,240}Pu and {sup 241}Am were adopted. BNL was responsible for covariances for structural materials and fission products and for management of the library and coordination of the work, while LANL was in charge of covariances for light nuclei and for actinides.
Spatially covariant theories of a transverse, traceless graviton: Formalism
NASA Astrophysics Data System (ADS)
Khoury, Justin; Miller, Godfrey E. J.; Tolley, Andrew J.
2012-04-01
General relativity is a generally covariant, locally Lorentz covariant theory of two transverse, traceless graviton degrees of freedom. According to a theorem of Hojman, Kuchař, and Teitelboim, modifications of general relativity must either introduce new degrees of freedom or violate the principle of local Lorentz covariance. In this paper, we explore modifications of general relativity that retain the same graviton degrees of freedom, and therefore explicitly break Lorentz covariance. Motivated by cosmology, the modifications of interest maintain explicit spatial covariance. In spatially covariant theories of the graviton, the physical Hamiltonian density obeys an analogue of the renormalization group equation which encodes invariance under flow through the space of conformally equivalent spatial metrics. This paper is dedicated to setting up the formalism of our approach and applying it to a realistic class of theories. Forthcoming work will apply the formalism more generally.
Evaluation of the Covariance Matrix of Estimated Resonance Parameters
NASA Astrophysics Data System (ADS)
Becker, B.; Capote, R.; Kopecky, S.; Massimi, C.; Schillebeeckx, P.; Sirakov, I.; Volev, K.
2014-04-01
In the resonance region, nuclear resonance parameters are mostly obtained by a least-squares adjustment of a model to experimental data. Derived parameters can be mutually correlated through the adjustment procedure as well as through common experimental or model uncertainties. In this contribution we investigate four different methods to propagate the additional covariance caused by experimental or model uncertainties into the evaluation of the covariance matrix of the estimated parameters: (1) including the additional covariance in the experimental covariance matrix based on calculated or theoretical estimates of the data; (2) including the uncertainty-affected parameter in the adjustment procedure; (3) evaluating the full covariance matrix by Monte Carlo sampling of the common parameter; and (4) retroactively including the additional covariance by using the marginalization procedure of Habert et al.
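Method (3) above can be sketched in a few lines. The linear model, the data, and the 5% common-normalization uncertainty below are hypothetical stand-ins for an actual resonance-shape analysis; the point is only the mechanism of sampling the common parameter, refitting, and taking the covariance of the fitted parameters.

```python
import random

def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def cov(u, v):
    """Sample covariance of two equal-length lists."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((x - mu) * (y - mv) for x, y in zip(u, v)) / (len(u) - 1)

def mc_parameter_covariance(xs, ys, norm_sigma, n_samples=2000, seed=1):
    """Sample one common normalization per replicate, refit, and return
    the 2x2 covariance matrix of the fitted parameters (a, b)."""
    rng = random.Random(seed)
    fits = []
    for _ in range(n_samples):
        norm = rng.gauss(1.0, norm_sigma)     # the sampled common parameter
        fits.append(fit_line(xs, [norm * y for y in ys]))
    a_vals = [f[0] for f in fits]
    b_vals = [f[1] for f in fits]
    return [[cov(a_vals, a_vals), cov(a_vals, b_vals)],
            [cov(b_vals, a_vals), cov(b_vals, b_vals)]]

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 9.0]
C = mc_parameter_covariance(xs, ys, norm_sigma=0.05)
```

Because a single normalization scales the whole data set, both fitted parameters move together and the off-diagonal covariance is positive, which is exactly the extra correlation the four methods are designed to propagate.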
Power series evaluation of transition and covariance matrices.
NASA Technical Reports Server (NTRS)
Bierman, G. J.
1972-01-01
Power series solutions to the matrix covariance differential equation and the transition differential equation are reexamined. Truncation error bounds are derived which are computationally attractive and which extend previous results. Polynomial approximations are obtained by exploiting the functional equations satisfied by the transition and covariance matrices. The series-functional equation propagation technique represents a fast and accurate alternative to the numerical integration of the time-invariant transition and covariance equations.
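As an illustration of the kind of computation the abstract describes (not the paper's specific bounded-truncation scheme), the transition matrix of a time-invariant system, Phi(t) = exp(A t), can be evaluated by the truncated power series sum_k (A t)^k / k!:

```python
def mat_mul(X, Y):
    """Product of two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transition_matrix(A, t, terms=20):
    """Truncated power series for Phi(t) = exp(A t)."""
    n = len(A)
    At = [[a * t for a in row] for row in A]
    # k = 0 term: the identity matrix
    term = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    phi = [row[:] for row in term]
    for k in range(1, terms):
        term = mat_mul(term, At)                        # (A t)^k ...
        term = [[x / k for x in row] for row in term]   # ... / k!
        phi = [[p + s for p, s in zip(rp, rs)] for rp, rs in zip(phi, term)]
    return phi

# Harmonic oscillator: exp(A t) is a rotation matrix, a handy check.
A = [[0.0, 1.0], [-1.0, 0.0]]
phi = transition_matrix(A, 0.5)
```

For this A, the exact result is [[cos t, sin t], [-sin t, cos t]], so the truncation error of the 20-term series at t = 0.5 is far below machine noise; bounding that error in general is what the paper's results address.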
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. Traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over traditional approaches using simulations and OMICS data analysis.
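The abstract does not spell out the hierarchical model, but the regularization problem it targets, stabilizing a large-scale covariance estimate that would otherwise overfit when variables outnumber observations, can be illustrated with a simple shrinkage-toward-diagonal estimator. This is a deliberately simplified stand-in, not the authors' Bayesian method:

```python
def sample_covariance(data):
    """Sample covariance; rows of `data` are observations, columns variables."""
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data)
             / (n - 1)
             for j in range(p)] for i in range(p)]

def shrink_covariance(data, alpha=0.5):
    """Blend the sample covariance with its own diagonal:
    (1 - alpha) * S + alpha * diag(S). Variances are preserved;
    off-diagonal (noisier) entries are pulled toward zero."""
    S = sample_covariance(data)
    p = len(S)
    return [[(1 - alpha) * S[i][j] + (alpha * S[i][j] if i == j else 0.0)
             for j in range(p)] for i in range(p)]

data = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
S = sample_covariance(data)
T = shrink_covariance(data, alpha=0.5)
```

The shrinkage weight plays a role loosely analogous to the prior strength in a hierarchical model: it trades a little bias for a large reduction in estimator variance.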
EMPIRE ULTIMATE EXPANSION: RESONANCES AND COVARIANCES.
HERMAN, M.; MUGHABGHAB, S.F.; OBLOZINSKY, P.; ROCHMAN, D.; PIGNI, M.T.; KAWANO, T.; CAPOTE, R.; ZERKIN, V.; TRKOV, A.; SIN, M.; CARSON, B.V.; WIENKE, H.; CHO, Y.-S.
2007-04-22
The EMPIRE code system is being extended to cover the resolved and unresolved resonance region, employing the proven methodology used for the production of new evaluations in the recent Atlas of Neutron Resonances. Other directions of EMPIRE expansion are uncertainties and the correlations among them. These include covariances for cross sections as well as for model parameters. In this presentation we concentrate on the KALMAN method, which has been applied in EMPIRE to the fast neutron range as well as to the resonance region. We also summarize the role of the EMPIRE code in the ENDF/B-VII.0 development. Finally, large-scale calculations and their impact on nuclear model parameters are discussed, along with the exciting perspectives offered by parallel supercomputing.
Covariance of lucky images: performance analysis
NASA Astrophysics Data System (ADS)
Cagigal, Manuel P.; Valle, Pedro J.; Cagigas, Miguel A.; Villó-Pérez, Isidro; Colodro-Conde, Carlos; Ginski, C.; Mugrauer, M.; Seeliger, M.
2017-01-01
The covariance of ground-based lucky images is a robust and easy-to-use algorithm that allows us to detect faint companions surrounding a host star. In this paper, we analyse the influence of the number of processed frames, the frames' quality, the atmospheric conditions and the detection noise on the companion detectability. This analysis has been carried out using both experimental and computer-simulated imaging data. Although the technique allows the detection of faint companions, camera detection noise and the use of a limited number of frames limit the minimum detectable companion intensity to around 1000 times fainter than the host star when placed at an angular distance corresponding to the first few Airy rings. The reachable contrast could be even larger when detecting companions with the assistance of an adaptive optics system.
Conformal Killing tensors and covariant Hamiltonian dynamics
Cariglia, M.; Gibbons, G. W.; Holten, J.-W. van; Horvathy, P. A.; Zhang, P.-M.
2014-12-15
A covariant algorithm for deriving the conserved quantities for natural Hamiltonian systems is combined with the non-relativistic framework of Eisenhart, and of Duval, in which the classical trajectories arise as geodesics in a higher dimensional space-time, realized by Brinkmann manifolds. Conserved quantities which are polynomial in the momenta can be built using time-dependent conformal Killing tensors with flux. The latter are associated with terms proportional to the Hamiltonian in the lower dimensional theory and with spectrum generating algebras for higher dimensional quantities of order 1 and 2 in the momenta. Illustrations of the general theory include the Runge-Lenz vector for planetary motion with a time-dependent gravitational constant G(t), motion in a time-dependent electromagnetic field of a certain form, quantum dots, the Hénon-Heiles and Holt systems, respectively, providing us with Killing tensors of rank that ranges from one to six.
Covariant generalization of cosmological perturbation theory
Enqvist, Kari; Hoegdahl, Janne; Nurmi, Sami; Vernizzi, Filippo
2007-01-15
We present an approach to cosmological perturbations based on a covariant perturbative expansion between two worldlines in the real inhomogeneous universe. As an application, at an arbitrary order we define an exact scalar quantity which describes the inhomogeneities in the number of e-folds on uniform density hypersurfaces and which is conserved on all scales for a barotropic ideal fluid. We derive a compact form for its conservation equation at all orders and assign it a simple physical interpretation. To make a comparison with the standard perturbation theory, we develop a method to construct gauge-invariant quantities in a coordinate system at arbitrary order, which we apply to derive the form of the nth order perturbation in the number of e-folds on uniform density hypersurfaces and its exact evolution equation. On large scales, this provides the gauge-invariant expression for the curvature perturbation on uniform density hypersurfaces and its evolution equation at any order.
Covariates of Craving in Actively Drinking Alcoholics
Chakravorty, Subhajit; Kuna, Samuel T.; Zaharakis, Nikola; O’Brien, Charles P.; Kampman, Kyle M.; Oslin, David
2010-01-01
The goal of this cross-sectional study was to assess the relationship of alcohol craving with biopsychosocial and addiction factors that are clinically pertinent to alcoholism treatment. Alcohol craving was assessed in 315 treatment-seeking, alcohol dependent subjects using the PACS questionnaire. Standard validated questionnaires were used to evaluate a variety of biological, addiction, psychological, psychiatric, and social factors. Individual covariates of craving included age, race, problematic consequences of drinking, heavy drinking, motivation for change, mood disturbance, sleep problems, and social supports. In a multivariate analysis (R2 = .34), alcohol craving was positively associated with mood disturbance, heavy drinking, readiness for change, and negatively associated with age. The results from this study suggest that alcohol craving is a complex phenomenon influenced by multiple factors. PMID:20716308
Does Chiropractic ‘Add Years to Life’?
Morgan, Lon
2004-01-01
The chiropractic cliché "Chiropractic Adds Life to Years and Years to Life" was examined for validity. It was assumed that chiropractors themselves would be the best informed about the health benefits of chiropractic care, and would therefore be most likely to receive some level of chiropractic care, and to do so on a long-term basis. If chiropractic care significantly improves general health, then chiropractors themselves should demonstrate longer life spans than the general population. Two separate data sources were used to examine chiropractor mortality rates. One source used obituary notices from past issues of Dynamic Chiropractic from 1990 to mid-2003. The second source used biographies from Who Was Who in Chiropractic – A Necrology, covering the ten-year period 1969–1979. The two sources yielded mean ages at death for chiropractors of 73.4 and 74.2 years, respectively. The mean age at death of chiropractors is below the national average of 76.9 years, and below that of their medical doctor counterparts at 81.5 years. This review of mortality data found no evidence to support the claim that chiropractic care "Adds Years to Life." PMID:17549121
Performance of internal covariance estimators for cosmic shear correlation functions
Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.
2015-12-31
Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.
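The delete-one jackknife discussed above can be sketched for a generic vector-valued statistic. In the paper the statistic would be the cosmic shear two-point function in several angular bins and the "samples" spatial sub-regions of the survey; the scalar mean below is a toy stand-in:

```python
def jackknife_covariance(samples, statistic):
    """Delete-one jackknife covariance estimate.

    samples   -- list of n (sub-)samples
    statistic -- function mapping a list of samples to a list of p numbers
    Returns the p x p jackknife covariance matrix of the statistic.
    """
    n = len(samples)
    # Leave-one-out estimates of the statistic
    loo = [statistic(samples[:k] + samples[k + 1:]) for k in range(n)]
    p = len(loo[0])
    mean = [sum(est[j] for est in loo) / n for j in range(p)]
    factor = (n - 1) / n   # jackknife normalization
    return [[factor * sum((est[i] - mean[i]) * (est[j] - mean[j])
                          for est in loo)
             for j in range(p)] for i in range(p)]

def mean_stat(xs):
    """Toy statistic: the sample mean, as a length-1 vector."""
    return [sum(xs) / len(xs)]

cov = jackknife_covariance([1.0, 2.0, 3.0, 4.0, 5.0], mean_stat)
```

For the sample mean the jackknife reproduces the textbook variance of the mean, s²/n; the bias-versus-variance trade-offs the paper quantifies arise when the statistic and the correlations between sub-samples are less benign than in this toy case.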
Polymorphisms in the GNB3 and ADD1 genes and blood pressure in a Chinese population.
Chen, Shufeng; Wang, Hongwei; Lu, Xiangfeng; Liu, De-Pei; Chen, Jing; Jaquish, Cashell E; Rao, Dabeeru C; Hixson, James E; Kelly, Tanika N; Hou, Liping; Wang, Laiyuan; Huang, Jianfeng; Chen, Chung-Shiuan; Rice, Treva K; Whelton, Paul K; He, Jiang; Gu, Dongfeng
2010-08-01
A large proportion of the phenotypic variation in blood pressure (BP) appears to be inherited as a polygenic trait. This study examined the association between 12 single nucleotide polymorphisms (SNPs) in the guanine nucleotide binding protein beta polypeptide 3 (GNB3) and adducin 1 alpha (ADD1) genes and systolic (SBP), diastolic (DBP), and mean arterial (MAP) BP. A total of 3,142 individuals from 636 families were recruited from rural north China, and 2,746 met the eligibility criteria for analysis. BP measurements were obtained using a random-zero sphygmomanometer. Genetic variants were determined using SNPlex assays on an automated DNA Sequencer. A mixed linear model was used to estimate the association between each SNP and BP level. After Bonferroni correction, marker rs4963516 of the GNB3 gene remained significantly associated with DBP (corrected P values = 0.006, 0.007 and 0.002 for co-dominant, additive, and recessive models, respectively) and MAP (corrected P values = 0.02, 0.049, and 0.005, respectively). Compared to carriers of the major A allele, CC homozygotes had higher mean DBP (75.81 +/- 0.62 vs. 73.46 +/- 0.25 mmHg, P = 0.0002) and MAP (91.87 +/- 0.68 vs. 89.42 +/- 0.28 mmHg, P = 0.0004) after adjusting for covariates of age, gender, BMI, study site, and room temperature during BP measurement. In summary, these data support a role for the GNB3 gene in BP regulation in the Chinese population. Future studies aimed at replicating these novel findings are warranted.
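The Bonferroni correction applied above is simple enough to state in a few lines: each raw p-value is multiplied by the number of tests performed (here, tests across 12 SNPs and several genetic models) and capped at 1. The p-values below are illustrative, not taken from the study:

```python
def bonferroni(p_values):
    """Bonferroni-corrected p-values: p * m, capped at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

raw = [0.0005, 0.004, 0.02, 0.3]
corrected = bonferroni(raw)
```

The correction controls the family-wise error rate at the cost of power, which is why only the strongest association (rs4963516 with DBP and MAP) survives it in the study.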
Zhao, Jiangsan; Bodner, Gernot; Rewald, Boris
2016-01-01
Phenotyping local crop cultivars is becoming more and more important, as they are an important genetic source for breeding - especially in regard to inherent root system architectures. Machine learning algorithms are promising tools to assist in the analysis of complex data sets; novel approaches are needed to apply them to root phenotyping data of mature plants. A greenhouse experiment was conducted in large, sand-filled columns to differentiate 16 European Pisum sativum cultivars based on 36 manually derived root traits. By combining random forest and support vector machine models, machine learning algorithms were successfully used for unbiased identification of the most distinguishing root traits and subsequent pairwise cultivar differentiation. Up to 86% of pea cultivar pairs could be distinguished based on the top five important root traits (Timp5) - Timp5 differed widely between cultivar pairs. Selecting top important root traits (Timp) provided a significantly improved classification compared to using all available traits or randomly selected trait sets. The most frequent Timp of mature pea cultivars was the total surface area of lateral roots originating from tap root segments at 0-5 cm depth. The high classification rate implies that culturing did not lead to a major loss of variability in root system architecture in the studied pea cultivars. Our results illustrate the potential of machine learning approaches for unbiased (root) trait selection and cultivar classification based on rather small, complex phenotypic data sets derived from pot experiments. Powerful statistical approaches are essential to make use of the increasing amount of (root) phenotyping information, integrating the complex trait sets describing crop cultivars.
NASA Astrophysics Data System (ADS)
Lan, Hongzhi; Khismatullin, Damir B.
2014-07-01
Leukocytes and other circulating cells deform and move relative to the channel flow in the lateral and translational directions. Their migratory behavior is important in immune response, hemostasis, cancer progression, delivery of nutrients, and microfluidic technologies such as cell separation and enrichment and flow cytometry. Using our three-dimensional computational algorithm for multiphase viscoelastic flow, we have investigated the effect of pairwise interaction on the lateral and translational migration of circulating cells in a microchannel. The numerical simulation data show that when two cells of the same size at a small separation distance interact, repulsive interaction takes place until they reach the same lateral equilibrium position. During this process, they undergo swapping or passing, depending on their initial separation distance. The threshold value of this distance increases with cell deformation, indicating that cells experiencing larger deformation are more likely to swap. When a series of closely spaced cells of the same size is considered, they generally undergo damped oscillation in both the lateral and translational directions until they reach equilibrium positions where they become evenly distributed in the flow direction (a self-assembly phenomenon). A series of cells with a large lateral separation distance can collide repeatedly with each other, eventually crossing the centerline and entering the other side of the channel. For a series of cells with different deformability, more deformable cells, upon impact with less deformable cells, move to an equilibrium position closer to the centerline. The results of our study show that the bulk deformation of circulating cells plays a key role in their migration in a microchannel.
Benchmarking the performance of pairwise homogenization of surface temperatures in the United States
NASA Astrophysics Data System (ADS)
Menne, M. J.; Williams, C. N.; Thorne, P. W.
2013-09-01
Changes in the circumstances behind in situ temperature measurements often lead to shifts in individual station records that can cause over- or underestimates of the local and regional temperature trends. Since these shifts are comparable in magnitude to climate change signals, homogeneity "corrections" are necessary to make the records suitable for climate analysis. To quantify the effectiveness of surface temperature homogenization in the United States, a randomized perturbed ensemble of the pairwise homogenization algorithm was run against a suite of benchmark analogs to real monthly temperature data from the United States Cooperative Observer Program, which includes the subset of stations known as the United States Historical Climatology Network (USHCN). Results indicate that all randomized versions of the algorithm consistently produce homogenized data closer to the true climate signal in the presence of widespread systematic shifts in the data. When applied to the real-world observations, the randomized ensemble reinforces previous understanding that the two dominant sources of shifts in the U.S. temperature records are changes to time of observation (spurious cooling in minimum and maximum) and conversion to electronic resistance thermometers (spurious cooling in maximum and warming in minimum). Trend bounds defined by the ensemble output indicate that maximum temperature trends are positive for the past 30, 50 and 100 years, and that these maximums contain pervasive negative shifts that cause the unhomogenized (raw) trends to fall below the lowest of the ensemble of homogenized trends. Moreover, because the residual impact of undetected/uncorrected shifts in the homogenized analogs is one-tailed when the imposed shifts have a positive or negative sign preference, it is likely that maximum temperature trends have been underestimated in the real-world homogenized temperature data from the USHCN. Trends for minimum temperature are also positive
Benchmarking the performance of pairwise homogenization of surface temperatures in the United States
NASA Astrophysics Data System (ADS)
Williams, Claude N.; Menne, Matthew J.; Thorne, Peter W.
2012-03-01
Changes in the circumstances behind in situ temperature measurements often lead to biases in individual station records that, collectively, can also bias regional temperature trends. Since these biases are comparable in magnitude to climate change signals, homogeneity "corrections" are necessary to make the records suitable for climate analysis. To quantify the effectiveness of U.S. surface temperature homogenization, a randomized perturbed ensemble of the USHCN pairwise homogenization algorithm was run against a suite of benchmark analogs to real monthly temperature data. Results indicate that all randomized versions of the algorithm consistently produce homogenized data closer to the true climate signal in the presence of widespread systematic errors. When applied to the real-world observations, the randomized ensemble reinforces previous understanding that the two dominant sources of bias in the U.S. temperature records are caused by changes to time of observation (spurious cooling in minimum and maximum) and conversion to electronic resistance thermometers (spurious cooling in maximum and warming in minimum). Error bounds defined by the ensemble output indicate that maximum temperature trends are positive for the past 30, 50 and 100 years, and that these maximums contain pervasive negative biases that cause the unhomogenized (raw) trends to fall below the lower limits of uncertainty. Moreover, because residual bias in the homogenized analogs is one-tailed under biased errors, it is likely that maximum temperature trends have been underestimated in the USHCN. Trends for minimum temperature are also positive over the three periods, but the ensemble error bounds encompass trends from the unhomogenized data.
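The core of the pairwise idea in the two abstracts above can be caricatured in a few lines: a shift local to one station stands out in its difference series against a well-correlated neighbor, because the shared climate signal cancels. The sketch below locates a single changepoint as the split that maximizes the jump in the difference-series mean; the operational algorithm uses formal significance tests (e.g. SNHT-style statistics) and many neighbor pairs, and the data here are invented:

```python
def detect_shift(target, neighbor, min_seg=3):
    """Locate a single step change in target relative to neighbor.

    Returns (k, jump): the index where the second segment starts and the
    change in the mean of the difference series across the split."""
    diff = [t - n for t, n in zip(target, neighbor)]
    best_k, best_jump = None, 0.0
    for k in range(min_seg, len(diff) - min_seg + 1):
        left = sum(diff[:k]) / k
        right = sum(diff[k:]) / (len(diff) - k)
        if abs(right - left) > abs(best_jump):
            best_k, best_jump = k, right - left
    return best_k, best_jump

# Invented series: the target tracks the neighbor, then drops by 2.0
# (e.g. an instrument change) from the fifth value onward.
neighbor = [10.0, 10.5, 9.8, 10.2, 10.1, 9.9, 10.4, 10.0]
target = [x + 0.1 for x in neighbor[:4]] + [x - 1.9 for x in neighbor[4:]]
k, jump = detect_shift(target, neighbor)
```

Once the breakpoint and its size are estimated, homogenization adjusts the segment before (or after) the break, which is the "correction" step whose ensemble behavior the benchmarks evaluate.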
Galland, Nicolas; Kone, Soleymane; Le Questel, Jean-Yves
2012-10-01
A quantitative analysis of the interaction sites of the anti-Alzheimer drug galanthamine with molecular probes (water and benzene molecules) representative of its surroundings in the binding site of acetylcholinesterase (AChE) has been realized through pairwise potential calculations and quantum chemistry. This strategy allows a full and accurate exploration of the galanthamine potential energy surface of interaction. Significantly different results are obtained according to the distances of approach between the various molecular fragments and the conformation of the galanthamine N-methyl substituent. The geometry of the most relevant complexes has then been fully optimized through MPWB1K/6-31 + G(d,p) calculations, final energies being recomputed at the LMP2/aug-cc-pVTZ(-f) level of theory. Unexpectedly, galanthamine is found to interact mainly through its hydrogen-bond donor groups. Among those, CH groups in the vicinity of the ammonium group are prominent. The trends obtained provide rationales for the predilection of the equatorial orientation of the galanthamine N-methyl substituent for binding to AChE. The analysis of the interaction energies points out the independence of the various interaction sites and the rigid character of galanthamine. The comparison between the cluster calculations and the crystallographic observations in galanthamine-AChE co-crystals allows the validation of the theoretical methodology. In particular, the positions of several water molecules that appear strongly conserved in galanthamine-AChE co-crystals are predicted by the calculations. Moreover, the experimental position and orientation of the lateral chains of functionally important amino acid residues are in close agreement with the ones predicted theoretically. Our study provides relevant information for the rational drug design of galanthamine-based AChE inhibitors.
Pollard, Daniel A; Moses, Alan M; Iyer, Venky N; Eisen, Michael B
2006-01-01
Background: Molecular evolutionary studies of noncoding sequences rely on multiple alignments. Yet how multiple alignment accuracy varies across sequence types, tree topologies, divergences and tools, and how this variation impacts specific inferences, remains unclear. Results: Here we develop a molecular evolution simulation platform, CisEvolver, with models of background noncoding and transcription factor binding site evolution, and use simulated alignments to systematically examine multiple alignment accuracy and its impact on two key molecular evolutionary inferences: transcription factor binding site conservation and divergence estimation. We find that the accuracy of multiple alignments is determined almost exclusively by the pairwise divergence distance of the two most diverged species and that additional species have a negligible influence on alignment accuracy. Conserved transcription factor binding sites align better than surrounding noncoding DNA yet are often found to be misaligned at relatively short divergence distances, such that studies of binding site gain and loss could easily be confounded by alignment error. Divergence estimates from multiple alignments tend to be overestimated at short divergence distances but reach a tool-specific divergence at which they cease to increase, leading to underestimation at long divergences. Our most striking finding was that overall alignment accuracy, binding site alignment accuracy and divergence estimation accuracy vary greatly across branches in a tree and are most accurate for terminal branches connecting sister taxa and least accurate for internal branches connecting sub-alignments. Conclusion: Our results suggest that variation in alignment accuracy can lead to errors in molecular evolutionary inferences that could be construed as biological variation. These findings have implications for which species to choose for analyses, what kind of errors would be expected for a given set of species and how
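Divergence estimation of the kind benchmarked above is commonly done by correcting the observed mismatch fraction of an aligned pair with a substitution model. Under the Jukes-Cantor model, for example (shown here as general background, not as CisEvolver's specific method):

```python
import math

def jukes_cantor_distance(seq1, seq2):
    """Jukes-Cantor divergence d = -(3/4) ln(1 - 4p/3) from an aligned
    pair, where p is the observed fraction of mismatched (ungapped) sites.
    As p approaches 3/4 the estimate saturates, one reason divergence is
    systematically mis-estimated at long distances."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != '-' and b != '-']
    p = sum(a != b for a, b in pairs) / len(pairs)
    if p >= 0.75:
        return float('inf')   # beyond saturation; undefined under JC
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# One mismatch in ten aligned sites: p = 0.1
d = jukes_cantor_distance("ACGTACGTAC", "ACGTACGAAC")
```

Note that the correction always exceeds the raw mismatch fraction (d > p), since it accounts for multiple substitutions at the same site; alignment errors feed into this estimate through p, which is how the misalignments studied in the paper bias divergence.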
Fast Pairwise Structural RNA Alignments by Pruning of the Dynamical Programming Matrix
Havgaard, Jakob H; Torarinsson, Elfar; Gorodkin, Jan
2007-01-01
It has become clear that noncoding RNAs (ncRNA) play important roles in cells, and emerging studies indicate that there might be a large number of unknown ncRNAs in mammalian genomes. There exist computational methods that can be used to search for ncRNAs by comparing sequences from different genomes. One main problem with these methods is their computational complexity, and heuristics are therefore employed. Two heuristics are currently very popular: pre-folding and pre-aligning. However, these heuristics are not ideal, as pre-aligning is dependent on sequence similarity that may not be present and pre-folding ignores the comparative information. Here, pruning of the dynamical programming matrix is presented as an alternative novel heuristic constraint. All subalignments that do not exceed a length-dependent minimum score are discarded as the matrix is filled out, thus giving the advantage of providing the constraints dynamically. This has been included in a new implementation of the FOLDALIGN algorithm for pairwise local or global structural alignment of RNA sequences. It is shown that time and memory requirements are dramatically lowered while overall performance is maintained. Furthermore, a new divide and conquer method is introduced to limit the memory requirement during global alignment and backtrack of local alignment. All branch points in the computed RNA structure are found and used to divide the structure into smaller unbranched segments. Each segment is then realigned and backtracked in a normal fashion. Finally, the FOLDALIGN algorithm has also been updated with a better memory implementation and an improved energy model. With these improvements in the algorithm, the FOLDALIGN software package provides the molecular biologist with an efficient and user-friendly tool for searching for new ncRNAs. The software package is available for download at http://foldalign.ku.dk. PMID:17937495
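The pruning heuristic can be illustrated on a much simpler problem than structural RNA alignment: a sequence-only local alignment in which any cell whose score falls below a length-dependent minimum is discarded and never extended again. The scoring parameters and the linear threshold below are arbitrary illustrative choices, not FOLDALIGN's:

```python
def pruned_local_align(s1, s2, match=2, mismatch=-3, gap=-2, delta=0.5):
    """Smith-Waterman-style local alignment score with pruning: cells
    scoring below delta * (aligned length) are set to None (pruned) as
    the matrix is filled, so no alignment is extended through them."""
    rows, cols = len(s1) + 1, len(s2) + 1
    score = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        score[i][0] = 0
    for j in range(cols):
        score[0][j] = 0
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            cands = [0]  # local alignment may restart anywhere
            if score[i - 1][j - 1] is not None:
                cands.append(score[i - 1][j - 1] +
                             (match if s1[i - 1] == s2[j - 1] else mismatch))
            if score[i - 1][j] is not None:
                cands.append(score[i - 1][j] + gap)
            if score[i][j - 1] is not None:
                cands.append(score[i][j - 1] + gap)
            val = max(cands)
            if 0 < val < delta * min(i, j):
                score[i][j] = None    # pruned: below length-dependent minimum
            else:
                score[i][j] = val
                best = max(best, val)
    return best
```

In FOLDALIGN the same idea is applied to the vastly larger matrices of structural alignment, where discarding low-scoring subalignments dynamically is what makes the time and memory savings reported above possible.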
Pairwise additivity in the nuclear magnetic resonance interactions of atomic xenon.
Hanni, Matti; Lantto, Perttu; Vaara, Juha
2009-04-14
Nuclear magnetic resonance (NMR) of atomic (129/131)Xe is used as a versatile probe of the structure and dynamics of various host materials, due to the sensitivity of the Xe NMR parameters to intermolecular interactions. The principles governing this sensitivity can be investigated using the prototypic system of interacting Xe atoms. In the pairwise additive approximation (PAA), the binary NMR chemical shift, nuclear quadrupole coupling (NQC), and spin-rotation (SR) curves for the xenon dimer are utilized for fast and efficient evaluation of the corresponding NMR tensors in small xenon clusters Xe(n) (n = 2-12). If accurate, the preparametrized PAA enables the analysis of the NMR properties of xenon clusters, condensed xenon phases, and xenon gas without having to resort to electronic structure calculations of instantaneous configurations for n > 2. The binary parameters for Xe(2) at different internuclear distances were obtained at the nonrelativistic Hartree-Fock level of theory. Quantum-chemical (QC) calculations at the corresponding level were used to obtain the NMR parameters of the Xe(n) (n = 2-12) clusters at the equilibrium geometries. Comparison of PAA and QC data indicates that the direct use of the binary property curves of Xe(2) can be expected to be well-suited for the analysis of Xe NMR in the gaseous phase dominated by binary collisions. For use in condensed phases where many-body effects should be considered, effective binary property functions were fitted using the principal components of QC tensors from Xe(n) clusters. Particularly, the chemical shift in Xe(n) is strikingly well-described by the effective PAA. The coordination number Z of the Xe site is found to be the most important factor determining the chemical shift, with the largest shifts being found for high-symmetry sites with the largest Z. This is rationalized in terms of the density of virtual electronic states available for response to magnetic perturbations.
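The PAA itself is easy to state for a scalar property: the value at one Xe site is the sum of a preparametrized binary property curve sigma(r) over all pairs involving that site. The exponential curve below is a hypothetical stand-in, not the published Xe2 parametrization:

```python
import math

def binary_shift(r, a=100.0, r0=4.0, decay=1.5):
    """Hypothetical binary chemical-shift curve sigma(r), in ppm, standing
    in for a quantum-chemically computed Xe2 property curve."""
    return a * math.exp(-(r - r0) / decay)

def paa_shift(site, others):
    """Pairwise-additive approximation: sum of binary contributions from
    every neighbor of `site` (coordinates in the same length units as r0)."""
    return sum(binary_shift(math.dist(site, other)) for other in others)

# Linear symmetric trimer: shift at the central atom from its two neighbors.
shift = paa_shift((0.0, 0.0, 0.0),
                  [(4.0, 0.0, 0.0), (-4.0, 0.0, 0.0)])
```

Because each neighbor contributes independently, the shift grows with coordination number Z, consistent with the abstract's observation that high-symmetry, high-Z sites show the largest shifts; the paper's "effective" binary curves refit sigma(r) to cluster data precisely because true many-body effects break this simple additivity.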
Uncovering New Pathogen–Host Protein–Protein Interactions by Pairwise Structure Similarity
Cui, Tao; Li, Weihui; Liu, Lei; Huang, Qiaoyun; He, Zheng-Guo
2016-01-01
Pathogens usually evade and manipulate host-immune pathways through pathogen–host protein–protein interactions (PPIs) to avoid being killed by the host immune system. Therefore, uncovering pathogen–host PPIs is critical for determining the mechanisms underlying pathogen infection and survival. In this study, we developed a computational method, which we named pairwise structure similarity (PSS)-PPI, to predict pathogen–host PPIs. First, a high-quality and non-redundant structure–structure interaction (SSI) template library was constructed by exhaustively exploring heteromeric protein complex structures in the PDB database. New interactions were then predicted by searching for PSS with complex structures in the SSI template library. A quantitative score named the PSS score, which integrated structure similarity and residue–residue contact-coverage information, was used to describe the overall similarity of each predicted interaction with the corresponding SSI template. Notably, PSS-PPI yielded experimentally confirmed pathogen–host PPIs of human immunodeficiency virus type 1 (HIV-1) with performance close to that of in vitro high-throughput screening approaches. Finally, a pathogen–host PPI network of human pathogen Mycobacterium tuberculosis, the causative agent of tuberculosis, was constructed using PSS-PPI and refined using filtration steps based on cellular localization information. Analysis of the resulting network indicated that secreted proteins of the STPK, ESX-1, and PE/PPE family in M. tuberculosis targeted human proteins involved in immune response and phagocytosis. M. tuberculosis also targeted host factors known to regulate HIV replication. Taken together, our findings provide insights into the survival mechanisms of M. tuberculosis in human hosts, as well as co-infection of tuberculosis and HIV. With the rapid pace of three-dimensional protein structure discovery, the SSI template library we constructed and the PSS-PPI method we devised
Alfred Stadler, Franz Gross
2010-10-01
We provide a short overview of the Covariant Spectator Theory and its applications. The basic ideas are introduced through the example of a {phi}{sup 4}-type theory. High-precision models of the two-nucleon interaction are presented and the results of their use in calculations of properties of the two- and three-nucleon systems are discussed. A short summary of applications of this framework to other few-body systems is also presented.
Bryan, M.F.; Piepel, G.F.; Simpson, D.B.
1996-03-01
The high-level waste (HLW) vitrification plant at the Hanford Site was being designed to convert transuranic and high-level radioactive waste into borosilicate glass. Each batch of plant feed material must meet certain requirements related to plant performance, and the resulting glass must meet requirements imposed by the Waste Acceptance Product Specifications. Properties of a process batch and the resulting glass are largely determined by the composition of the feed material. Empirical models are being developed to estimate some property values from data on feed composition. Methods for checking and documenting compliance with feed and glass requirements must account for various types of uncertainties. This document focuses on the estimation, manipulation, and consequences of composition uncertainty, i.e., the uncertainty inherent in estimates of feed or glass composition. Three components of composition uncertainty will play a role in estimating and checking feed and glass properties: batch-to-batch variability, within-batch uncertainty, and analytical uncertainty. In this document, composition uncertainty and its components are treated in terms of variances and variance components for univariate situations, and covariance matrices and covariance components for multivariate situations. The importance of variance and covariance components stems from their crucial role in properly estimating uncertainty in values calculated from a set of observations on a process batch. Two general types of methods for estimating uncertainty are discussed: (1) methods based on data, and (2) methods based on knowledge, assumptions, and opinions about the vitrification process. Data-based methods for estimating variances and covariance matrices are well known. Several types of data-based methods exist for estimation of variance components; those based on the statistical method analysis of variance are discussed, as are the strengths and weaknesses of this approach.
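The analysis-of-variance approach to variance components can be illustrated with a minimal one-way sketch for a balanced design, separating batch-to-batch from within-batch variance (the analytical component would add a third nesting level). The function name and data layout are assumptions for illustration.

```python
import numpy as np

def variance_components(data):
    """Method-of-moments (one-way ANOVA) estimates of batch-to-batch and
    within-batch variance from a balanced design:
    data[i, j] = measurement j on batch i."""
    data = np.asarray(data, dtype=float)
    a, n = data.shape                         # a batches, n replicates each
    batch_means = data.mean(axis=1)
    grand_mean = data.mean()
    # Expected mean squares: E[MSB] = var_w + n*var_b, E[MSW] = var_w.
    msb = n * np.sum((batch_means - grand_mean) ** 2) / (a - 1)
    msw = np.sum((data - batch_means[:, None]) ** 2) / (a * (n - 1))
    var_within = msw
    var_between = max((msb - msw) / n, 0.0)   # truncate negative estimates
    return var_between, var_within

# Synthetic data with known components: var_b = 4, var_w = 1.
rng = np.random.default_rng(0)
batch_effects = rng.normal(0.0, 2.0, size=(50, 1))
data = batch_effects + rng.normal(0.0, 1.0, size=(50, 8))
vb, vw = variance_components(data)
```

The estimates recover the simulated components up to sampling noise, illustrating the document's point that mean squares must be combined, not read off directly, to obtain variance components.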
Adams, Dean C.; Felice, Ryan N.
2014-01-01
Morphological integration describes the degree to which sets of organismal traits covary with one another. Morphological covariation may be evaluated at various levels of biological organization, but when characterizing such patterns across species at the macroevolutionary level, phylogeny must be taken into account. We outline an analytical procedure based on the evolutionary covariance matrix that allows species-level patterns of morphological integration among structures defined by sets of traits to be evaluated while accounting for the phylogenetic relationships among taxa, providing a flexible and robust complement to related phylogenetic independent contrasts based approaches. Using computer simulations under a Brownian motion model we show that statistical tests based on the approach display appropriate Type I error rates and high statistical power for detecting known levels of integration, and these trends remain consistent for simulations using different numbers of species, and for simulations that differ in the number of trait dimensions. Thus, our procedure provides a useful means of testing hypotheses of morphological integration in a phylogenetic context. We illustrate the utility of this approach by evaluating evolutionary patterns of morphological integration in head shape for a lineage of Plethodon salamanders, and find significant integration between cranial shape and mandible shape. Finally, computer code written in R for implementing the procedure is provided. PMID:24728003
A subsurface add-on for standard atomic force microscopes.
Verbiest, G J; van der Zalm, D J; Oosterkamp, T H; Rost, M J
2015-03-01
The application of ultrasound in an Atomic Force Microscope (AFM) gives access to subsurface information. However, no commercial AFM exists that is equipped with this technique. The main problems are the electronic crosstalk in the AFM setup and the insufficiently strong excitation of the cantilever at ultrasonic (MHz) frequencies. In this paper, we describe the development of an add-on that provides a solution to these problems by using a special piezo element with a lowest resonance frequency of 2.5 MHz and by separating the electronic connection for this high-frequency piezo element from all other connections. In this sense, we offer researchers the possibility to perform subsurface measurements with their existing AFMs and hopefully also pave the way for the development of a commercial AFM that is capable of imaging subsurface features with nanometer resolution.
A subsurface add-on for standard atomic force microscopes
Verbiest, G. J.; Zalm, D. J. van der; Oosterkamp, T. H.; Rost, M. J.
2015-03-15
The application of ultrasound in an Atomic Force Microscope (AFM) gives access to subsurface information. However, no commercial AFM exists that is equipped with this technique. The main problems are the electronic crosstalk in the AFM setup and the insufficiently strong excitation of the cantilever at ultrasonic (MHz) frequencies. In this paper, we describe the development of an add-on that provides a solution to these problems by using a special piezo element with a lowest resonance frequency of 2.5 MHz and by separating the electronic connection for this high-frequency piezo element from all other connections. In this sense, we offer researchers the possibility to perform subsurface measurements with their existing AFMs and hopefully also pave the way for the development of a commercial AFM that is capable of imaging subsurface features with nanometer resolution.
Add-on gabapentin in the treatment of opiate withdrawal.
Martínez-Raga, José; Sabater, Ana; Perez-Galvez, Bartolome; Castellano, Miguel; Cervera, Gaspar
2004-05-01
Gabapentin is an antiepileptic drug shown to be effective in the treatment of pain disorders and appears to be useful as well for several psychiatric disorders, including bipolar disorder, anxiety disorders, alcohol withdrawal and cocaine dependence. Gabapentin, at a dose of 600 mg three times a day, was evaluated as an add-on medication to a standard detoxification regime in seven heroin dependent individuals undergoing outpatient opiate withdrawal treatment. All seven patients successfully completed opiate detoxification and commenced opiate antagonist treatment with naltrexone on day five of withdrawal treatment, as scheduled. No adverse event was noted. Gabapentin appeared to lead to a reduction in symptomatic medication and to have an overall beneficial effect on symptoms of heroin withdrawal.
Conditional Covariance Theory and Detect for Polytomous Items
ERIC Educational Resources Information Center
Zhang, Jinming
2007-01-01
This paper extends the theory of conditional covariances to polytomous items. It has been proven that under some mild conditions, commonly assumed in the analysis of response data, the conditional covariance of two items, dichotomously or polytomously scored, given an appropriately chosen composite is positive if, and only if, the two items…
Perturbative approach to covariance matrix of the matter power spectrum
Mohammed, Irshad; Seljak, Uros; Vlah, Zvonimir
2016-06-30
We evaluate the covariance matrix of the matter power spectrum using perturbation theory up to dominant terms at 1-loop order and compare it to numerical simulations. We decompose the covariance matrix into the disconnected (Gaussian) part, trispectrum from the modes outside the survey (beat coupling or super-sample variance), and trispectrum from the modes inside the survey, and show how the different components contribute to the overall covariance matrix. We find the agreement with the simulations is at a 10 per cent level up to k ∼ 1 h Mpc⁻¹. We show that all the connected components are dominated by the large-scale modes (k < 0.1 h Mpc⁻¹), regardless of the value of the wave vectors k, k′ of the covariance matrix, suggesting that one must be careful in applying the jackknife or bootstrap methods to the covariance matrix. We perform an eigenmode decomposition of the connected part of the covariance matrix, showing that at higher k it is dominated by a single eigenmode. The full covariance matrix can be approximated as the disconnected part only, with the connected part being treated as an external nuisance parameter with a known scale dependence, and a known prior on its variance for a given survey volume. Finally, we provide a prescription for how to evaluate the covariance matrix from small box simulations without the need to simulate large volumes.
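The disconnected (Gaussian) part of the power-spectrum covariance has the standard closed form Cov(kᵢ, kᵢ) = 2 P(kᵢ)² / Nᵢ, diagonal in the band powers, where Nᵢ is the number of independent Fourier modes in bin i. A minimal sketch with toy numbers:

```python
import numpy as np

def gaussian_covariance(pk, n_modes):
    """Disconnected (Gaussian) part of the power-spectrum covariance:
    diagonal with Cov(k_i, k_i) = 2 P(k_i)**2 / N_i, where N_i is the
    number of independent Fourier modes in bin i."""
    pk = np.asarray(pk, dtype=float)
    n_modes = np.asarray(n_modes, dtype=float)
    return np.diag(2.0 * pk**2 / n_modes)

pk = np.array([1e4, 5e3, 2e3])          # toy P(k) values (arbitrary units)
n_modes = np.array([10.0, 80.0, 640.0])  # mode counts grow as k**2 dk
cov = gaussian_covariance(pk, n_modes)
```

The connected (trispectrum) terms discussed in the abstract add off-diagonal structure on top of this diagonal piece.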
Alternative Multiple Imputation Inference for Mean and Covariance Structure Modeling
ERIC Educational Resources Information Center
Lee, Taehun; Cai, Li
2012-01-01
Model-based multiple imputation has become an indispensable method in the educational and behavioral sciences. Mean and covariance structure models are often fitted to multiply imputed data sets. However, the presence of multiple random imputations complicates model fit testing, which is an important aspect of mean and covariance structure…
Covariate-Based Assignment to Treatment Groups: Some Simulation Results.
ERIC Educational Resources Information Center
Jain, Ram B.; Hsu, Tse-Chi
1980-01-01
Six estimators of treatment effect when assignment to treatment groups is based on the covariate are compared in terms of empirical standard errors and percent relative bias. Results show that the simple analysis of covariance estimator is not always appropriate. (Author/GK)
Handling Correlations between Covariates and Random Slopes in Multilevel Models
ERIC Educational Resources Information Center
Bates, Michael David; Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders
2014-01-01
This article discusses estimation of multilevel/hierarchical linear models that include cluster-level random intercepts and random slopes. Viewing the models as structural, the random intercepts and slopes represent the effects of omitted cluster-level covariates that may be correlated with included covariates. The resulting correlations between…
Performance of internal covariance estimators for cosmic shear correlation functions
NASA Astrophysics Data System (ADS)
Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.
2016-03-01
Data re-sampling methods such as delete-one jackknife, bootstrap or the sub-sample covariance are common tools for estimating the covariance of large-scale structure probes. We investigate different implementations of these methods in the context of cosmic shear two-point statistics. Using lognormal simulations of the convergence field and the corresponding shear field we generate mock catalogues of a known and realistic covariance. For a survey of ∼5000 deg² we find that jackknife, if implemented by deleting sub-volumes of galaxies, provides the most reliable covariance estimates. Bootstrap, in the common implementation of drawing sub-volumes of galaxies, strongly overestimates the statistical uncertainties. In a forecast for the complete 5-yr Dark Energy Survey, we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the Ωm-σ8 plane as measured with internally estimated covariance matrices is on average ≳85 per cent of the volume derived from the true covariance matrix. The uncertainty on the parameter combination Σ₈ ∼ σ₈ Ω_m^0.5 derived from internally estimated covariances is ∼90 per cent of the true uncertainty.
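The delete-one jackknife estimator compared in this study can be sketched as follows, assuming the delete-one (per-sub-volume) re-measurements of the statistic have already been obtained:

```python
import numpy as np

def jackknife_covariance(delete_one_estimates):
    """Delete-one jackknife covariance: rows are the statistic
    re-measured with one sub-volume removed; the usual (n-1)/n
    scaling corrects for the strong overlap of the resamples."""
    x = np.asarray(delete_one_estimates, dtype=float)
    n = x.shape[0]
    d = x - x.mean(axis=0)
    return (n - 1) / n * d.T @ d

# Toy check against a hand-computed value.
x = np.array([[1.0], [2.0], [3.0]])
C = jackknife_covariance(x)
```

For the toy input the deviations are (-1, 0, 1), so C[0, 0] = (2/3)·2 = 4/3.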
Covariate Balance in Bayesian Propensity Score Approaches for Observational Studies
ERIC Educational Resources Information Center
Chen, Jianshen; Kaplan, David
2015-01-01
Bayesian alternatives to frequentist propensity score approaches have recently been proposed. However, few studies have investigated their covariate balancing properties. This article compares a recently developed two-step Bayesian propensity score approach to the frequentist approach with respect to covariate balance. The effects of different…
The Use of Covariation as a Principle of Causal Analysis
ERIC Educational Resources Information Center
Shultz, Thomas R.; Mendelson, Rosyln
1975-01-01
This study investigated the use of covariation as a principle of causal analysis in children 3-4, 6-7, and 9-11 years of age. The results indicated that children as young as 3 years were capable of using covariation information in their attributions of simple physical effects. (Author/CS)
Covariation Is a Poor Measure of Molecular Coevolution.
Talavera, David; Lovell, Simon C; Whelan, Simon
2015-09-01
Recent developments in the analysis of amino acid covariation are leading to breakthroughs in protein structure prediction, protein design, and prediction of the interactome. It is assumed that observed patterns of covariation are caused by molecular coevolution, where substitutions at one site affect the evolutionary forces acting at neighboring sites. Our theoretical and empirical results cast doubt on this assumption. We demonstrate that the strongest coevolutionary signal is a decrease in evolutionary rate and that unfeasibly long times are required to produce coordinated substitutions. We find that covarying substitutions are mostly found on different branches of the phylogenetic tree, indicating that they are independent events that may or may not be attributable to coevolution. These observations undermine the hypothesis that molecular coevolution is the primary cause of the covariation signal. In contrast, we find that the pairs of residues with the strongest covariation signal tend to have low evolutionary rates, and that it is this low rate that gives rise to the covariation signal. Slowly evolving residue pairs are disproportionately located in the protein's core, which explains covariation methods' ability to detect pairs of residues that are close in three dimensions. These observations lead us to propose the "coevolution paradox": The strength of coevolution required to cause coordinated changes means the evolutionary rate is so low that such changes are highly unlikely to occur. As modern covariation methods may lead to breakthroughs in structural genomics, it is critical to recognize their biases and limitations.
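A common covariation statistic for a pair of alignment columns is their mutual information; a minimal sketch (this is a generic covariation score used for context, not the specific method evaluated in the paper):

```python
from collections import Counter
import math

def mutual_information(col_a, col_b):
    """Covariation score for two alignment columns as the mutual
    information of the observed residue pairs."""
    n = len(col_a)
    pa = Counter(col_a)
    pb = Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    mi = 0.0
    for (a, b), c in pab.items():
        p_ab = c / n
        # p_ab * log(p_ab / (p_a * p_b)), with counts cancelled against n.
        mi += p_ab * math.log(p_ab * n * n / (pa[a] * pb[b]))
    return mi

# Perfectly covarying columns vs. independent columns (toy alignment).
strong = mutual_information("AAGG", "LLVV")
weak = mutual_information("AGAG", "LLVV")
```

The perfectly coordinated columns score ln 2 while the independent pair scores zero; the paper's point is that a high score like `strong` can arise from low evolutionary rates rather than genuine coevolution.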
Empirical Performance of Covariates in Education Observational Studies
ERIC Educational Resources Information Center
Wong, Vivian C.; Valentine, Jeffrey C.; Miller-Bains, Kate
2017-01-01
This article summarizes results from 12 empirical evaluations of observational methods in education contexts. We look at the performance of three common covariate-types in observational studies where the outcome is a standardized reading or math test. They are: pretest measures, local geographic matching, and rich covariate sets with a strong…
Perturbative approach to covariance matrix of the matter power spectrum
NASA Astrophysics Data System (ADS)
Mohammed, Irshad; Seljak, Uroš; Vlah, Zvonimir
2017-04-01
We evaluate the covariance matrix of the matter power spectrum using perturbation theory up to dominant terms at 1-loop order and compare it to numerical simulations. We decompose the covariance matrix into the disconnected (Gaussian) part, trispectrum from the modes outside the survey (supersample variance) and trispectrum from the modes inside the survey, and show how the different components contribute to the overall covariance matrix. We find the agreement with the simulations is at a 10 per cent level up to k ∼ 1 h Mpc-1. We show that all the connected components are dominated by the large-scale modes (k < 0.1 h Mpc-1), regardless of the value of the wave vectors k, k΄ of the covariance matrix, suggesting that one must be careful in applying the jackknife or bootstrap methods to the covariance matrix. We perform an eigenmode decomposition of the connected part of the covariance matrix, showing that at higher k, it is dominated by a single eigenmode. The full covariance matrix can be approximated as the disconnected part only, with the connected part being treated as an external nuisance parameter with a known scale dependence, and a known prior on its variance for a given survey volume. Finally, we provide a prescription for how to evaluate the covariance matrix from small box simulations without the need to simulate large volumes.
Choosing covariates in the analysis of clinical trials.
Beach, M L; Meier, P
1989-12-01
Much of the literature on clinical trials emphasizes the importance of adjusting the results for any covariates (baseline variables) for which randomization fails to produce nearly exact balance, but the literature is very nearly devoid of recipes for assessing the consequences of such adjustments. Several years ago, Paul Canner presented an approximate expression for the effect of a covariate adjustment, and he considered its use in the selection of covariates. With the aid of Canner's equation, using both formal analysis and simulation, the impact of covariate adjustment is further explored. Unless tight control over the analysis plans is established in advance, covariate adjustment can lead to seriously misleading inferences. Illustrations from the clinical trials literature are provided.
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2012-01-01
The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
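The factor-plus-thresholding idea can be sketched as follows. This is a simplified illustration: it extracts the factor part by principal components and applies a universal threshold to the residual covariance, rather than the entry-adaptive threshold of Cai and Liu (2011); the constant `c` is a hypothetical tuning parameter.

```python
import numpy as np

def factor_covariance(X, n_factors, c=0.5):
    """Sketch of a factor-based covariance estimator: principal-components
    low-rank part plus a hard-thresholded sparse residual covariance."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    top = np.argsort(vals)[::-1][:n_factors]
    low_rank = (vecs[:, top] * vals[top]) @ vecs[:, top].T
    R = S - low_rank                       # idiosyncratic (residual) part
    tau = c * np.sqrt(np.log(p) / n)       # simplified universal threshold
    R_thr = np.where(np.abs(R) >= tau, R, 0.0)
    np.fill_diagonal(R_thr, np.diag(R))    # never threshold the diagonal
    return low_rank + R_thr

# Data with a genuine 2-factor structure plus idiosyncratic noise.
rng = np.random.default_rng(1)
F = rng.normal(size=(300, 2))
B = rng.normal(size=(2, 20))
X = F @ B + rng.normal(scale=0.5, size=(300, 20))
Sigma = factor_covariance(X, n_factors=2)
```

Thresholding the residual, rather than the full matrix, is what lets the estimator retain cross-sectional correlation left over after the common factors are removed.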
UDU(T) covariance factorization for Kalman filtering
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1980-01-01
There has been strong motivation to produce numerically stable formulations of the Kalman filter algorithms because it has long been known that the original discrete-time Kalman formulas are numerically unreliable. Numerical instability can be avoided by propagating certain factors of the estimate error covariance matrix rather than the covariance matrix itself. This paper documents filter algorithms that correspond to the covariance factorization P = UDU(T), where U is a unit upper triangular matrix and D is diagonal. Emphasis is on computational efficiency and numerical stability, since these properties are of key importance in real-time filter applications. The history of square-root and U-D covariance filters is reviewed. Simple examples are given to illustrate the numerical inadequacy of the Kalman covariance filter algorithms; these examples show how factorization techniques can give improved computational reliability.
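The U-D factorization itself can be computed with the standard backward recursion; a minimal sketch of P = UDU(T) with U unit upper triangular and D diagonal:

```python
import numpy as np

def udu_factorize(P):
    """Factor a symmetric positive-definite P as P = U D U^T,
    with U unit upper triangular and D diagonal."""
    P = np.array(P, dtype=float)
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        # D[j] = P[j,j] - sum_{k>j} D[k] U[j,k]^2
        d[j] = P[j, j] - np.sum(d[j+1:] * U[j, j+1:] ** 2)
        for i in range(j):
            # U[i,j] = (P[i,j] - sum_{k>j} D[k] U[i,k] U[j,k]) / D[j]
            U[i, j] = (P[i, j] - np.sum(d[j+1:] * U[i, j+1:] * U[j, j+1:])) / d[j]
    return U, np.diag(d)

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
P = A @ A.T + 5 * np.eye(5)   # symmetric positive definite
U, D = udu_factorize(P)
```

Propagating U and D instead of P avoids the loss of symmetry and positive definiteness that makes the conventional covariance recursion numerically unreliable.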
The Source for ADD/ADHD: Attention Deficit Disorder and Attention Deficit/Hyperactivity Disorder.
ERIC Educational Resources Information Center
Richard, Gail J.; Russell, Joy L.
This book is intended for professionals who are responsible for designing and implementing educational programs for children with attention deficit disorders and attention deficit/hyperactivity disorder (ADD/ADHD). Chapters address: (1) myths and realities about ADD/ADHD; (2) definitions, disorders associated with ADD/ADHD, and federal educational…
24 CFR 983.206 - HAP contract amendments (to add or substitute contract units).
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false HAP contract amendments (to add or... Contract § 983.206 HAP contract amendments (to add or substitute contract units). (a) Amendment to... substitute unit and must determine the reasonable rent for such unit. (b) Amendment to add contract units....
12 CFR 502.60 - When will OTS adjust, add, waive, or eliminate a fee?
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 6 2014-01-01 2012-01-01 true When will OTS adjust, add, waive, or eliminate a... ASSESSMENTS AND FEES Fees § 502.60 When will OTS adjust, add, waive, or eliminate a fee? Under unusual circumstances, the Director may deem it necessary or appropriate to adjust, add, waive, or eliminate a fee....
24 CFR 983.206 - HAP contract amendments (to add or substitute contract units).
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false HAP contract amendments (to add or... Contract § 983.206 HAP contract amendments (to add or substitute contract units). (a) Amendment to... substitute unit and must determine the reasonable rent for such unit. (b) Amendment to add contract units....
24 CFR 983.206 - HAP contract amendments (to add or substitute contract units).
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 4 2014-04-01 2014-04-01 false HAP contract amendments (to add or... Contract § 983.206 HAP contract amendments (to add or substitute contract units). (a) Amendment to... substitute unit and must determine the reasonable rent for such unit. (b) Amendment to add contract units....
24 CFR 983.206 - HAP contract amendments (to add or substitute contract units).
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 4 2013-04-01 2013-04-01 false HAP contract amendments (to add or... Contract § 983.206 HAP contract amendments (to add or substitute contract units). (a) Amendment to... substitute unit and must determine the reasonable rent for such unit. (b) Amendment to add contract units....
5 CFR 330.105 - Instructions on how to add a vacancy announcement to USAJOBS.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Instructions on how to add a vacancy... Service § 330.105 Instructions on how to add a vacancy announcement to USAJOBS. An agency can find the instructions to add a vacancy announcement to USAJOBS on OPM's Web site at http://www.usajobs.gov....
12 CFR 502.60 - When will OTS adjust, add, waive, or eliminate a fee?
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 6 2012-01-01 2012-01-01 false When will OTS adjust, add, waive, or eliminate... TREASURY ASSESSMENTS AND FEES Fees § 502.60 When will OTS adjust, add, waive, or eliminate a fee? Under unusual circumstances, the Director may deem it necessary or appropriate to adjust, add, waive,...
24 CFR 983.206 - HAP contract amendments (to add or substitute contract units).
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 4 2012-04-01 2012-04-01 false HAP contract amendments (to add or... Contract § 983.206 HAP contract amendments (to add or substitute contract units). (a) Amendment to... substitute unit and must determine the reasonable rent for such unit. (b) Amendment to add contract units....
5 CFR 330.105 - Instructions on how to add a vacancy announcement to USAJOBS.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Instructions on how to add a vacancy... Service § 330.105 Instructions on how to add a vacancy announcement to USAJOBS. An agency can find the instructions to add a vacancy announcement to USAJOBS on OPM's Web site at http://www.usajobs.gov....
12 CFR 502.60 - When will OTS adjust, add, waive, or eliminate a fee?
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 5 2010-01-01 2010-01-01 false When will OTS adjust, add, waive, or eliminate... TREASURY ASSESSMENTS AND FEES Fees § 502.60 When will OTS adjust, add, waive, or eliminate a fee? Under unusual circumstances, the Director may deem it necessary or appropriate to adjust, add, waive,...
12 CFR 502.60 - When will OTS adjust, add, waive, or eliminate a fee?
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 5 2011-01-01 2011-01-01 false When will OTS adjust, add, waive, or eliminate... TREASURY ASSESSMENTS AND FEES Fees § 502.60 When will OTS adjust, add, waive, or eliminate a fee? Under unusual circumstances, the Director may deem it necessary or appropriate to adjust, add, waive,...
5 CFR 330.105 - Instructions on how to add a vacancy announcement to USAJOBS.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Instructions on how to add a vacancy... Service § 330.105 Instructions on how to add a vacancy announcement to USAJOBS. An agency can find the instructions to add a vacancy announcement to USAJOBS on OPM's Web site at http://www.usajobs.gov....
12 CFR 502.60 - When will OTS adjust, add, waive, or eliminate a fee?
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 6 2013-01-01 2012-01-01 true When will OTS adjust, add, waive, or eliminate a... ASSESSMENTS AND FEES Fees § 502.60 When will OTS adjust, add, waive, or eliminate a fee? Under unusual circumstances, the Director may deem it necessary or appropriate to adjust, add, waive, or eliminate a fee....
ERIC Educational Resources Information Center
Eaton, Shevawn; Wyland, Sharon
1996-01-01
Examines the research and theory about attention deficit disorder (ADD) in college students and discusses how learning assistance professionals can better assist college students with ADD. Appended in this article are strategies for faculty/learning center professionals in accommodating students with ADD and a list of suggested readings. Contains…
Central subspace dimensionality reduction using covariance operators.
Kim, Minyoung; Pavlovic, Vladimir
2011-04-01
We consider the task of dimensionality reduction informed by real-valued multivariate labels. The problem is often treated as Dimensionality Reduction for Regression (DRR), whose goal is to find a low-dimensional representation, the central subspace, of the input data that preserves the statistical correlation with the targets. A class of DRR methods exploits the notion of inverse regression (IR) to discover central subspaces. Whereas most existing IR techniques rely on explicit output space slicing, we propose a novel method called the Covariance Operator Inverse Regression (COIR) that generalizes IR to nonlinear input/output spaces without explicit target slicing. COIR's unique properties make DRR applicable to problem domains with high-dimensional output data corrupted by potentially significant amounts of noise. Unlike recent kernel dimensionality reduction methods that employ iterative nonconvex optimization, COIR yields a closed-form solution. We also establish the link between COIR, other DRR techniques, and popular supervised dimensionality reduction methods, including canonical correlation analysis and linear discriminant analysis. We then extend COIR to semi-supervised settings where many of the input points lack their labels. We demonstrate the benefits of COIR on several important regression problems in both fully supervised and semi-supervised settings.
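COIR itself is kernel-based and avoids explicit slicing, but the inverse-regression idea it generalizes is conveniently illustrated by classical sliced inverse regression (SIR): slice on the response, average the whitened inputs within each slice, and take leading eigenvectors of the covariance of those slice means.

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_dirs=1):
    """Sliced inverse regression sketch of central-subspace estimation.
    Returns unit-norm direction(s) spanning the estimated subspace."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # Whiten the predictors.
    cov = Xc.T @ Xc / n
    w_vals, w_vecs = np.linalg.eigh(cov)
    W = w_vecs @ np.diag(w_vals ** -0.5) @ w_vecs.T
    Z = Xc @ W
    # Slice on y and form the weighted covariance of slice means.
    order = np.argsort(y)
    M = np.zeros((p, p))
    for s in np.array_split(order, n_slices):
        m = Z[s].mean(axis=0)
        M += (len(s) / n) * np.outer(m, m)
    vals, vecs = np.linalg.eigh(M)
    dirs = W @ vecs[:, np.argsort(vals)[::-1][:n_dirs]]
    return dirs / np.linalg.norm(dirs, axis=0)

# y depends on X only through its first coordinate; SIR should recover e1.
rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 4))
y = X[:, 0] + 0.1 * rng.normal(size=2000)
d = sir_directions(X, y, n_slices=10)
```

COIR replaces the explicit slicing step with covariance operators on a kernel-induced output space, which is what lets it handle high-dimensional, noisy targets.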
Generalized Covariant Gyrokinetic Dynamics of Magnetoplasmas
Cremaschini, C.; Tessarotto, M.; Nicolini, P.; Beklemishev, A.
2008-12-31
A basic prerequisite for the investigation of relativistic astrophysical magnetoplasmas, occurring typically in the vicinity of massive stellar objects (black holes, neutron stars, active galactic nuclei, etc.), is the accurate description of single-particle covariant dynamics, based on gyrokinetic theory (Beklemishev et al., 1999-2005). Provided radiation-reaction effects are negligible, this is usually based on the assumption that both the space-time metric and the EM fields (in particular the magnetic field) are suitably prescribed and are considered independent of single-particle dynamics, while allowing for the possible presence of gravitational/EM perturbations driven by plasma collective interactions which may naturally arise in such systems. The purpose of this work is the formulation of a generalized gyrokinetic theory based on the synchronous variational principle recently pointed out (Tessarotto et al., 2007), which makes it possible to satisfy exactly the physical realizability condition for the four-velocity. The theory developed here includes the treatment of nonlinear perturbations (gravitational and/or EM) characterized locally, i.e., in the rest frame of a test particle, by short wavelength and high frequency. A basic feature of the approach is to ensure the validity of the theory both for large and vanishing parallel electric fields. It is shown that the correct treatment of EM perturbations occurring in the presence of an intense background magnetic field generally implies the appearance of appropriate four-velocity corrections, which are essential for the description of single-particle gyrokinetic dynamics.
Holographic bound in covariant loop quantum gravity
NASA Astrophysics Data System (ADS)
Tamaki, Takashi
2016-07-01
We investigate puncture statistics based on the covariant area spectrum in loop quantum gravity. First, we consider Maxwell-Boltzmann statistics with a Gibbs factor for punctures. We establish formulas which relate physical quantities such as horizon area to the parameter characterizing holographic degrees of freedom. We also perform numerical calculations and obtain consistency with these formulas. These results tell us that the holographic bound is satisfied in the large area limit and the correction term of the entropy-area law can be proportional to the logarithm of the horizon area. Second, we also consider Bose-Einstein statistics and show that the above formulas are also useful in this case. By applying the formulas, we can understand intrinsic features of the Bose-Einstein condensate, which corresponds to the case when the horizon area almost entirely consists of punctures in the ground state. When this phenomenon occurs, the area is approximately constant with respect to the parameter characterizing the temperature. When this phenomenon breaks down, the area shows a rapid increase, which suggests a phase transition from a quantum to a classical area.
General covariance from the quantum renormalization group
NASA Astrophysics Data System (ADS)
Shyam, Vasudev
2017-03-01
The quantum renormalization group (QRG) is a realization of holography through a coarse-graining prescription that maps the beta functions of a quantum field theory thought to live on the "boundary" of some space to holographic actions in the "bulk" of this space. A consistency condition will be proposed that translates into general covariance of the gravitational theory in the D +1 dimensional bulk. This emerges from the application of the QRG on a planar matrix field theory living on the D dimensional boundary. This will be a particular form of the Wess-Zumino consistency condition that the generating functional of the boundary theory needs to satisfy. In the bulk, this condition forces the Poisson bracket algebra of the scalar and vector constraints of the dual gravitational theory to close in a very specific manner, namely, the manner in which the corresponding constraints of general relativity do. A number of features of the gravitational theory will be fixed as a consequence of this form of the Poisson bracket algebra. In particular, it will require the metric beta function to be of the gradient form.
Frame Indifferent (Truly Covariant) Formulation of Electrodynamics
NASA Astrophysics Data System (ADS)
Christov, Christo
2010-10-01
The electromagnetic field is considered from the point of view of continuum mechanics. It is shown that Maxwell's equations are strict mathematical corollaries of the equations of motion of an elastic incompressible liquid. If the concept of frame-indifference (material invariance) is applied to the model of an elastic liquid, then the partial time derivatives have to be replaced by the convective time derivative in the momentum equations, and by the Oldroyd upper-convected derivative in the constitutive relation. The convective/convected terms involve the velocity at a point of the field, and as a result, when deriving the Maxwell form of the equations, one arrives at equations which contain both the terms of Maxwell's equations and the so-called laws of motional EMF: Faraday's, Oersted-Ampere's, and the Lorentz-force law. Thus a unification of electromagnetism is achieved. Since the new model is frame indifferent, it is truly covariant in the sense that the governing system is invariant when changing to a coordinate frame that can accelerate or even deform in time.
CMB lens sample covariance and consistency relations
NASA Astrophysics Data System (ADS)
Motloch, Pavel; Hu, Wayne; Benoit-Lévy, Aurélien
2017-02-01
Gravitational lensing information from the two and higher point statistics of the cosmic microwave background (CMB) temperature and polarization fields are intrinsically correlated because they are lensed by the same realization of structure between last scattering and observation. Using an analytic model for lens sample covariance, we show that there is one mode, separately measurable in the lensed CMB power spectra and lensing reconstruction, that carries most of this correlation. Once these measurements become lens sample variance dominated, this mode should provide a useful consistency check between the observables that is largely free of sampling and cosmological parameter errors. Violations of consistency could indicate systematic errors in the data and lens reconstruction or new physics at last scattering, any of which could bias cosmological inferences and delensing for gravitational waves. A second mode provides a weaker consistency check for a spatially flat universe. Our analysis isolates the additional information supplied by lensing in a model-independent manner but is also useful for understanding and forecasting CMB cosmological parameter errors in the extended Λ cold dark matter parameter space of dark energy, curvature, and massive neutrinos. We introduce and test a simple but accurate forecasting technique for this purpose that neither double counts lensing information nor neglects lensing in the observables.
A New Approach for Nuclear Data Covariance and Sensitivity Generation
Leal, L.C.; Larson, N.M.; Derrien, H.; Kawano, T.; Chadwick, M.B.
2005-05-24
Covariance data are required to correctly assess uncertainties in design parameters in nuclear applications. The error estimation of calculated quantities relies on the nuclear data uncertainty information available in the basic nuclear data libraries, such as the U.S. Evaluated Nuclear Data File, ENDF/B. The uncertainty files in the ENDF/B library are obtained from the analysis of experimental data and are stored as variance and covariance data. The computer code SAMMY is used in the analysis of the experimental data in the resolved and unresolved resonance energy regions. The data fitting of cross sections is based on generalized least-squares formalism (Bayes' theory) together with the resonance formalism described by R-matrix theory. Two approaches are used in SAMMY for the generation of resonance-parameter covariance data. In the evaluation process SAMMY generates a set of resonance parameters that fit the data, and, in addition, it also provides the resonance-parameter covariances. For existing resonance-parameter evaluations where no resonance-parameter covariance data are available, the alternative is to use an approach called the 'retroactive' resonance-parameter covariance generation. In the high-energy region the methodology for generating covariance data consists of least-squares fitting and model parameter adjustment. The least-squares fitting method calculates covariances directly from experimental data. The parameter adjustment method employs a nuclear model calculation such as the optical model and the Hauser-Feshbach model, and estimates a covariance for the nuclear model parameters. In this paper we describe the application of the retroactive method and the parameter adjustment method to generate covariance data for the gadolinium isotopes.
Correcting eddy-covariance flux underestimates over a grassland.
Twine, T. E.; Kustas, W. P.; Norman, J. M.; Cook, D. R.; Houser, P. R.; Meyers, T. P.; Prueger, J. H.; Starks, P. J.; Wesely, M. L.; Environmental Research; Univ. of Wisconsin at Madison; DOE; National Aeronautics and Space Administration; National Oceanic and Atmospheric Administration
2000-06-08
Independent measurements of the major energy balance flux components are not often consistent with the principle of conservation of energy. This is referred to as a lack of closure of the surface energy balance. Most results in the literature have shown the sum of sensible and latent heat fluxes measured by eddy covariance to be less than the difference between net radiation and soil heat fluxes. This under-measurement of sensible and latent heat fluxes by eddy-covariance instruments has occurred in numerous field experiments and among many different manufacturers of instruments. Four eddy-covariance systems consisting of the same models of instruments were set up side-by-side during the Southern Great Plains 1997 Hydrology Experiment and all systems under-measured fluxes by similar amounts. One of these eddy-covariance systems was collocated with three other types of eddy-covariance systems at different sites; all of these systems under-measured the sensible and latent-heat fluxes. The net radiometers and soil heat flux plates used in conjunction with the eddy-covariance systems were calibrated independently and measurements of net radiation and soil heat flux showed little scatter for various sites. The 10% absolute uncertainty in available energy measurements was considerably smaller than the systematic closure problem in the surface energy budget, which varied from 10 to 30%. When available-energy measurement errors are known and modest, eddy-covariance measurements of sensible and latent heat fluxes should be adjusted for closure. Although the preferred method of energy balance closure is to maintain the Bowen-ratio, the method for obtaining closure appears to be less important than assuring that eddy-covariance measurements are consistent with conservation of energy. Based on numerous measurements over a sorghum canopy, carbon dioxide fluxes, which are measured by eddy covariance, are underestimated by the same factor as eddy covariance evaporation
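The Bowen-ratio closure adjustment favored above can be sketched in a few lines; the flux values below are illustrative, not measurements from the experiment:

```python
# Bowen-ratio energy-balance closure: scale the measured sensible (H) and
# latent (LE) heat fluxes by a common factor so that H + LE matches the
# available energy Rn - G, which preserves the Bowen ratio H/LE.
def close_energy_balance(H, LE, Rn, G):
    factor = (Rn - G) / (H + LE)
    return H * factor, LE * factor

# hypothetical fluxes in W m^-2, with (H + LE) about 14% below Rn - G
H, LE, Rn, G = 120.0, 240.0, 450.0, 30.0
H_adj, LE_adj = close_energy_balance(H, LE, Rn, G)   # 140.0, 280.0
```

After the adjustment the flux sum equals the available energy (420 W m^-2) while H/LE is unchanged.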
Monaco, James P.; Tomaszewski, John E.; Feldman, Michael D.; Hagemann, Ian; Moradi, Mehdi; Mousavi, Parvin; Boag, Alexander; Davidson, Chris; Abolmaesumi, Purang; Madabhushi, Anant
2010-01-01
In this paper we present a high-throughput system for detecting regions of carcinoma of the prostate (CaP) in HSs from radical prostatectomies (RPs) using probabilistic pairwise Markov models (PPMMs), a novel type of Markov random field (MRF). At diagnostic resolution a digitized HS can contain 80K×70K pixels — far too many for current automated Gleason grading algorithms to process. However, grading can be separated into two distinct steps: 1) detecting cancerous regions and 2) then grading these regions. The detection step does not require diagnostic resolution and can be performed much more quickly. Thus, we introduce a CaP detection system capable of analyzing an entire digitized whole-mount HS (2×1.75 cm2) in under three minutes (on a desktop computer) while achieving a CaP detection sensitivity and specificity of 0.87 and 0.90, respectively. We obtain this high-throughput by tailoring the system to analyze the HSs at low resolution (8 µm per pixel). This motivates the following algorithm: Step 1) glands are segmented, Step 2) the segmented glands are classified as malignant or benign, and Step 3) the malignant glands are consolidated into continuous regions. The classification of individual glands leverages two features: gland size and the tendency for proximate glands to share the same class. The latter feature describes a spatial dependency which we model using a Markov prior. Typically, Markov priors are expressed as the product of potential functions. Unfortunately, potential functions are mathematical abstractions, and constructing priors through their selection becomes an ad hoc procedure, resulting in simplistic models such as the Potts. Addressing this problem, we introduce PPMMs which formulate priors in terms of probability density functions, allowing the creation of more sophisticated models. To demonstrate the efficacy of our CaP detection system and assess the advantages of using a PPMM prior instead of the Potts, we alternately incorporate
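As a rough illustration of steps 2-3 and of the simpler Potts-style smoothing prior that PPMMs improve upon, the sketch below labels a synthetic 1-D chain of glands from a size feature plus a neighbor-agreement term, refined by iterated conditional modes (ICM); all sizes, class means, and the smoothing weight are made up:

```python
import numpy as np

# Toy Potts-style MRF labeling on a chain of glands: unary cost from the
# size feature (benign ~1.0, malignant ~2.0), pairwise cost penalizing
# neighbors with different labels, minimized greedily by ICM sweeps.
size = np.array([1.0, 1.1, 0.9, 1.8, 1.0, 1.2, 0.8, 1.1,
                 2.0, 1.9, 1.3, 2.1, 2.2, 1.8, 2.0, 1.9])
true = np.array([0] * 8 + [1] * 8)              # 0 = benign, 1 = malignant
n = len(size)

unary = np.stack([(size - 1.0) ** 2, (size - 2.0) ** 2], axis=1)
init = unary.argmin(axis=1)                     # size-only labels: 2 mistakes
labels = init.copy()
beta_s = 1.0                                    # neighbor-agreement strength

for _ in range(3):                              # ICM sweeps
    for i in range(n):
        cost = unary[i].copy()
        for j in (i - 1, i + 1):                # chain neighbors
            if 0 <= j < n:
                cost = cost + beta_s * (np.arange(2) != labels[j])
        labels[i] = cost.argmin()

accuracy = (labels == true).mean()              # isolated errors smoothed away
```

The two isolated unary mistakes (an unusually large benign gland and a small malignant one) are corrected by the neighbor-agreement term; the PPMM formulation replaces the hand-tuned potential with a probability-density prior.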
ERIC Educational Resources Information Center
Quinn, Patricia O., Ed.
This handbook contains practical information and advice to help students with Attention Deficit Disorder (ADD) transition from high school to college. Part 1 provides an introduction to ADD and includes a questionnaire identifying the characteristics of a person with ADD. Part 2 describes life with ADD. It explains how ADD can affect high school…
Recurrence Analysis of Eddy Covariance Fluxes
NASA Astrophysics Data System (ADS)
Lange, Holger; Flach, Milan; Foken, Thomas; Hauhs, Michael
2015-04-01
The eddy covariance (EC) method is one key method to quantify fluxes in biogeochemical cycles in general, and carbon and energy transport across the vegetation-atmosphere boundary layer in particular. EC data from the worldwide net of flux towers (Fluxnet) have also been used to validate biogeochemical models. The high resolution data are usually obtained at 20 Hz sampling rate but are affected by missing values and other restrictions. In this contribution, we investigate the nonlinear dynamics of EC fluxes using Recurrence Analysis (RA). High resolution data from the site DE-Bay (Waldstein-Weidenbrunnen) and fluxes calculated at half-hourly resolution from eight locations (part of the La Thuile dataset) provide a set of very long time series to analyze. After careful quality assessment and Fluxnet standard gapfilling pretreatment, we calculate properties and indicators of the recurrent structure based both on Recurrence Plots as well as Recurrence Networks. Time series of RA measures obtained from windows moving along the time axis are presented. Their interpretation is guided by five questions: (1) Is RA able to discern periods where the (atmospheric) conditions are particularly suitable to obtain reliable EC fluxes? (2) Is RA capable of detecting dynamical transitions (different behavior) beyond those obvious from visual inspection? (3) Does RA contribute to an understanding of the nonlinear synchronization between EC fluxes and atmospheric parameters, which is crucial both for improving carbon flux models and for reliable interpolation of gaps? (4) Is RA able to recommend an optimal time resolution for measuring EC data and for analyzing EC fluxes? (5) Is it possible to detect non-trivial periodicities with a global RA? We will demonstrate that the answers to all five questions are affirmative, and that RA provides insights into EC dynamics not easily obtained otherwise.
Inflation in general covariant theory of gravity
NASA Astrophysics Data System (ADS)
Huang, Yongqing; Wang, Anzhong; Wu, Qiang
2012-10-01
In this paper, we study inflation in the framework of the nonrelativistic general covariant theory of the Hořava-Lifshitz gravity with the projectability condition and an arbitrary coupling constant λ. We find that the Friedmann-Robertson-Walker (FRW) universe is necessarily flat in such a setup. We work out explicitly the linear perturbations of the flat FRW universe without specifying a particular gauge, and find that the perturbations are different from those obtained in general relativity, because of the presence of the high-order spatial derivative terms. Applying the general formulas to a single scalar field, we show that in the sub-horizon regions, the metric and scalar field are tightly coupled and have the same oscillating frequencies. In the super-horizon regions, the perturbations become adiabatic, and the comoving curvature perturbation is constant. We also calculate the power spectra and indices of both the scalar and tensor perturbations, and express them explicitly in terms of the slow roll parameters and the coupling constants of the high-order spatial derivative terms. In particular, we find that the perturbations, of both scalar and tensor, are almost scale-invariant, and, with some reasonable assumptions on the coupling coefficients, the spectrum index of the tensor perturbation is the same as that given in the minimum scenario in general relativity (GR), whereas the index for scalar perturbation in general depends on λ and is different from the standard GR value. The ratio of the scalar and tensor power spectra depends on the high-order spatial derivative terms, and can be different from that of GR significantly.
Schwinger mechanism in linear covariant gauges
NASA Astrophysics Data System (ADS)
Aguilar, A. C.; Binosi, D.; Papavassiliou, J.
2017-02-01
In this work we explore the applicability of a special gluon mass generating mechanism in the context of the linear covariant gauges. In particular, the implementation of the Schwinger mechanism in pure Yang-Mills theories hinges crucially on the inclusion of massless bound-state excitations in the fundamental nonperturbative vertices of the theory. The dynamical formation of such excitations is controlled by a homogeneous linear Bethe-Salpeter equation, whose nontrivial solutions have been studied only in the Landau gauge. Here, the form of this integral equation is derived for general values of the gauge-fixing parameter, under a number of simplifying assumptions that reduce the degree of technical complexity. The kernel of this equation consists of fully dressed gluon propagators, for which recent lattice data are used as input, and of three-gluon vertices dressed by a single form factor, which is modeled by means of certain physically motivated Ansätze. The gauge-dependent terms contributing to this kernel impose considerable restrictions on the infrared behavior of the vertex form factor; specifically, only infrared finite Ansätze are compatible with the existence of nontrivial solutions. When such Ansätze are employed, the numerical study of the integral equation reveals a continuity in the type of solutions as one varies the gauge-fixing parameter, indicating a smooth departure from the Landau gauge. Instead, the logarithmically divergent form factor displaying the characteristic "zero crossing," while perfectly consistent in the Landau gauge, has to undergo a dramatic qualitative transformation away from it, in order to yield acceptable solutions. The possible implications of these results are briefly discussed.
Quantification of Covariance in Tropical Cyclone Activity across Teleconnected Basins
NASA Astrophysics Data System (ADS)
Tolwinski-Ward, S. E.; Wang, D.
2015-12-01
Rigorous statistical quantification of natural hazard covariance across regions has important implications for risk management, and is also of fundamental scientific interest. We present a multivariate Bayesian Poisson regression model for inferring the covariance in tropical cyclone (TC) counts across multiple ocean basins and across Saffir-Simpson intensity categories. Such covariability results from the influence of large-scale modes of climate variability on local environments that can alternately suppress or enhance TC genesis and intensification, and our model also simultaneously quantifies the covariance of TC counts with various climatic modes in order to deduce the source of inter-basin TC covariability. The model explicitly treats the time-dependent uncertainty in observed maximum sustained wind data, and hence the nominal intensity category of each TC. Differences in annual TC counts as measured by different agencies are also formally addressed. The probabilistic output of the model can be probed for probabilistic answers to such questions as: - Does the relationship between different categories of TCs differ statistically by basin? - Which climatic predictors have significant relationships with TC activity in each basin? - Are the relationships between counts in different basins conditionally independent given the climatic predictors, or are there other factors at play affecting inter-basin covariability? - How can a portfolio of insured property be optimized across space to minimize risk? Although we present results of our model applied to TCs, the framework is generalizable to covariance estimation between multivariate counts of natural hazards across regions and/or across peril types.
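One building block of such a model, a log-linear Poisson regression of annual counts on a climate-mode index, can be sketched as below (single basin, maximum likelihood rather than the paper's Bayesian hierarchy; the index and coefficients are synthetic assumptions):

```python
import numpy as np

# Poisson regression: counts ~ Poisson(exp(b0 + b1 * index)), fit by
# Newton-Raphson on the log-likelihood.
rng = np.random.default_rng(11)
n = 400
index = rng.standard_normal(n)                  # hypothetical climate-mode index
counts = rng.poisson(np.exp(1.0 + 0.5 * index)) # true b0 = 1.0, b1 = 0.5

X = np.column_stack([np.ones(n), index])
beta = np.zeros(2)
for _ in range(25):                             # Newton / Fisher-scoring steps
    mu = np.exp(X @ beta)
    grad = X.T @ (counts - mu)                  # score vector
    hess = X.T @ (X * mu[:, None])              # Fisher information
    beta = beta + np.linalg.solve(hess, grad)
```

The fitted beta recovers the generating coefficients; the full model in the abstract additionally couples several such regressions across basins and propagates observational count uncertainty.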
Action recognition from video using feature covariance matrices.
Guo, Kai; Ishwar, Prakash; Konrad, Janusz
2013-06-01
We propose a general framework for fast and accurate recognition of actions in video using empirical covariance matrices of features. A dense set of spatio-temporal feature vectors are computed from video to provide a localized description of the action, and subsequently aggregated in an empirical covariance matrix to compactly represent the action. Two supervised learning methods for action recognition are developed using feature covariance matrices. Common to both methods is the transformation of the classification problem in the closed convex cone of covariance matrices into an equivalent problem in the vector space of symmetric matrices via the matrix logarithm. The first method applies nearest-neighbor classification using a suitable Riemannian metric for covariance matrices. The second method approximates the logarithm of a query covariance matrix by a sparse linear combination of the logarithms of training covariance matrices. The action label is then determined from the sparse coefficients. Both methods achieve state-of-the-art classification performance on several datasets, and are robust to action variability, viewpoint changes, and low object resolution. The proposed framework is conceptually simple and has low storage and computational requirements making it attractive for real-time implementation.
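The matrix-logarithm mapping common to both classifiers can be sketched as follows; the three-dimensional feature stream is a synthetic stand-in for real spatio-temporal video features:

```python
import numpy as np

# Log-Euclidean treatment of covariance descriptors: map each symmetric
# positive-definite (SPD) matrix into the vector space of symmetric
# matrices via the matrix logarithm, then compare with the Frobenius norm.
def spd_logm(C):
    w, V = np.linalg.eigh(C)                    # eigendecomposition of SPD matrix
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(C1, C2):
    return np.linalg.norm(spd_logm(C1) - spd_logm(C2), "fro")

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))               # toy feature vectors
C_a = np.cov(X.T)                               # covariance descriptor, "action a"
C_b = np.cov((2.0 * X).T)                       # same action, rescaled features

d_aa = log_euclidean_dist(C_a, C_a)             # 0: identical descriptors
d_ab = log_euclidean_dist(C_a, C_b)             # log(4) * sqrt(3) for a 3x3 rescale
```

Because C_b = 4 C_a, the log-domain difference is exactly log(4) times the identity, so the distance has a closed form; nearest-neighbor or sparse-coding classification then operates on these log-mapped matrices.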
Does aquaculture add resilience to the global food system?
Troell, Max; Naylor, Rosamond L; Metian, Marc; Beveridge, Malcolm; Tyedmers, Peter H; Folke, Carl; Arrow, Kenneth J; Barrett, Scott; Crépin, Anne-Sophie; Ehrlich, Paul R; Gren, Asa; Kautsky, Nils; Levin, Simon A; Nyborg, Karine; Österblom, Henrik; Polasky, Stephen; Scheffer, Marten; Walker, Brian H; Xepapadeas, Tasos; de Zeeuw, Aart
2014-09-16
Aquaculture is the fastest growing food sector and continues to expand alongside terrestrial crop and livestock production. Using portfolio theory as a conceptual framework, we explore how current interconnections between the aquaculture, crop, livestock, and fisheries sectors act as an impediment to, or an opportunity for, enhanced resilience in the global food system given increased resource scarcity and climate change. Aquaculture can potentially enhance resilience through improved resource use efficiencies and increased diversification of farmed species, locales of production, and feeding strategies. However, aquaculture's reliance on terrestrial crops and wild fish for feeds, its dependence on freshwater and land for culture sites, and its broad array of environmental impacts diminishes its ability to add resilience. Feeds for livestock and farmed fish that are fed rely largely on the same crops, although the fraction destined for aquaculture is presently small (∼4%). As demand for high-value fed aquaculture products grows, competition for these crops will also rise, as will the demand for wild fish as feed inputs. Many of these crops and forage fish are also consumed directly by humans and provide essential nutrition for low-income households. Their rising use in aquafeeds has the potential to increase price levels and volatility, worsening food insecurity among the most vulnerable populations. Although the diversification of global food production systems that includes aquaculture offers promise for enhanced resilience, such promise will not be realized if government policies fail to provide adequate incentives for resource efficiency, equity, and environmental protection.
Climatic Drivers of Past Antarctic Ice Sheet Evolution Add Nonlinearly
NASA Astrophysics Data System (ADS)
Tigchelaar, M.; Timmermann, A.; Pollard, D.; Friedrich, T.; Heinemann, M.
2015-12-01
The Antarctic ice sheet has varied substantially in shape and volume in the past, with evidence for strong regional differences in evolution history. Recent observations of change in the Antarctic environment indicate that different regions respond differently to ongoing changes in global climate -- over the West Antarctic Ice Sheet strong increases in sub-shelf melt rates indicate a sensitivity to changes in ocean temperature and circulation, while in East Antarctica the mass balance is increasingly positive due to an increase in accumulation in response to rising temperatures. Modeling the long term evolution of the Antarctic ice sheet can help address questions about its regional sensitivity to external forcing. We have conducted experiments with an established ice sheet model over the last eight glacial cycles using spatially and temporally varying climate forcing from an EMIC. These simulations indicate a glacial-interglacial amplitude of ~11m SLE. Using a series of sensitivity experiments we address the dominant climatic forcing of this evolution. While sea level changes are the main driver of grounding line movement, they alone are not sufficient to explain the full glacial amplitude. Local insolation changes contribute to the initiation of terminations, while accumulation and sub-shelf melt changes feed back positively and negatively respectively onto the ice sheet evolution. This implies that climatic drivers add nonlinearly and the full spectrum of climate forcing needs to be considered when evaluating the sensitivity of the Antarctic ice sheet to past and future climate change.
Hawking radiation, covariant boundary conditions, and vacuum states
Banerjee, Rabin; Kulkarni, Shailesh
2009-04-15
The basic characteristics of the covariant chiral current
Covariance Generation Using CONRAD and SAMMY Computer Codes
Leal, Luiz C; Derrien, Herve; De Saint Jean, C; Noguere, G; Ruggieri, J M
2009-01-01
Covariance data in the resolved resonance region can be generated using the computer codes CONRAD and SAMMY. These codes use formalisms derived from the R-matrix methodology together with the generalized least squares technique to obtain resonance parameters. In addition, resonance-parameter covariances are also obtained. Results of covariance calculations for a simple case of the s-wave resonance parameters of 48Ti in the energy region 10^-5 eV to 300 keV are compared. The retroactive approach included in CONRAD and SAMMY was used.
Reverse attenuation in interaction terms due to covariate measurement error.
Muff, Stefanie; Keller, Lukas F
2015-11-01
Covariate measurement error may bias regression coefficient estimates in generalized linear models. The influence of measurement error on interaction parameters has, however, only rarely been investigated in depth, and where it has, attenuation effects were reported. In this paper, we show that reverse attenuation of interaction effects may also emerge, namely when heteroscedastic measurement error or sampling variances of a mismeasured covariate are present, which are not unrealistic scenarios in practice. Theoretical findings are illustrated with simulations. A Bayesian approach employing integrated nested Laplace approximations is suggested to model the heteroscedastic measurement error and covariate variances, and an application shows that the method is able to reveal approximately correct parameter estimates.
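For contrast with the reverse attenuation studied here, the classical (homoscedastic) attenuation effect is easy to simulate; all parameters below are arbitrary:

```python
import numpy as np

# Classical measurement error w = x + u attenuates the regression slope by
# the reliability ratio var(x) / (var(x) + var(u)); here 1 / (1 + 1) = 0.5.
rng = np.random.default_rng(42)
n = 200_000
x = rng.standard_normal(n)                      # true covariate, variance 1
w = x + rng.standard_normal(n)                  # mismeasured covariate, error variance 1
y = 2.0 * x + 0.5 * rng.standard_normal(n)      # true slope 2.0

def ols_slope(a, b):
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

slope_true = ols_slope(x, y)                    # ~2.0
slope_attenuated = ols_slope(w, y)              # ~1.0 = 2.0 * reliability
```

The paper's point is that for interaction terms under heteroscedastic error this bias can also run in the opposite direction, which this homoscedastic sketch does not reproduce.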
Covariance matrices and applications to the field of nuclear data
Smith, D.L.
1981-11-01
A student's introduction to covariance error analysis and least-squares evaluation of data is provided. It is shown that the basic formulas used in error propagation can be derived from a consideration of the geometry of curvilinear coordinates. Procedures for deriving covariances for scalar and vector functions of several variables are presented. Proper methods for reporting experimental errors and for deriving covariance matrices from these errors are indicated. The generalized least-squares method for evaluating experimental data is described. Finally, the use of least-squares techniques in data fitting applications is discussed. Specific examples of the various procedures are presented to clarify the concepts.
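The central error-propagation identity such an introduction builds toward, var(f) ≈ J C Jᵀ for a function of correlated inputs, can be checked numerically; the numbers below are arbitrary:

```python
import numpy as np

# First-order propagation of a covariance matrix through f(x, y) = x * y:
# var(f) ~ J C J^T with J the gradient (y, x) evaluated at the point.
x, y = 3.0, 4.0
C = np.array([[0.04, 0.01],
              [0.01, 0.09]])                    # covariance of (x, y)

J = np.array([y, x])                            # gradient of x * y at (3, 4)
var_f = J @ C @ J                               # full propagated variance: 1.69
var_nocov = J @ np.diag(np.diag(C)) @ J         # ignoring the covariance: 1.45
```

Dropping the off-diagonal term understates the variance here by 2 * x * y * cov(x, y) = 0.24, which is the practical reason covariance matrices, not just per-variable errors, must be reported.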
The importance of covariance in nuclear data uncertainty propagation studies
Benstead, J.
2012-07-01
A study has been undertaken to investigate what proportion of the uncertainty propagated through plutonium critical assembly calculations is due to the covariances between the fission cross section in different neutron energy groups. The uncertainties on k_eff calculated show that the presence of covariances between the cross section in different neutron energy groups accounts for approximately 27-37% of the propagated uncertainty due to the plutonium fission cross section. This study also confirmed the validity of employing the sandwich equation, with associated sensitivity and covariance data, instead of a Monte Carlo sampling approach to calculating uncertainties for linearly varying systems. (authors)
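The sandwich equation itself is a one-liner; with toy three-group numbers (not evaluated nuclear data) one can also read off the share of the propagated variance contributed by inter-group covariances:

```python
import numpy as np

# Sandwich rule: var(k_eff) ~ S^T C S, with S the group-wise sensitivities
# of k_eff to the cross section and C the group-wise relative covariance.
S = np.array([0.2, 0.5, 0.3])                   # toy sensitivity profile
C = np.array([[0.010, 0.004, 0.002],
              [0.004, 0.020, 0.006],
              [0.002, 0.006, 0.015]])           # toy covariance matrix

var_full = S @ C @ S                            # with inter-group covariances
var_diag = S @ np.diag(np.diag(C)) @ S          # group variances only
frac_from_covariances = 1.0 - var_diag / var_full
```

With these invented numbers the off-diagonal (covariance) terms contribute roughly 30% of the propagated variance, illustrating why ignoring them misstates the k_eff uncertainty.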
Visualization of species pairwise associations: a case study of surrogacy in bird assemblages
Lane, Peter W; Lindenmayer, David B; Barton, Philip S; Blanchard, Wade; Westgate, Martin J
2014-01-01
Quantifying and visualizing species associations are important to many areas of ecology and conservation biology. Species networks are one way to analyze species associations, with a growing number of applications such as food webs, nesting webs, plant–animal mutualisms, and interlinked extinctions. We present a new method for assessing and visualizing patterns of co-occurrence of species. The method depicts interactions and associations in an analogous way with existing network diagrams for studying pollination and trophic interactions, but adds the assessment of sign, strength, and direction of the associations. This provides a distinct advantage over existing methods of quantifying and visualizing co-occurrence. We demonstrate the utility of our new approach by showing differences in associations among woodland bird species found in different habitats and by illustrating the way these can be interpreted in terms of underlying ecological mechanisms. Our new method is computationally feasible for large assemblages and provides readily interpretable effects with standard errors. It has wide applications for quantifying species associations within ecological communities, examining questions about particular species that occur with others, and how their associations can determine the structure and composition of communities. PMID:25473480
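A minimal notion of signed pairwise association, a phi (presence/absence) correlation across sites, can be sketched as follows; the species, sites, and habitat preferences are synthetic:

```python
import numpy as np

# Sign and strength of species co-occurrence as the correlation of
# presence/absence vectors across survey sites.
rng = np.random.default_rng(7)
n_sites = 500
habitat = rng.random(n_sites) < 0.5             # binary habitat covariate
sp_a = rng.random(n_sites) < np.where(habitat, 0.8, 0.2)  # prefers the habitat
sp_b = rng.random(n_sites) < np.where(habitat, 0.7, 0.3)  # shares the preference
sp_c = rng.random(n_sites) < np.where(habitat, 0.2, 0.8)  # opposite preference

def phi(x, y):
    return np.corrcoef(x.astype(float), y.astype(float))[0, 1]

assoc_ab = phi(sp_a, sp_b)                      # positive association
assoc_ac = phi(sp_a, sp_c)                      # negative association
```

The paper's method additionally attaches standard errors and renders the signed associations as a network diagram; this sketch only shows the sign/strength ingredient.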
NASA Astrophysics Data System (ADS)
Ucisik, Melek N.; Dashti, Danial S.; Faver, John C.; Merz, Kenneth M.
2011-08-01
An energy expansion (binding energy decomposition into n-body interaction terms for n ≥ 2) to express the receptor-ligand binding energy for the fragmented HIV II protease-Indinavir system is described to address the role of cooperativity in ligand binding. The outcome of this energy expansion is compared to the total receptor-ligand binding energy at the Hartree-Fock, density functional theory, and semiempirical levels of theory. We find that the sum of the pairwise interaction energies approximates the total binding energy to ~82% for HF and to >95% for both the M06-L density functional and PM6-DH2 semiempirical method. The contribution of the three-body interactions amounts to 18.7%, 3.8%, and 1.4% for HF, M06-L, and PM6-DH2, respectively. We find that the expansion can be safely truncated after n = 3. That is, the contribution of the interactions involving more than three parties to the total binding energy of Indinavir to the HIV II protease receptor is negligible. Overall, we find that the two-body terms represent a good approximation to the total binding energy of the system, which points to pairwise additivity in the present case. This basic principle of pairwise additivity is utilized in fragment-based drug design approaches and our results support its continued use. The present results can also aid in the validation of non-bonded terms contained within common force fields and in the correction of systematic errors in physics-based score functions.
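The many-body expansion behind this decomposition is simple to state. A toy version with invented energies (arbitrary units, not the paper's values) illustrates how the pairwise share of the total binding energy is computed:

```python
# Toy many-body expansion: total interaction energy decomposed into
# 2-body and 3-body terms, mirroring the finding that pairwise terms
# dominate. All energies here are invented for illustration.
pair_terms = {(0, 1): -4.0, (0, 2): -2.5, (1, 2): -1.0}   # 2-body energies
triple_terms = {(0, 1, 2): -0.3}                          # 3-body correction

e2 = sum(pair_terms.values())
e3 = sum(triple_terms.values())
total = e2 + e3

pairwise_share = e2 / total
print(round(pairwise_share, 3))   # -> 0.962
```

In the paper's terms, a `pairwise_share` above 0.95 (as found for M06-L and PM6-DH2) justifies truncating the expansion after the two-body terms.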
Koizumi, Itsuro; Yamamoto, Shoichiro; Maekawa, Koji
2006-10-01
Isolation by distance is usually tested by the correlation between genetic and geographic distances for all pairwise combinations of populations. However, this method can be significantly biased by only a few highly diverged populations and loses information about individual populations. To detect outlier populations and investigate the relative strengths of gene flow and genetic drift for each population, we propose a decomposed pairwise regression analysis. This analysis was applied to the well-described one-dimensional stepping-stone system of stream-dwelling Dolly Varden charr (Salvelinus malma). When genetic and geographic distances were plotted for all pairs of 17 tributary populations, the correlation was significant but weak (r(2) = 0.184). Seven outlier populations were determined based on the systematic bias of the regression residuals, followed by Akaike's information criterion. The best model, with 10 populations included, showed a strong pattern of isolation by distance (r(2) = 0.758), suggesting equilibrium between gene flow and genetic drift in these populations. Each outlier population was also analysed by plotting pairwise genetic and geographic distances against the 10 nonoutlier populations, and categorized into one of three patterns: strong genetic drift, genetic drift with limited gene flow, or a high level of gene flow. These classifications were generally consistent with a priori predictions for each population (physical barrier, population size, anthropogenic impacts). Combining the genetic analysis with field observations, Dolly Varden in this river appeared to form a mainland-island or source-sink metapopulation structure. The generality of the method should make it useful for many types of spatial genetic analyses.
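A stripped-down version of the decomposed pairwise regression idea can be sketched with synthetic data. The population layout, distances, and the residual-bias criterion below are invented simplifications (the paper additionally uses AIC-based model selection):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D stepping-stone system: regress pairwise genetic distance on
# geographic distance, then flag populations whose pairwise residuals
# are systematically biased (all values invented).
n = 8
coords = np.arange(n, dtype=float)
geo = np.abs(coords[:, None] - coords[None, :])            # geographic distances
gen = 0.02 * geo + rng.normal(0, 0.01, (n, n))             # isolation by distance + noise
gen = (gen + gen.T) / 2.0
gen[4, :] += 0.3                                           # population 4: strong drift
gen[:, 4] += 0.3
np.fill_diagonal(gen, 0.0)

iu = np.triu_indices(n, k=1)
x, y = geo[iu], gen[iu]
slope, intercept = np.polyfit(x, y, 1)                     # overall IBD regression
resid = np.zeros((n, n))
resid[iu] = y - (slope * x + intercept)
resid = resid + resid.T

# Mean residual per population: a large positive bias flags an outlier.
bias = resid.sum(axis=1) / (n - 1)
outlier = int(np.argmax(bias))
print(outlier)   # -> 4
```

The flagged population can then be regressed separately against the non-outliers, as the abstract describes, to classify it as drift-dominated or gene-flow-dominated.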
NASA Astrophysics Data System (ADS)
Chella, Federico; Pizzella, Vittorio; Zappasodi, Filippo; Nolte, Guido; Marzetti, Laura
2016-05-01
Brain cognitive functions arise through the coordinated activity of several brain regions, which actually form complex dynamical systems operating at multiple frequencies. These systems often consist of interacting subsystems, whose characterization is of importance for a complete understanding of the brain interaction processes. To address this issue, we present a technique, namely the bispectral pairwise interacting source analysis (biPISA), for analyzing systems of cross-frequency interacting brain sources when multichannel electroencephalographic (EEG) or magnetoencephalographic (MEG) data are available. Specifically, the biPISA makes it possible to identify one or many subsystems of cross-frequency interacting sources by decomposing the antisymmetric components of the cross-bispectra between EEG or MEG signals, based on the assumption that interactions are pairwise. Thanks to the properties of the antisymmetric components of the cross-bispectra, biPISA is also robust to spurious interactions arising from mixing artifacts, i.e., volume conduction or field spread, which always affect EEG or MEG functional connectivity estimates. This method is an extension of the pairwise interacting source analysis (PISA), which was originally introduced for investigating interactions at the same frequency, to the study of cross-frequency interactions. The effectiveness of this approach is demonstrated in simulations for up to three interacting source pairs and for real MEG recordings of spontaneous brain activity. Simulations show that the performances of biPISA in estimating the phase difference between the interacting sources are affected by the increasing level of noise rather than by the number of the interacting subsystems. The analysis of real MEG data reveals an interaction between two pairs of sources of central mu and beta rhythms, localizing in the proximity of the left and right central sulci.
AFCI-2.0 Neutron Cross Section Covariance Library
Herman, M.; Oblozinsky, P.; Mattoon, C.M.; Pigni, M.; Hoblit, S.; Mughabghab, S.F.; Sonzogni, A.; Talou, P.; Chadwick, M.B.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Yount, P.G.
2011-03-01
The cross section covariance library has been under development by a BNL-LANL collaborative effort over the last three years. The project builds on two covariance libraries developed earlier, with considerable input from BNL and LANL. In 2006, an international effort under WPEC Subgroup 26 produced the BOLNA covariance library by putting together data, often preliminary, from various sources for the most important materials for nuclear reactor technology. This was followed in 2007 by a collaborative effort of four US national laboratories to produce covariances, often of modest quality (hence the name 'low-fidelity'), for a virtually complete set of materials included in ENDF/B-VII.0. The present project focuses on covariances of 4-5 major reaction channels for 110 materials of importance for power reactors. The work started under the Global Nuclear Energy Partnership (GNEP) in 2008, which changed to the Advanced Fuel Cycle Initiative (AFCI) in 2009. With the 2011 release, the name has changed to the Covariance Multigroup Matrix for Advanced Reactor Applications (COMMARA), version 2.0. The primary purpose of the library is to provide covariances for the AFCI data adjustment project, which focuses on the needs of fast advanced burner reactors. BNL's responsibility was defined as developing covariances for structural materials and fission products, management of the library, and coordination of the work; LANL's responsibility was defined as covariances for light nuclei and actinides. The COMMARA-2.0 covariance library has been developed by the BNL-LANL collaboration for Advanced Fuel Cycle Initiative applications over a period of three years, 2008-2010. It contains covariances for 110 materials relevant to fast reactor R&D. The library is to be used together with the central values of ENDF/B-VII.0, the latest official release of US evaluated neutron cross section files. The COMMARA-2.0 library contains neutron cross section covariances for 12 light nuclei (coolants and moderators), 78 structural
True covariance simulation of the EUVE update filter
NASA Technical Reports Server (NTRS)
Bar-Itzhack, I. Y.; Harman, R. R.
1990-01-01
This paper presents a covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultra Violet Explorer (EUVE) spacecraft. The linearized dynamics and measurement equations of the error states are used in formulating the 'truth model' describing the real behavior of the systems involved. The 'design model' used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced order model is not the correct covariance of the EKF estimation error. A 'true covariance analysis' has to be carried out in order to evaluate the correct accuracy of the OBC generated estimates. The results of such analysis are presented which indicate both the performance and the sensitivity of the OBC EKF.
Optimal Estimation and Rank Detection for Sparse Spiked Covariance Matrices.
Cai, Tony; Ma, Zongming; Wu, Yihong
2015-04-01
This paper considers a sparse spiked covariance matrix model in the high-dimensional setting and studies the minimax estimation of the covariance matrix and the principal subspace, as well as minimax rank detection. The optimal rate of convergence for estimating the spiked covariance matrix under the spectral norm is established, which requires significantly different techniques from those for estimating other structured covariance matrices such as bandable or sparse covariance matrices. We also establish the minimax rate under the spectral norm for estimating the principal subspace, the primary object of interest in principal component analysis. In addition, the optimal rate for the rank detection boundary is obtained. This result also resolves the gap in a recent paper by Berthet and Rigollet [2], where the special case of rank one is considered.
True covariance simulation of the EUVE update filter
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, R. R.
1989-01-01
A covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultra Violet Explorer (EUVE) spacecraft is presented. The linearized dynamics and measurement equations of the error states are derived which constitute the truth model describing the real behavior of the systems involved. The design model used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced order model is not the correct covariance of the EKF estimation error. A true covariance analysis has to be carried out in order to evaluate the correct accuracy of the OBC generated estimates. The results of such analysis are presented which indicate both the performance and the sensitivity of the OBC EKF.
Progress of Covariance Evaluation at the China Nuclear Data Center
Xu, R.; Zhang, Q.; Zhang, Y.; Liu, T.; Ge, Z.; Lu, H.; Sun, Z.; Yu, B.; Tang, G.
2015-01-15
Covariance evaluations at the China Nuclear Data Center focus on the cross sections of structural materials and actinides in the fast neutron energy range. In addition to the well-known least-squares approach, a method based on the analysis of the sources of experimental uncertainties is introduced to generate a covariance matrix for a particular reaction for which multiple measurements are available. The scheme of the covariance evaluation flow is presented, and an example of n+90Zr is given to illustrate the whole procedure. It is shown that the accuracy of measurements can be properly incorporated into the covariance and the long-standing small-uncertainty problem can be avoided.
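The uncertainty-source analysis mentioned here typically assembles an experimental covariance from uncorrelated statistical components plus fully correlated systematic components (e.g. a shared normalization). A minimal sketch with invented numbers:

```python
import numpy as np

# Experimental covariance built from uncertainty sources (all numbers
# invented): uncorrelated statistical errors plus a normalization
# component fully correlated across the data points.
values = np.array([1.00, 1.20, 0.90])            # measured cross sections (b)
sigma_stat = np.array([0.020, 0.030, 0.025])     # per-point statistical sd
sigma_norm = 0.04                                # shared 4% normalization sd

cov = np.diag(sigma_stat**2) + (sigma_norm**2) * np.outer(values, values)
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
print(np.round(corr, 2))
```

The off-diagonal correlations induced by the shared normalization are what prevent the "small uncertainty problem" that arises when correlated measurements are naively treated as independent.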
Covariance Matrix Evaluations for Independent Mass Fission Yields
Terranova, N.; Serot, O.; Archier, P.; De Saint Jean, C.
2015-01-15
Recent needs for more accurate fission product yields include covariance information, to allow improved uncertainty estimation of the parameters used by design codes. The aim of this work is to investigate the possibility of generating more reliable and complete uncertainty information on independent mass fission yields. Mass yield covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describes the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least-squares method through the CONRAD code. Preliminary results on the mass yield variance-covariance matrix will be presented and discussed on physical grounds for the 235U(n_th, f) and 239Pu(n_th, f) reactions.
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique are presented for a simple two-observer problem with measurement error only.
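The contrast the abstract draws can be illustrated numerically: when the only error source is measurement noise that matches the assumed model, an empirical covariance built from realized batch errors should agree with the formal weighted least-squares covariance. The toy problem below (design matrix, noise level, and trial count all invented) checks exactly that:

```python
import numpy as np

rng = np.random.default_rng(1)

# Compare the theoretical batch WLS covariance (H^T W H)^{-1} with an
# empirical covariance built from realized estimation errors.
H = np.column_stack([np.ones(50), np.linspace(0.0, 1.0, 50)])   # design matrix
x_true = np.array([2.0, -1.0])
sigma = 0.5
W = np.eye(50) / sigma**2

P_theory = np.linalg.inv(H.T @ W @ H)        # formal state error covariance

errs = []
for _ in range(2000):
    y = H @ x_true + rng.normal(0.0, sigma, 50)
    x_hat = P_theory @ H.T @ W @ y           # batch WLS solution
    errs.append(x_hat - x_true)
P_emp = np.cov(np.array(errs).T)             # empirical state error covariance

print(np.allclose(P_theory, P_emp, rtol=0.2))
```

In the paper's setting the empirical matrix is computed per batch solution rather than over Monte Carlo trials; the point of the sketch is only that the empirical construction recovers the formal covariance when the error model is correct, and would deviate from it when unmodeled error sources are present.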
Cosmic shear covariance: the log-normal approximation
NASA Astrophysics Data System (ADS)
Hilbert, S.; Hartlap, J.; Schneider, P.
2011-12-01
Context. Accurate estimates of the errors on the cosmological parameters inferred from cosmic shear surveys require accurate estimates of the covariance of the cosmic shear correlation functions. Aims: We seek approximations to the cosmic shear covariance that are as easy to use as the common approximations based on normal (Gaussian) statistics, but yield more accurate covariance matrices and parameter errors. Methods: We derive expressions for the cosmic shear covariance under the assumption that the underlying convergence field follows log-normal statistics. We also derive a simplified version of this log-normal approximation by only retaining the most important terms beyond normal statistics. We use numerical simulations of weak lensing to study how well the normal, log-normal, and simplified log-normal approximations as well as empirical corrections to the normal approximation proposed in the literature reproduce shear covariances for cosmic shear surveys. We also investigate the resulting confidence regions for cosmological parameters inferred from such surveys. Results: We find that the normal approximation substantially underestimates the cosmic shear covariances and the inferred parameter confidence regions, in particular for surveys with small fields of view and large galaxy densities, but also for very wide surveys. In contrast, the log-normal approximation yields more realistic covariances and confidence regions, but also requires evaluating slightly more complicated expressions. However, the simplified log-normal approximation, although as simple as the normal approximation, yields confidence regions that are almost as accurate as those obtained from the log-normal approximation. The empirical corrections to the normal approximation do not yield more accurate covariances and confidence regions than the (simplified) log-normal approximation. Moreover, they fail to produce positive-semidefinite data covariance matrices in certain cases, rendering them
Santos, Andrés; López de Haro, Mariano; Fiumara, Giacomo; Saija, Franz
2015-06-14
The relevance of neglecting three- and four-body interactions in the coarse-grained version of the Asakura-Oosawa model is examined. A mapping between the first few virial coefficients of the binary nonadditive hard-sphere mixture representative of this model and those arising from the coarse-grained (pairwise) depletion potential approximation allows for a quantitative evaluation of the effect of such interactions. This turns out to be especially important for large size ratios and large reservoir polymer packing fractions.
Wang, Hailong; Cao, Leiming; Jing, Jietai
2017-01-01
We theoretically characterize the performance of the pairwise correlations (PCs) from multiple quantum correlated beams based on the cascaded four-wave mixing (FWM) processes. The presence of the PCs with quantum correlation in these systems can be verified by calculating the degree of intensity difference squeezing for any pair of all the output fields. The quantum correlation characteristics of all the PCs under different cascaded schemes are also discussed in detail and the repulsion effect between PCs in these cascaded FWM processes is theoretically predicted. Our results open the way for the classification and application of quantum states generated from the cascaded FWM processes. PMID:28071759
Explicitly covariant dispersion relations and self-induced transparency
NASA Astrophysics Data System (ADS)
Mahajan, S. M.; Asenjo, Felipe A.
2017-02-01
Explicitly covariant dispersion relations for a variety of plasma waves in unmagnetized and magnetized plasmas are derived in a systematic manner from a fully covariant plasma formulation. One needs to invoke relatively little known invariant combinations constructed from the ambient electromagnetic fields and the wave vector to accomplish the program. The implication of this work applied to the self-induced transparency effect is discussed. Some problems arising from the inconsistent use of relativity are pointed out.
New capabilities for processing covariance data in resonance region
Wiarda, D.; Dunn, M. E.; Greene, N. M.; Larson, N. M.; Leal, L. C.
2006-07-01
The AMPX [1] code system is a modular system of FORTRAN computer programs that relate to nuclear analysis, with a primary emphasis on tasks associated with the production and use of multigroup and continuous-energy cross sections. The module PUFF-III within this code system handles the creation of multigroup covariance data from ENDF information. The resulting covariances are saved in COVERX format [2]. We recently expanded the capabilities of PUFF-III to include full handling of covariance data in the resonance region (resolved as well as unresolved). The new program handles all resonance covariance formats in File 32 except for the long-range covariance subsections. The new program has been named PUFF-IV. To our knowledge, PUFF-IV is the first processing code that can address both the new ENDF format for resolved resonance parameters and the new ENDF 'compact' covariance format. The existing code base was rewritten in Fortran 90 to allow for a more modular design. Results are identical between the new and old versions within rounding errors, where applicable. Automatic test cases have been added to ensure that consistent results are generated across computer systems. (authors)
Adjusting power for a baseline covariate in linear models
Glueck, Deborah H.; Muller, Keith E.
2009-01-01
The analysis of covariance provides a common approach to adjusting for a baseline covariate in medical research. With Gaussian errors, adding random covariates does not change either the theory or the computations of general linear model data analysis. However, adding random covariates does change the theory and computation of power analysis. Many data analysts fail to fully account for this complication in planning a study. We present our results in five parts. (i) A review of published results helps document the importance of the problem and the limitations of available methods. (ii) A taxonomy for general linear multivariate models and hypotheses allows identifying a particular problem. (iii) We describe how random covariates introduce the need to consider quantiles and conditional values of power. (iv) We provide new exact and approximate methods for power analysis of a range of multivariate models with a Gaussian baseline covariate, for both small and large samples. The new results apply to the Hotelling-Lawley test and the four tests in the “univariate” approach to repeated measures (unadjusted, Huynh-Feldt, Geisser-Greenhouse, Box). The techniques allow rapid calculation and an interactive, graphical approach to sample size choice. (v) Calculating power for a clinical trial of a treatment for increasing bone density illustrates the new methods. We particularly recommend using quantile power with a new Satterthwaite-style approximation. PMID:12898543
[Clinical research XIX. From clinical judgment to analysis of covariance].
Pérez-Rodríguez, Marcela; Palacios-Cruz, Lino; Moreno, Jorge; Rivas-Ruiz, Rodolfo; Talavera, Juan O
2014-01-01
The analysis of covariance (ANCOVA) is based on general linear models. This technique involves a regression model, often multiple, in which the outcome is a continuous variable, the independent variables are qualitative or are introduced into the model as dummy or dichotomous variables, and the factors for which adjustment is required (covariates) can be at any measurement level (i.e. nominal, ordinal or continuous). The maneuvers can be entered into the model as 1) fixed effects, or 2) random effects. The difference between fixed effects and random effects depends on the type of information we want from the analysis of the effects. ANCOVA separates the effect of the independent variables from that of the covariates, i.e., it corrects the dependent variable by eliminating the influence of the covariates, given that these variables change in conjunction with the maneuvers or treatments and thereby affect the outcome variable. ANCOVA should be done only if three assumptions are met: 1) the relationship between the covariate and the outcome is linear, 2) there is homogeneity of slopes, and 3) the covariate and the independent variable are independent of each other.
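The regression formulation described here can be sketched directly: a dummy-coded treatment plus a continuous covariate in one linear model, so the treatment coefficient is the covariate-adjusted effect. All sample sizes and effect values below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy ANCOVA as a linear model: outcome regressed on a treatment dummy
# plus a continuous baseline covariate. The coefficient on the dummy is
# the treatment effect adjusted for the covariate (all numbers invented).
n = 100
group = np.repeat([0, 1], n // 2)              # dummy-coded maneuver
covar = rng.normal(50, 10, n)                  # baseline covariate
y = 5.0 + 2.0 * group + 0.3 * covar + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), group, covar])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 1))   # adjusted treatment effect near 2.0
```

The homogeneity-of-slopes assumption from the abstract corresponds to omitting (and, in practice, first testing) a `group * covar` interaction term in this model.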
Large Covariance Estimation by Thresholding Principal Orthogonal Complements.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2013-09-01
This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.
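The POET recipe described above can be sketched in a few steps: take out the top-K principal components of the sample covariance, then soft-threshold the residual ("principal orthogonal complement") covariance. The data, K, and threshold below are invented, and the real method chooses the threshold adaptively rather than ad hoc:

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal POET-style sketch on synthetic approximate-factor-model data.
p, n, K = 20, 200, 2
B = rng.normal(size=(p, K))                       # factor loadings
f = rng.normal(size=(K, n))                       # latent factors
X = B @ f + rng.normal(0, 0.5, size=(p, n))       # observations
S = np.cov(X)                                     # sample covariance

w, V = np.linalg.eigh(S)
order = np.argsort(w)[::-1]
w, V = w[order], V[:, order]
S_factor = (V[:, :K] * w[:K]) @ V[:, :K].T        # rank-K principal part
R = S - S_factor                                  # principal orthogonal complement

tau = 0.1                                         # ad hoc soft threshold
R_thr = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
np.fill_diagonal(R_thr, np.diag(R))               # keep the diagonal untouched
S_poet = S_factor + R_thr
print(S_poet.shape)
```

Setting `tau = 0` recovers the sample covariance and taking `K = 0` with hard/soft thresholding recovers the thresholding estimators, illustrating the abstract's point that these are special cases of POET.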
Real-time probabilistic covariance tracking with efficient model update.
Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li
2012-05-01
The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in a real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as the temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
A three domain covariance framework for EEG/MEG data.
Roś, Beata P; Bijma, Fetsje; de Gunst, Mathisca C M; de Munck, Jan C
2015-10-01
In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. Our covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as source of noise and realistic noise covariance estimates are needed, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, like in combined EEG-fMRI experiments in which the correlation between EEG and fMRI signals is investigated. We use a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. We apply our method to real EEG and MEG data sets.
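The Kronecker structure at the heart of this model is easy to write down. The component matrices below (sensor, time, and trial dimensions, and the AR(1)-style temporal correlation) are invented stand-ins, not the paper's estimates:

```python
import numpy as np

# Covariance factorized as a Kronecker product of trial, time, and space
# components, as in the three-domain model (sizes/matrices invented).
def ar1_cov(n, rho):
    """AR(1)-style correlation matrix, a common stand-in for serial structure."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

space = ar1_cov(4, 0.3)        # 4 sensors
time = ar1_cov(5, 0.7)         # 5 time samples
trials = np.eye(3)             # 3 (here independent) trials

full = np.kron(trials, np.kron(time, space))   # 60 x 60 covariance
print(full.shape)
```

The payoff of the factorization is parsimony: the full 60 x 60 matrix has 1830 free entries, while the three factors together have far fewer parameters to estimate, which is what makes maximum likelihood estimation feasible for realistic EEG/MEG dimensions.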
Alterations in Anatomical Covariance in the Prematurely Born.
Scheinost, Dustin; Kwon, Soo Hyun; Lacadie, Cheryl; Vohr, Betty R; Schneider, Karen C; Papademetris, Xenophon; Constable, R Todd; Ment, Laura R
2015-10-22
Preterm (PT) birth results in long-term alterations in functional and structural connectivity, but the related changes in anatomical covariance are just beginning to be explored. To test the hypothesis that PT birth alters patterns of anatomical covariance, we investigated brain volumes of 25 PTs and 22 terms at young adulthood using magnetic resonance imaging. Using regional volumetrics, seed-based analyses, and whole brain graphs, we show that PT birth is associated with reduced volume in bilateral temporal and inferior frontal lobes, left caudate, left fusiform, and posterior cingulate for prematurely born subjects at young adulthood. Seed-based analyses demonstrate altered patterns of anatomical covariance for PTs compared with terms. PTs exhibit reduced covariance with R Brodmann area (BA) 47, Broca's area, and L BA 21, Wernicke's area, and white matter volume in the left prefrontal lobe, but increased covariance with R BA 47 and left cerebellum. Graph theory analyses demonstrate that measures of network complexity are significantly less robust in PTs compared with term controls. Volumes in regions showing group differences are significantly correlated with phonological awareness, the fundamental basis for reading acquisition, for the PTs. These data suggest both long-lasting and clinically significant alterations in the covariance in the PTs at young adulthood.
The performance analysis based on SAR sample covariance matrix.
Erten, Esra
2012-01-01
Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. In practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in different applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of multi-channel SAR images is presented in a simplified form for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well.
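The statistic the abstract studies is straightforward to simulate. The channel count, look count, and scene covariance below are invented; the sketch only shows how the sample covariance and its maximum eigenvalue are formed from complex circular Gaussian looks:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample covariance of a few complex circular Gaussian "looks" and its
# maximum eigenvalue, the statistic used e.g. for change detection
# (channel/look counts and scene covariance invented).
p, n = 3, 25                                   # channels, looks
A = rng.normal(size=(p, p))
true_cov = A @ A.T + p * np.eye(p)             # some SPD scene covariance
Lchol = np.linalg.cholesky(true_cov)

# Complex circular Gaussian samples with covariance true_cov.
z = Lchol @ (rng.normal(size=(p, n)) + 1j * rng.normal(size=(p, n))) / np.sqrt(2)
sample_cov = z @ z.conj().T / n                # follows a complex Wishart law

lam_max = float(np.max(np.linalg.eigvalsh(sample_cov)))
print(lam_max > 0)
```

Repeating this over many draws of `z` traces out the distribution of `lam_max`, which is what the paper's analytical expressions characterize.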
Gaussian covariance matrices for anisotropic galaxy clustering measurements
NASA Astrophysics Data System (ADS)
Grieb, Jan Niklas; Sánchez, Ariel G.; Salazar-Albornoz, Salvador; Dalla Vecchia, Claudio
2016-04-01
Measurements of the redshift-space galaxy clustering have been a prolific source of cosmological information in recent years. Accurate covariance estimates are an essential step for the validation of galaxy clustering models of the redshift-space two-point statistics. Usually, only a limited set of accurate N-body simulations is available. Thus, assessing the data covariance is not possible or only leads to a noisy estimate. Further, relying on simulated realizations of the survey data means that tests of the cosmology dependence of the covariance are expensive. With these points in mind, this work presents a simple theoretical model for the linear covariance of anisotropic galaxy clustering observations with synthetic catalogues. Considering the Legendre moments (`multipoles') of the two-point statistics and projections into wide bins of the line-of-sight parameter (`clustering wedges'), we describe the modelling of the covariance for these anisotropic clustering measurements for galaxy samples with a trivial geometry in the case of a Gaussian approximation of the clustering likelihood. As main result of this paper, we give the explicit formulae for Fourier and configuration space covariance matrices. To validate our model, we create synthetic halo occupation distribution galaxy catalogues by populating the haloes of an ensemble of large-volume N-body simulations. Using linear and non-linear input power spectra, we find very good agreement between the model predictions and the measurements on the synthetic catalogues in the quasi-linear regime.
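For a Gaussian field with a trivial survey geometry, the leading-order band-power covariance reduces to a diagonal mode-counting expression. The survey volume, number density, and power spectrum below are invented round numbers, used only to illustrate the structure of such a model:

```python
import numpy as np

# Back-of-envelope Gaussian covariance for power-spectrum band powers:
# var[P(k)] ~ 2 (P(k) + 1/nbar)^2 / N_modes(k), diagonal across k-bins
# (all survey numbers invented).
V = 1.0e9                                   # survey volume in (Mpc/h)^3
nbar = 3.0e-4                               # tracer number density in (h/Mpc)^3
k = np.linspace(0.05, 0.25, 5)              # band centers in h/Mpc
dk = k[1] - k[0]
P = 2.0e4 / (1.0 + (k / 0.1) ** 2)          # toy power spectrum

N_modes = V * 4.0 * np.pi * k**2 * dk / (2.0 * np.pi) ** 3
cov = np.diag(2.0 * (P + 1.0 / nbar) ** 2 / N_modes)
print(cov.shape)
```

The paper's model generalizes this kind of expression to the Legendre multipoles and clustering wedges of anisotropic clustering, where the matrix acquires non-trivial couplings between the statistics.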
Modelling the random effects covariance matrix in longitudinal data.
Daniels, Michael J; Zhao, Yan D
2003-05-30
A common class of models for longitudinal data is random effects (mixed) models. In these models, the random effects covariance matrix is typically assumed constant across subjects. However, in many situations this matrix may differ by measured covariates. In this paper, we propose an approach to model the random effects covariance matrix by using a special Cholesky decomposition of the matrix. In particular, we allow the parameters that result from this decomposition to depend on subject-specific covariates and also explore ways to parsimoniously model these parameters. An advantage of this parameterization is that there is no concern about the positive definiteness of the resulting estimator of the covariance matrix. In addition, the parameters resulting from this decomposition have a sensible interpretation. We propose fully Bayesian modelling for which a simple Gibbs sampler can be implemented to sample from the posterior distribution of the parameters. We illustrate these models on data from depression studies and examine the impact of heterogeneity in the covariance matrix on estimation of both fixed and random effects.
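The key property claimed here, positive definiteness by construction, is easy to illustrate with the modified Cholesky parameterization commonly used for this purpose (a hedged sketch of the general idea, not the paper's exact Bayesian model; the covariate regressions driving the parameters are omitted):

```python
import numpy as np

def covariance_from_cholesky_params(log_d, phi):
    """Rebuild a covariance matrix from unconstrained modified-Cholesky
    parameters: log innovation variances (log_d) and below-diagonal
    generalized autoregressive parameters (phi). The result
    Sigma = L D L^T is positive definite for ANY real-valued inputs,
    which is the advantage cited in the abstract."""
    q = len(log_d)
    L = np.eye(q)
    L[np.tril_indices(q, k=-1)] = phi  # unit lower-triangular factor
    D = np.diag(np.exp(log_d))         # strictly positive diagonal
    return L @ D @ L.T

# In the paper, subject-specific covariates would drive log_d and phi
# through regressions; here we simply pick arbitrary unconstrained values.
log_d = np.array([0.0, -0.5, 0.3])
phi = np.array([0.4, -0.2, 0.7])  # the 3 below-diagonal entries for q = 3
Sigma = covariance_from_cholesky_params(log_d, phi)
```

Because the parameters are unconstrained, they can be modeled with ordinary linear predictors and sampled freely in a Gibbs scheme without ever leaving the positive definite cone.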
Embedding objects during 3D printing to add new functionalities.
Yuen, Po Ki
2016-07-01
A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing allows objects to be integrated and embedded during printing, and FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films, with and without an embedded porous membrane, and optical devices with embedded Corning® Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Like typical 3D printed devices, FDM-based 3D printed devices are at best translucent unless post-polishing is performed, yet optical transparency is highly desirable in any fluidic device; integrated glass cover slips or polystyrene films provide a perfectly transparent optical window for observation and visualization. In addition, they also provide a compatible, flat, smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis, without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning® Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication
Bond, Stephen D.
2014-01-01
The availability of efficient algorithms for long-range pairwise interactions is central to the success of numerous applications, ranging in scale from atomic-level modeling of materials to astrophysics. This report focuses on the implementation and analysis of the multilevel summation method for approximating long-range pairwise interactions. The computational cost of the multilevel summation method is proportional to the number of particles, N, which is an improvement over FFT-based methods whose cost is asymptotically proportional to N log N. In addition to approximating electrostatic forces, the multilevel summation method can be used to efficiently approximate convolutions with long-range kernels. As an application, we apply the multilevel summation method to a discretized integral equation formulation of the regularized generalized Poisson equation. Numerical results are presented using an implementation of the multilevel summation method in the LAMMPS software package. Preliminary results show that the computational cost of the method scales as expected, but there is still a need for further optimization.
Ghita, Ovidiu; Dietlmeier, Julia; Whelan, Paul F
2014-10-01
In this paper, we investigate the segmentation of closed contours in subcellular data using a framework that primarily combines the pairwise affinity grouping principles with a graph partitioning contour searching approach. One salient problem that precluded the application of these methods to large scale segmentation problems is the onerous computational complexity required to generate comprehensive representations that include all pairwise relationships between all pixels in the input data. To compensate for this problem, a practical solution is to reduce the complexity of the input data by applying an over-segmentation technique prior to the application of the computationally demanding strands of the segmentation process. This approach opens the opportunity to build specific shape and intensity models that can be successfully employed to extract the salient structures in the input image which are further processed to identify the cycles in an undirected graph. The proposed framework has been applied to the segmentation of mitochondria membranes in electron microscopy data which are characterized by low contrast and low signal-to-noise ratio. The algorithm has been quantitatively evaluated using two datasets where the segmentation results have been compared with the corresponding manual annotations. The performance of the proposed algorithm has been measured using standard metrics, such as precision and recall, and the experimental results indicate a high level of segmentation accuracy.
Kauko, Otto; Laajala, Teemu Daniel; Jumppanen, Mikael; Hintsanen, Petteri; Suni, Veronika; Haapaniemi, Pekka; Corthals, Garry; Aittokallio, Tero; Westermarck, Jukka; Imanishi, Susumu Y.
2015-01-01
Hyperactivated RAS drives progression of many human malignancies. However, oncogenic activity of RAS is dependent on simultaneous inactivation of protein phosphatase 2A (PP2A) activity. Although PP2A is known to regulate some of the RAS effector pathways, it has not been systematically assessed how these proteins functionally interact. Here we have analyzed phosphoproteomes regulated by either RAS or PP2A, by phosphopeptide enrichment followed by mass-spectrometry-based label-free quantification. To allow data normalization in situations where depletion of RAS or of the PP2A inhibitor CIP2A causes a large uni-directional change in phosphopeptide abundance, we developed a novel normalization strategy, named pairwise normalization. This normalization is based on adjusting phosphopeptide abundances measured before and after the enrichment. The superior performance of the pairwise normalization was verified by various independent methods. Additionally, we demonstrate how the selected normalization method influences the downstream analyses and interpretation of pathway activities. Consequently, bioinformatics analysis of RAS- and CIP2A-regulated phosphoproteomes revealed a significant overlap in their functional pathways. This is most likely biologically meaningful, as we observed a synergistic survival effect between CIP2A and RAS expression as well as KRAS-activating mutations in the TCGA pan-cancer data set, and a synergistic relationship between CIP2A and KRAS depletion in colony growth assays. PMID:26278961
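The core idea of anchoring normalization on peptides quantified both before and after enrichment can be sketched as below. This is our hedged interpretation for illustration only; the actual pairwise normalization algorithm of Kauko et al. may differ in detail, and the function name and data are invented.

```python
import numpy as np

def pairwise_normalize(pre, post):
    """Illustrative sketch: scale post-enrichment intensities by the median
    pre/post ratio of peptides quantified in BOTH runs (NaN = not observed).
    Anchoring on paired measurements keeps a large uni-directional shift in
    phosphopeptide abundance from biasing a global median-centring step."""
    shared = ~np.isnan(pre) & ~np.isnan(post)
    factor = np.median(pre[shared] / post[shared])
    return post * factor

pre  = np.array([2.0, 4.0, np.nan, 8.0])  # pre-enrichment intensities (invented)
post = np.array([1.0, 2.0, 5.0,    4.0])  # post-enrichment intensities (invented)
normalized = pairwise_normalize(pre, post)
```

Here the three shared peptides all have ratio 2.0, so every post-enrichment intensity is doubled, including the phosphopeptide observed only after enrichment.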
An Empirical State Error Covariance Matrix for Batch State Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix that fully accounts for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next, it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
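The idea of letting actual residuals inform the state covariance can be sketched for a plain weighted least squares batch (a minimal illustration under our own simplifications: the theoretical covariance (A^T W A)^-1 is rescaled by the average weighted residual variance; this is not necessarily Frisbee's exact derivation):

```python
import numpy as np

def wls_with_empirical_covariance(A, W, y):
    """Batch weighted least squares with an empirical state error covariance.
    Instead of reporting the purely theoretical covariance inv(A^T W A),
    scale it by the average weighted residual variance so that ALL error
    sources present in the actual residuals enter the state uncertainty."""
    N = A.T @ W @ A                     # normal matrix
    x = np.linalg.solve(N, A.T @ W @ y) # state estimate
    r = y - A @ x                       # actual measurement residuals
    s2 = (r @ W @ r) / len(y)           # average weighted residual variance
    P_emp = s2 * np.linalg.inv(N)       # empirical state error covariance
    return x, P_emp

# Invented example: fit a line y = 1 + 2 t with unit measurement weights
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([np.ones_like(t), t])
y = 1.0 + 2.0 * t + 0.1 * rng.standard_normal(50)
x_hat, P_emp = wls_with_empirical_covariance(A, np.eye(50), y)
```

If unmodeled errors inflate the residuals, `P_emp` grows accordingly, whereas the theoretical covariance would not.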
Covariate-adjusted confidence interval for the intraclass correlation coefficient.
Shoukri, Mohamed M; Donner, Allan; El-Dali, Abdelmoneim
2013-09-01
A crucial step in designing a new study is to estimate the required sample size. For a design involving cluster sampling, the appropriate sample size depends on the so-called design effect, which is a function of the average cluster size and the intracluster correlation coefficient (ICC). It is well known that under the framework of hierarchical and generalized linear models, a reduction in residual error may be achieved by including risk factors as covariates. In this paper we show that the covariate design, indicating whether the covariates are measured at the cluster level or at the within-cluster subject level, affects the estimation of the ICC, and hence the design effect. Therefore, the distinction between these two types of covariates should be made at the design stage. We use the nested-bootstrap method to assess the accuracy of the estimated ICC for continuous and binary response variables under different covariate structures. The code for two SAS macros is made available by the authors to facilitate the construction of confidence intervals for the ICC. Moreover, using Monte Carlo simulations we evaluate the relative efficiency of the estimators and assess the accuracy of the coverage probabilities of a 95% confidence interval on the population ICC. The methodology is illustrated using a published data set of blood pressure measurements taken on family members.
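The design effect mentioned here has the standard closed form 1 + (m - 1) * ICC for average cluster size m, which inflates the sample size required under simple random sampling; the numbers below are illustrative:

```python
def design_effect(icc, mean_cluster_size):
    """Variance inflation of a cluster-sampling design relative to simple
    random sampling (standard formula: 1 + (m - 1) * ICC)."""
    return 1.0 + (mean_cluster_size - 1.0) * icc

n_srs = 400                        # sample size needed under simple random sampling
deff = design_effect(0.05, 20)     # ICC = 0.05, average cluster size 20 -> 1.95
n_cluster = n_srs * deff           # inflated sample size: 780
```

This is why a covariate-induced change in the estimated ICC propagates directly into the required sample size.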
Covariant Lyapunov vectors of chaotic Rayleigh-Bénard convection
NASA Astrophysics Data System (ADS)
Xu, M.; Paul, M. R.
2016-06-01
We explore numerically the high-dimensional spatiotemporal chaos of Rayleigh-Bénard convection using covariant Lyapunov vectors. We integrate the three-dimensional and time-dependent Boussinesq equations for a convection layer in a shallow square box geometry with an aspect ratio of 16 for very long times and for a range of Rayleigh numbers. We simultaneously integrate many copies of the tangent space equations in order to compute the covariant Lyapunov vectors. The dynamics explored has fractal dimensions of 20 ≲ D_λ ≲ 50, and we compute on the order of 150 covariant Lyapunov vectors. We use the covariant Lyapunov vectors to quantify the degree of hyperbolicity of the dynamics and the degree of Oseledets splitting and to explore the temporal and spatial dynamics of the Lyapunov vectors. Our results indicate that the chaotic dynamics of Rayleigh-Bénard convection is nonhyperbolic for all of the Rayleigh numbers we have explored. Our results yield that the entire spectrum of covariant Lyapunov vectors that we have computed is tangled, as indicated by near tangencies with neighboring vectors. A closer look at the spatiotemporal features of the Lyapunov vectors suggests contributions from structures at two different length scales with differing amounts of localization.
WAIS-IV subtest covariance structure: conceptual and statistical considerations.
Ward, L Charles; Bergman, Maria A; Hebert, Katina R
2012-06-01
D. Wechsler (2008b) reported confirmatory factor analyses (CFAs) with standardization data (ages 16-69 years) for 10 core and 5 supplemental subtests from the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). Analyses of the 15 subtests supported 4 hypothesized oblique factors (Verbal Comprehension, Working Memory, Perceptual Reasoning, and Processing Speed) but also revealed unexplained covariance between Block Design and Visual Puzzles (Perceptual Reasoning subtests). That covariance was not included in the final models. Instead, a path was added from Working Memory to Figure Weights (Perceptual Reasoning subtest) to improve fit and achieve a desired factor pattern. The present research with the same data (N = 1,800) showed that the path from Working Memory to Figure Weights increases the association between Working Memory and Matrix Reasoning. Specifying both paths improves model fit and largely eliminates unexplained covariance between Block Design and Visual Puzzles but with the undesirable consequence that Figure Weights and Matrix Reasoning are equally determined by Perceptual Reasoning and Working Memory. An alternative 4-factor model was proposed that explained theory-implied covariance between Block Design and Visual Puzzles and between Arithmetic and Figure Weights while maintaining compatibility with WAIS-IV Index structure. The proposed model compared favorably with a 5-factor model based on Cattell-Horn-Carroll theory. The present findings emphasize that covariance model comparisons should involve considerations of conceptual coherence and theoretical adherence in addition to statistical fit.
A new look at Lorentz-covariant loop quantum gravity
NASA Astrophysics Data System (ADS)
Geiller, Marc; Lachièze-Rey, Marc; Noui, Karim
2011-08-01
In this work, we study the classical and quantum properties of the unique commutative Lorentz-covariant connection for loop quantum gravity. This connection has been found after solving the second-class constraints inherited from the canonical analysis of the Holst action without the time gauge. We show that it has the property of lying in the conjugacy class of a pure su(2) connection, a result which enables one to construct the kinematical Hilbert space of the Lorentz-covariant theory in terms of the usual SU(2) spin-network states. Furthermore, we show that there is a unique Lorentz-covariant electric field, up to trivial and natural equivalence relations. The Lorentz-covariant electric field transforms under the adjoint action of the Lorentz group, and the associated Casimir operators are shown to be proportional to the area density. This gives a very interesting algebraic interpretation of the area. Finally, we show that the action of the surface operator on the Lorentz-covariant holonomies reproduces exactly the usual discrete SU(2) spectrum of time-gauge loop quantum gravity. In other words, the use of the time gauge does not introduce anomalies in the quantum theory.
Newton law in covariant unimodular F(R) gravity
NASA Astrophysics Data System (ADS)
Nojiri, S.; Odintsov, S. D.; Oikonomou, V. K.
2016-09-01
We investigate the Newton law in unimodular F(R) gravity. In standard F(R) gravity, the extra scalar mode often gives rise to large corrections to the Newton law, and such models are excluded by experiments and/or observations. In unimodular F(R) gravity, however, the extra scalar mode ceases to be dynamical due to the unimodular constraint, and there is no correction to the Newton law. Even in unimodular Einstein gravity, the Newton law is reproduced, but the mechanism is slightly different from that in unimodular F(R) gravity. We also investigate unimodular F(R) gravity in the covariant formulation, in which we include a three-form field. We show that the three-form field does not have any unwanted properties, such as a ghost or a correction to the Newton law. In the covariant formulation, however, the above extra scalar mode becomes dynamical and could give a correction to the Newton law. We also show that there is no difference in the Friedmann-Robertson-Walker (FRW) dynamics between the non-covariant and covariant formulations.
ERIC Educational Resources Information Center
Forster, Fred
Statistical methods are described for diagnosing and treating three important problems in covariate tests of significance: curvilinearity, covariable effectiveness, and treatment-covariable interaction. Six major assumptions, prerequisites for covariate procedure, are discussed in detail: (1) normal distribution, (2) homogeneity of variances, (3)…
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 13 2012-07-01 2012-07-01 false How do I determine the add-on control... Emission Rate with Add-on Controls Option § 63.3966 How do I determine the add-on control device emission... the add-on control device emission destruction or removal efficiency as part of the performance...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 13 2013-07-01 2012-07-01 true How do I determine the add-on control... with Add-on Controls Option § 63.4566 How do I determine the add-on control device emission destruction... add-on control device emission destruction or removal efficiency as part of the performance...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 13 2013-07-01 2012-07-01 true How do I determine the add-on control... Emission Rate with Add-on Controls Option § 63.3966 How do I determine the add-on control device emission... the add-on control device emission destruction or removal efficiency as part of the performance...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 13 2014-07-01 2014-07-01 false How do I determine the add-on control... with Add-on Controls Option § 63.4566 How do I determine the add-on control device emission destruction... add-on control device emission destruction or removal efficiency as part of the performance...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 13 2012-07-01 2012-07-01 false How do I determine the add-on control... with Add-on Controls Option § 63.4566 How do I determine the add-on control device emission destruction... add-on control device emission destruction or removal efficiency as part of the performance...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 13 2014-07-01 2014-07-01 false How do I determine the add-on control... Emission Rate with Add-on Controls Option § 63.3966 How do I determine the add-on control device emission... the add-on control device emission destruction or removal efficiency as part of the performance...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 12 2011-07-01 2009-07-01 true How do I determine the add-on control... with Add-on Controls Option § 63.3966 How do I determine the add-on control device emission destruction... add-on control device emission destruction or removal efficiency as part of the performance...
FPGA-based Hyperspectral Covariance Coprocessor for Size, Weight, and Power Constrained Platforms
NASA Astrophysics Data System (ADS)
Kusinsky, David Alan
, respectively. The coprocessor requires 45% less energy during processing. This research shows that FPGA-based acceleration of HSI data covariance computations is promising from a size, weight, and power perspective. Significant unused FPGA resources in the coprocessor's FPGA can be used to add additional HSI data processing operations and direct HSI camera interfacing in the future.
NASA Astrophysics Data System (ADS)
Tarpine, Ryan; Lam, Fumei; Istrail, Sorin
We present results on two classes of problems. The first result addresses the long-standing open problem of finding unifying principles for Linkage Disequilibrium (LD) measures in population genetics (Lewontin 1964 [10], Hedrick 1987 [8], Devlin and Risch 1995 [5]). Two desirable properties have been proposed in the extensive literature on this topic, and the mutual consistency between these properties has remained at the heart of statistical and algorithmic difficulties with haplotype and genome-wide association study analysis. The first axiom is (1) the ability to extend LD measures to multiple loci as a conservative extension of pairwise LD. All widely used LD measures are pairwise measures. Despite significant attempts, it is not clear how to naturally extend these measures to multiple loci, leading to a "curse of the pairwise". The second axiom is (2) the interpretability of intermediate values. In this paper, we resolve this mutual consistency problem by introducing a new LD measure, directed informativeness (the directed graph-theoretic counterpart of the informativeness measure introduced by Halldorsson et al. [6]), and show that it satisfies both of the above axioms. We also show that the maximum informative subset of tagging SNPs based on directed informativeness can be computed exactly in polynomial time for realistic genome-wide data. Furthermore, we present polynomial-time algorithms for optimal genome-wide tagging SNP selection for a number of commonly used LD measures, under the bounded neighborhood assumption for linked pairs of SNPs. One problem in the area is the search for a quality measure for tagging SNP selection that unifies the LD-based methods such as LD-select (implemented in Tagger, de Bakker et al. 2005 [4], Carlson et al. 2004 [3]) and the information-theoretic ones such as informativeness. We show that the objective function of the LD-select algorithm is the Minimal Dominating Set (MDS) on r²-SNP graphs and show that we can
Measuring Narcissism within Add Health: The Development and Validation of a New Scale
ERIC Educational Resources Information Center
Davis, Mark S.; Brunell, Amy B.
2012-01-01
This study reports the development of a measure of narcissism within the National Longitudinal Study of Adolescent Health (Add Health) data set. In Study 1, items were selected from Wave III to form the Add Health Narcissism Scale (AHNS). These were factor analyzed, yielding a single factor comprised of five subscales. We correlated the AHNS and…
40 CFR Table 1b to Subpart Dddd of... - Add-on Control Systems Compliance Options
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 13 2013-07-01 2012-07-01 true Add-on Control Systems Compliance Options 1B Table 1B to Subpart DDDD of Part 63 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Products Pt. 63, Subpt. DDDD, Table 1B Table 1B to Subpart DDDD of Part 63—Add-on Control...
40 CFR Table 1b to Subpart Dddd of... - Add-on Control Systems Compliance Options
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 13 2014-07-01 2014-07-01 false Add-on Control Systems Compliance Options 1B Table 1B to Subpart DDDD of Part 63 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Products Pt. 63, Subpt. DDDD, Table 1B Table 1B to Subpart DDDD of Part 63—Add-on Control...
Cognitive Control and Attentional Selection in Adolescents with ADHD versus ADD
ERIC Educational Resources Information Center
Carr, Laurie; Henderson, John; Nigg, Joel T.
2010-01-01
An important research question is whether Attention Deficit Hyperactivity Disorder (ADHD) is related to early- or late-stage attentional control mechanisms and whether this differentiates a nonhyperactive subtype (ADD). This question was addressed in a sample of 145 ADD/ADHD and typically developing comparison adolescents (aged 13-17). Attentional…
75 FR 73075 - Notice of Motion To Add Exhibit to Petition for Declaratory Order and Complaint
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-29
... Energy Regulatory Commission Notice of Motion To Add Exhibit to Petition for Declaratory Order and... of Pella, Iowa (Complainant) filed a motion to add a document as Exhibit P-28 to its July 2, 2010... document is added to a subscribed docket(s). For assistance with any FERC Online service, please...
40 CFR Table 1b to Subpart Dddd of... - Add-on Control Systems Compliance Options
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 13 2012-07-01 2012-07-01 false Add-on Control Systems Compliance Options 1B Table 1B to Subpart DDDD of Part 63 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Products Pt. 63, Subpt. DDDD, Table 1B Table 1B to Subpart DDDD of Part 63—Add-on Control...
7 CFR 360.500 - Petitions to add a taxon to the noxious weed list.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 5 2013-01-01 2013-01-01 false Petitions to add a taxon to the noxious weed list. 360... to add a taxon to the noxious weed list. A person may petition the Administrator to have a taxon added to the noxious weeds lists in § 360.200. Details of the petitioning process for adding a taxon...
ERIC Educational Resources Information Center
Matazow, Gail S.; Hynd, George W.
Children with Attention Deficit Disorder (ADD) often exhibit problems in visual spatial perception, math achievement, and social skills, and it has been postulated that this constellation of behaviors may constitute Right Hemisphere Deficit Syndrome (RHDS). This study examined 21 children with attention deficit disorder with hyperactivity (ADD/H),…
40 CFR Table 1b to Subpart Dddd of... - Add-on Control Systems Compliance Options
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 12 2011-07-01 2009-07-01 true Add-on Control Systems Compliance.... 63, Subpt. DDDD, Table 1B Table 1B to Subpart DDDD of Part 63—Add-on Control Systems Compliance... compliance options by using an emissions control system . . . Fiberboard mat dryer heated zones (at...
ERIC Educational Resources Information Center
Flick, Grad L.
The first step in dealing with an attention deficit disorder (ADD/ADHD) child's difficult behavior is to understand its origins. This book presents behavior management techniques to help parents care for their ADD child while ensuring that the child continues to develop positive, healthy self-esteem. The guide shows how to: (1) ensure an accurate…
Prevalence of Aggression and Defiance in Children with ADD/ADHD Tendencies
ERIC Educational Resources Information Center
Hill, Janella
2011-01-01
Attention Deficit Disorder (ADD) and Attention Deficit Hyperactivity Disorder (ADHD) appear to have become more prevalent in the past few years. Many children who display ADD/ADHD tendencies also display behaviors which cause problems in a classroom setting. Considering the fact that these behaviors could be displayed by the student population as…
20 Ways To...Collaborate with Families of Children with ADD.
ERIC Educational Resources Information Center
Mathur, Smita; Smith, Robin M.
2003-01-01
Twenty ideas for collaborating with families of children with attention deficit disorder (ADD) include: (1) providing information about ADD to families; (2) planning meetings to accommodate family members; (3) addressing the language needs of families; (4) helping family members develop their advocacy skills; and (5) helping families network with…
It All ADDs Up: Help for Children with Attention Deficit Disorders.
ERIC Educational Resources Information Center
Anderson, Marilyn
2000-01-01
Suggestions for working with students with attention deficit disorders (ADD) include: seek medical evaluation; discuss treatment options with the physician and school; communicate with the child's teachers; provide feedback, praise, and consequences; help the child develop social skills; understand when ADD with hyperactivity is disabling; know…
Test Review: Brown Attention-Deficit Disorder Scales and Brown ADD Diagnostic Forms.
ERIC Educational Resources Information Center
Muniz, Linda
1996-01-01
This article on the Brown Attention-Deficit Disorder (ADD) Scale for Adolescents and the Brown ADD Scale for Adults describes the tests' recommended uses, administration, components, standardization, reliability, and validity. The self-report measures are designed for initial screening, as one part of a comprehensive diagnostic assessment, and for…
Evaluation of Tungsten Nuclear Reaction Data with Covariances
Trkov, A.; Capote, R.; Kodeli, I.; Leal, L.
2008-12-15
As a follow-up of the work presented at the ND-2007 conference in Nice, additional fast reactor benchmarks were analyzed. Adjustment to the cross sections in the keV region was necessary. Evaluated neutron cross section data files for 180,182,183,184,186W isotopes were produced. Covariances were generated for all isotopes except 180W. In the resonance range the retro-active method was used. Above the resolved resonance range the covariance prior was generated by the Monte Carlo technique from nuclear model calculations with the Empire-II code. Experimental data were taken into account through the GANDR system using the generalized least-squares technique. Introducing experimental data results in relatively small changes in the cross sections, but greatly constrains the uncertainties. The covariance files are currently undergoing testing.
Evaluation of Tungsten Nuclear Reaction Data with Covariances
Trkov, A.; Capote, R.; Kodeli, I.; Leal, Luiz C.
2008-12-01
As a follow-up of the work presented at the ND-2007 conference in Nice, additional fast reactor benchmarks were analyzed. Adjustment to the cross sections in the keV region was necessary. Evaluated neutron cross section data files for 180,182,183,184,186W isotopes were produced. Covariances were generated for all isotopes except 180W. In the resonance range the retro-active method was used. Above the resolved resonance range the covariance prior was generated by the Monte Carlo technique from nuclear model calculations with the Empire-II code. Experimental data were taken into account through the GANDR system using the generalized least-squares technique. Introducing experimental data results in relatively small changes in the cross sections, but greatly constrains the uncertainties. The covariance files are currently undergoing testing.
Adaptive Covariance Inflation in a Multi-Resolution Assimilation Scheme
NASA Astrophysics Data System (ADS)
Hickmann, K. S.; Godinez, H. C.
2015-12-01
When forecasts are performed using modern data assimilation methods, observation and model error can be scale-dependent. During data assimilation the blending of error across scales can result in model divergence, since large errors at one scale can be propagated across scales during the analysis step. Wavelet-based multi-resolution analysis can be used to separate scales in model and observations during the application of an ensemble Kalman filter. However, this separation is done at the cost of implementing an ensemble Kalman filter at each scale. This presents problems when tuning the covariance inflation parameter at each scale. We present a method to adaptively tune a scale-dependent covariance inflation vector based on balancing the covariance of the innovation and the covariance of observations of the ensemble. Our methods are demonstrated on a one-dimensional Kuramoto-Sivashinsky (K-S) model known to demonstrate non-linear interactions between scales.
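A common innovation-consistency estimate of a multiplicative inflation factor, which balances the observed innovation magnitude against the ensemble-predicted one, can be sketched as below. This is the generic textbook form, not necessarily the exact scheme of Hickmann and Godinez, and the example numbers are invented:

```python
import numpy as np

def innovation_based_inflation(d, HPHt, R):
    """Estimate a multiplicative covariance inflation factor lambda from the
    innovation consistency relation E[d d^T] ~ lambda * H P H^T + R, i.e.
    lambda = (d.d - tr(R)) / tr(H P H^T), clipped at 1 so the forecast
    covariance is never deflated."""
    lam = (d @ d - np.trace(R)) / np.trace(HPHt)
    return max(lam, 1.0)

# Invented 2-observation example: innovation larger than the ensemble predicts
d = np.array([3.0, 0.0])       # innovation (obs minus forecast in obs space)
HPHt = 2.0 * np.eye(2)         # ensemble forecast covariance in obs space
R = np.eye(2)                  # observation error covariance
lam = innovation_based_inflation(d, HPHt, R)   # (9 - 2) / 4 = 1.75
```

In a multi-resolution scheme this estimate would be evaluated per scale, yielding the scale-dependent inflation vector the abstract describes.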
Realistic Covariance Prediction for the Earth Science Constellation
NASA Technical Reports Server (NTRS)
Duncan, Matthew; Long, Anne
2006-01-01
Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
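The Monte Carlo collision probability computation described above can be sketched as follows. Relative positions at closest approach are sampled from a Gaussian defined by the combined covariance of the two objects, and the hit fraction inside the combined hard-body radius estimates the collision probability. The miss vector, covariance, and radius below are hypothetical, not ESC data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical relative state in the 2-D encounter plane: mean miss vector (km)
# and combined positional covariance (sum of the two objects' covariances).
mean_miss = np.array([0.5, 0.2])
combined_cov = np.array([[0.04, 0.01],
                         [0.01, 0.09]])
hard_body_radius = 0.05  # combined object radius, km (illustrative)

# Monte Carlo estimate: sample relative positions from the Gaussian described
# by the covariance and count how many fall inside the hard-body sphere.
samples = rng.multivariate_normal(mean_miss, combined_cov, size=200_000)
hits = np.linalg.norm(samples, axis=1) < hard_body_radius
p_collision = hits.mean()
```

The estimate is only as good as `combined_cov`, which is the paper's point: an unrealistically small covariance concentrates the samples and can badly misstate the probability.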
A Lorentz-Covariant Connection for Canonical Gravity
NASA Astrophysics Data System (ADS)
Geiller, Marc; Lachièze-Rey, Marc; Noui, Karim; Sardelli, Francesco
2011-08-01
We construct a Lorentz-covariant connection in the context of first order canonical gravity with non-vanishing Barbero-Immirzi parameter. To do so, we start with the phase space formulation derived from the canonical analysis of the Holst action in which the second class constraints have been solved explicitly. This allows us to avoid the use of Dirac brackets. In this context, we show that there is a "unique" Lorentz-covariant connection which is commutative in the sense of the Poisson bracket, and which furthermore agrees with the connection found by Alexandrov using the Dirac bracket. This result opens a new way toward the understanding of Lorentz-covariant loop quantum gravity.
Data Covariances from R-Matrix Analyses of Light Nuclei
Hale, G.M.; Paris, M.W.
2015-01-15
After first reviewing the parametric description of light-element reactions in multichannel systems using R-matrix theory and features of the general LANL R-matrix analysis code EDA, we describe how its chi-square minimization procedure gives parameter covariances. This information is used, together with analytically calculated sensitivity derivatives, to obtain cross section covariances for all reactions included in the analysis by first-order error propagation. Examples are given of the covariances obtained for systems with few resonances (5He) and with many resonances (13C). We discuss the prevalent problem of this method leading to cross section uncertainty estimates that are unreasonably small for large data sets. The answer to this problem appears to be using parameter confidence intervals in place of standard errors.
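The first-order error propagation mentioned in the abstract is the standard "sandwich rule": given a parameter covariance matrix C_p and a sensitivity matrix S with entries S[i, j] = d(sigma_i)/d(p_j), the cross section covariance is C_sigma = S C_p S^T. The matrices below are illustrative, not output of the EDA analysis.

```python
import numpy as np

# First-order error propagation (sandwich rule): C_sigma = S @ C_p @ S.T
C_p = np.array([[0.010, 0.002],
                [0.002, 0.020]])        # parameter covariance (illustrative)
S = np.array([[1.5, -0.3],
              [0.8,  0.6],
              [0.1,  1.2]])             # sensitivities of 3 cross sections
                                        # to the 2 parameters
C_sigma = S @ C_p @ S.T                 # propagated cross section covariance
uncertainties = np.sqrt(np.diag(C_sigma))  # 1-sigma cross section uncertainties
```

The off-diagonal entries of `C_sigma` are the cross-reaction correlations that come "for free" once every cross section shares the same underlying fitted parameters.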
Fu, C.Y.; Hetrick, D.M.
1982-01-01
Recent ratio data, with carefully evaluated covariances, were combined with eleven of the ENDF/B-V dosimetry cross sections using the generalized least-squares method. The purpose was to improve these evaluated cross sections and covariances, as well as to generate values for the cross-reaction covariances. The results represent improved cross sections as well as realistic and usable covariances. The latter are necessary for meaningful integral-differential comparisons and for spectrum unfolding.
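A generalized least-squares combination of a prior evaluation with new measurements has a compact closed form, sketched below with made-up two-point numbers (not the actual dosimetry data): the prior values x with covariance C are updated by measurements y with covariance V through a design matrix A.

```python
import numpy as np

def gls_update(x, C, y, V, A):
    """Generalized least-squares update of a prior evaluation.

    x, C : prior values and their covariance
    y, V : new measurements and their covariance
    A    : design matrix mapping prior values to measured quantities
    Returns updated values and covariance (standard GLS form).
    """
    K = C @ A.T @ np.linalg.inv(A @ C @ A.T + V)   # gain matrix
    x_new = x + K @ (y - A @ x)
    C_new = C - K @ A @ C
    return x_new, C_new

# Illustrative example: two correlated cross sections, updated by a single
# direct measurement of the first one.
x = np.array([1.00, 2.00])
C = np.array([[0.04, 0.01],
              [0.01, 0.09]])
A = np.array([[1.0, 0.0]])
y = np.array([1.10])
V = np.array([[0.01]])
x_new, C_new = gls_update(x, C, y, V, A)
```

Because the prior covariance is correlated, measuring only the first cross section also pulls the second one and reduces both uncertainties, which is how cross-reaction covariances arise in the combined result.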
Walsh, Stephen J.; Tardiff, Mark F.
2007-10-01
Removing background from hyperspectral scenes is a common step in the process of searching for materials of interest. Some approaches to background subtraction use spectral library data and require invertible covariance matrices for each member of the library. This is challenging because, while the covariance matrix can always be calculated, standard methods for estimating its inverse require that the data set for each library member have many more spectral measurements than spectral channels, which is rarely the case. An alternative approach is shrinkage estimation. This method is investigated as a way of providing an invertible covariance matrix estimate when the number of spectral measurements is less than the number of spectral channels. The approach is an analytic method for choosing a target matrix and a shrinkage parameter that together modify the existing covariance matrix of the data to make it invertible. The theory is discussed and used to develop several estimators. The resulting estimates are computed and inspected on a set of hyperspectral data. This technique shows promise for arriving at an invertible covariance estimate for small hyperspectral data sets.
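The shrinkage idea can be sketched in a few lines: blend the singular sample covariance S with a well-conditioned target T as (1 - lambda) * S + lambda * T. The diagonal target and fixed lambda below are illustrative choices, not the analytic selections derived in the paper (Ledoit-Wolf-style formulas are one known analytic option).

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_channels = 20, 50          # fewer measurements than channels
X = rng.normal(size=(n_samples, n_channels))

S = np.cov(X, rowvar=False)             # singular: rank <= n_samples - 1
target = np.diag(np.diag(S))            # a common shrinkage target
lam = 0.2                               # shrinkage intensity (illustrative)
S_shrunk = (1 - lam) * S + lam * target

rank_S = np.linalg.matrix_rank(S)       # deficient, so S is not invertible
S_inv = np.linalg.inv(S_shrunk)         # shrunk estimate is positive definite
```

Because the target has strictly positive diagonal entries, any lambda > 0 makes the blend positive definite, which is exactly the invertibility the background-subtraction step needs.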
Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem.
Katsevich, E; Katsevich, A; Singer, A
2015-01-22
In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise.
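The claim that the spectrum of the covariance matrix reveals the number of heterogeneity classes can be illustrated with a toy example (this is not the projection-based estimator of the paper, which must first invert the projection covariance transform): samples drawn from K distinct "conformations" plus noise yield a covariance with exactly K - 1 eigenvalues far above the noise floor.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy mixture: K = 3 distinct class means in 100 dimensions, plus noise.
K, dim, n = 3, 100, 5000
classes = rng.normal(scale=5.0, size=(K, dim))      # class means
labels = rng.integers(K, size=n)
X = classes[labels] + rng.normal(scale=0.5, size=(n, dim))

# Eigenvalues of the sample covariance, in descending order.
eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]

# Crude thresholding against the noise floor counts the dominant directions;
# for K classes the between-class covariance has rank K - 1.
n_dominant = int(np.sum(eigvals > 10 * eigvals[K]))
```

In the cryo-EM setting the same counting is done on the estimated 3D covariance, where observing only projections is what makes the estimation step hard.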
Neutron Cross Section Covariances for Structural Materials and Fission Products
NASA Astrophysics Data System (ADS)
Hoblit, S.; Cho, Y.-S.; Herman, M.; Mattoon, C. M.; Mughabghab, S. F.; Obložinský, P.; Pigni, M. T.; Sonzogni, A. A.
2011-12-01
We describe neutron cross section covariances for 78 structural materials and fission products produced for the new US evaluated nuclear reaction library ENDF/B-VII.1. Neutron incident energies cover the full range from 10 eV to 20 MeV, and covariances are primarily provided for capture, elastic and inelastic scattering, as well as (n,2n). The list of materials follows priorities defined by the Advanced Fuel Cycle Initiative, the major application being data adjustment for advanced fast reactor systems. Thus, in addition to 28 structural materials and 49 fission products, the list also includes 23Na, an important fast reactor coolant. Given the extensive number of materials, we adopted a variety of methodologies depending on the priority of a specific material. In the resolved resonance region we primarily used resonance parameter uncertainties given in the Atlas of Neutron Resonances and either applied the kernel approximation to propagate these uncertainties into cross section uncertainties or resorted to simplified estimates based on integral quantities. For several priority materials we adopted MF32 covariances produced by SAMMY at ORNL, which we modified by adding MF33 covariances to account for systematic uncertainties. In the fast neutron region we resorted to three methods. The most sophisticated was the EMPIRE-KALMAN method, which combines experimental data from the EXFOR library with nuclear reaction modeling and least-squares fitting. The two other methods used simplified estimates, based either on the propagation of nuclear reaction model parameter uncertainties or on a dispersion analysis of central cross section values in recent evaluated data files. All covariances were subject to quality assurance procedures adopted recently by CSEWG. In addition, tools were developed to allow inspection of processed covariances and computed integral quantities, and to compare these values to data from the Atlas and the astrophysics database KADoNiS.
Koczkodaj, Waldemar W; Kułakowski, Konrad; Ligęza, Antoni
2014-01-01
Comparison, rating, and ranking of alternative solutions, in the case of multicriteria evaluations, have been an eternal focus of operations research and optimization theory. Numerous practical approaches to the multicriteria ranking problem exist. A recent focus of interest in this domain was the parametric evaluation of research entities in Poland, where the principal methodology was based on pairwise comparisons. For each single comparison, four criteria were used. One controversial point of the assumed approach was that the weights of these criteria were arbitrary. The main aim of this study is to put forward a theoretically justified way of extracting weights from the opinions of domain experts. The whole procedure is grounded in a survey and its experimental results. The two resulting sets of weights and the computed inconsistency indicator are compared and discussed.
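One standard way to extract criterion weights from a reciprocal pairwise comparison matrix is the principal-eigenvector method, with the classical consistency index CI = (lambda_max - n) / (n - 1) as an inconsistency measure; the study may use a different extraction scheme, and the matrix below is illustrative rather than survey data.

```python
import numpy as np

# Reciprocal pairwise comparison matrix for 4 criteria: M[i, j] expresses
# how much more important criterion i is than criterion j (illustrative).
M = np.array([[1.0, 2.0, 4.0, 3.0],
              [1/2, 1.0, 2.0, 2.0],
              [1/4, 1/2, 1.0, 1.0],
              [1/3, 1/2, 1.0, 1.0]])

# Weights: the normalized principal eigenvector of M.
eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency index: zero iff the matrix is perfectly consistent
# (M[i, j] * M[j, k] == M[i, k] for all triples).
n = M.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
```

Here `M[0, 3] = 3` conflicts with `M[0, 1] * M[1, 3] = 4`, so CI comes out strictly positive, which is the kind of inconsistency indicator the abstract refers to.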
NASA Astrophysics Data System (ADS)
Bianchi, Davide; Percival, Will J.; Bel, Julien
2016-12-01
We develop a model for the redshift-space correlation function, valid for both dark matter particles and haloes on scales >5 h-1 Mpc. In its simplest formulation, the model requires the knowledge of the first three moments of the line-of-sight pairwise velocity distribution plus two well-defined dimensionless parameters. The model is obtained by extending the Gaussian-Gaussianity prescription for the velocity distribution, developed in a previous paper, to a more general concept allowing for local skewness, which is required to match simulations. We compare the model with the well-known Gaussian streaming model and the more recent Edgeworth streaming model. Using N-body simulations as a reference, we show that our model gives a precise description of the redshift-space clustering over a wider range of scales. We do not discuss the theoretical prescription for the evaluation of the velocity moments, leaving this topic to further investigation.
Plant lock and ant key: pairwise coevolution of an exclusion filter in an ant-plant mutualism.
Brouat, C.; Garcia, N.; Andary, C.; McKey, D.
2001-01-01
Although observations suggest pairwise coevolution in specific ant-plant symbioses, coevolutionary processes have rarely been demonstrated. We report what is, to the authors' knowledge, the strongest evidence yet for reciprocal adaptation of morphological characters in a species-specific ant-plant mutualism. The plant character is the prostoma, which is a small unlignified organ at the apex of the domatia in which symbiotic ants excavate an entrance hole. Each myrmecophyte in the genus Leonardoxa has evolved a prostoma with a different shape. By performing precise measurements on the prostomata of three related myrmecophytes, on their specific associated ants and on the entrance holes excavated by symbiotic ants at the prostomata, we showed that correspondence of the plant and ant traits forms a morphological and behavioural filter. We have strong evidence for coevolution between the dimensions and shape of the symbiotic ants and the prostoma in one of the three ant-Leonardoxa associations. PMID:11600077
NASA Astrophysics Data System (ADS)
Boll, D. I. R.; Fojón, O. A.
2017-03-01
We study the single photoionization of simple diatomic molecules such as H2+ by a train of attosecond pulses assisted by a near-infrared laser. In particular, we focus on the so-called orbital parity mix interferences leading to asymmetrical electron emission. We employ a non-perturbative model, obtaining for those asymmetries analytical expressions whose functional form is independent of the target structure and which encode the interaction of the photoelectron with the laser field to all orders. Related to these interferences, we give conditions at which a pairwise cancellation of channels opened by the laser field occurs. Finally, we exploit the non-perturbative character of our model to analyze the dependence of the asymmetrical electron emission and the angular distribution of photoelectrons on the laser intensity. An asymmetric inhibition of the emission in the classical direction is found.
Neutron Resonance Parameters and Covariance Matrix of 239Pu
Derrien, Herve; Leal, Luiz C; Larson, Nancy M
2008-08-01
In order to obtain the resonance parameters in a single energy range and the corresponding covariance matrix, a reevaluation of 239Pu was performed with the code SAMMY. The most recent experimental data were analyzed or reanalyzed in the energy range thermal to 2.5 keV. The normalization of the fission cross section data was reconsidered by taking into account the most recent measurements of Weston et al. and Wagemans et al. A full resonance parameter covariance matrix was generated. The method used to obtain realistic uncertainties on the average cross section calculated by SAMMY or other processing codes was examined.